WorldWideScience

Sample records for automated jitter correction

  1. An adaptive feedback controller for transverse angle and position jitter correction in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1993-01-01

    It is desired to design a position and angle jitter control system for pulsed linear accelerators that will increase the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse or pulse-to-pulse correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by a factor of more than 2.5. The system also increased and stabilized the correction at high frequencies, a typical weakness of standard feedback systems.
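
    The interpulse scheme lends itself to a compact numerical illustration: a fixed-gain integrating corrector versus one whose gain is nudged between macropulses. The Python sketch below is purely illustrative; the jitter model and the LMS-style gain-update rule are assumptions, not the paper's analog electronics.

```python
import numpy as np

rng = np.random.default_rng(0)

def macropulse_jitter(n_pulses):
    """Toy pulse-to-pulse position jitter: slow drift plus noise (assumed model)."""
    drift = np.cumsum(rng.normal(0.0, 0.02, n_pulses))
    return drift + rng.normal(0.0, 0.1, n_pulses)

def run_feedback(jitter, gain=0.5, adaptive=False, mu=0.05):
    """Interpulse feedback: the correction applied to each macropulse is
    updated from the previous macropulse's residual. With adaptive=True the
    gain itself is nudged between pulses (LMS-style stand-in for the paper's
    adaptive controller)."""
    correction, residuals = 0.0, []
    for x in jitter:
        r = x - correction            # beam position after correction
        residuals.append(r)
        if adaptive:                  # adjust gain using the latest residual
            gain = float(np.clip(gain + mu * r * x, 0.0, 1.0))
        correction += gain * r        # integrator update for the next pulse
    return np.array(residuals)

j = macropulse_jitter(2000)
print("rms uncorrected :", j.std())
print("rms fixed gain  :", run_feedback(j).std())
print("rms adaptive    :", run_feedback(j, adaptive=True).std())
```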

  2. An adaptive feedback controller for transverse angle and position jitter correction in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1992-01-01

    It is desired to design a position and angle jitter control system for pulsed linear accelerators that will increase the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse or pulse-to-pulse correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by a factor of more than 2.5. The system also increased and stabilized the correction at high frequencies, a typical weakness of standard feedback systems.

  3. Automated jitter correction for IR image processing to assess the quality of W7-X high heat flux components

    International Nuclear Information System (INIS)

    Greuner, H; De Marne, P; Herrmann, A; Boeswirth, B; Schindler, T; Smirnow, M

    2009-01-01

    An automated IR image processing method was developed to evaluate the surface temperature distribution of cyclically loaded high heat flux (HHF) plasma facing components. IPP Garching will perform the HHF testing of a high percentage of the series production of the WENDELSTEIN 7-X (W7-X) divertor targets to minimize the number of undetected defects in the finally installed components. The HHF tests will be performed as quality assurance (QA), complementary to the non-destructive examination (NDE) methods used during manufacturing. The IR analysis of an HHF-loaded component detects growing debonding of the plasma facing material, made of carbon fibre composite (CFC), after a few thermal cycles. For the prototype testing, the IR data were processed manually. However, a QA method requires a reliable, reproducible and efficient automated procedure. Using the example of the HHF testing of W7-X pre-series target elements, the paper describes the developed automated IR image processing method. The algorithm is based on an iterative two-step correlation analysis with an individually defined reference pattern for the determination of the jitter.
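
    The core of the method, locating each frame's offset against a reference pattern by correlation, can be sketched in a few lines. The FFT-based, single-pass shift estimator below is a simplification of the paper's iterative two-step correlation analysis; the toy images and the integer-pixel assumption are ours.

```python
import numpy as np

def estimate_shift(frame, reference):
    """Locate the (dy, dx) jitter of `frame` relative to `reference` at the
    peak of their FFT-based cross-correlation (single pass; the paper's
    method iterates this against an individually defined reference pattern)."""
    xcorr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    if dy > frame.shape[0] // 2:      # map wrapped indices to signed shifts
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return dy, dx

# toy check: shift a random "IR frame" by (3, -5) pixels and recover it
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
print(estimate_shift(np.roll(ref, (3, -5), axis=(0, 1)), ref))   # -> (3, -5)
```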

  4. Localisation of beam offset jitter sources at ATF2

    CERN Document Server

    Pfingstner, J; Patecki, M; Schulte, D; Tomás, R

    2014-01-01

    For the commissioning and operation of modern particle accelerators, automated error detection and diagnostics methods are becoming increasingly important. In this paper, we present two such methods, which are capable of localising sources of beam offset jitter with a combination of correlation studies and so-called degree-of-freedom plots. The methods were applied to the ATF2 beam line at KEK, where one of the major goals is the reduction of the beam offset jitter. Results of this localisation are presented. A major advantage of the presented methods is their high robustness, especially to varying optics parameters. We therefore believe that the developed beam offset jitter localisation methods can be easily applied to other accelerators.

  5. Peripheral refractive correction and automated perimetric profiles.

    Science.gov (United States)

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degrees, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then measured at these eccentricities (stimulus sizes 0 and III), with and without the appropriate peripheral refractive correction, using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  6. On the use of the autocorrelation and covariance methods for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1994-01-01

    It is desired to design a predictive feedforward transverse jitter control system to control both angle and position jitter in pulsed linear accelerators. Such a system will increase the accuracy and bandwidth of correction over that of currently available feedback correction systems. Intrapulse correction is performed. An offline process actually "learns" the properties of the jitter, and uses these properties to apply correction to the beam. The correction weights calculated offline are downloaded to a real-time analog correction system between macropulses. Jitter data were taken at the Los Alamos National Laboratory (LANL) Ground Test Accelerator (GTA) telescope experiment at Argonne National Laboratory (ANL). The experiment consisted of the LANL telescope connected to the ANL ZGS proton source and linac. A simulation of the correction system using this data was shown to decrease the average rms jitter by a factor of two over that of a comparable standard feedback correction system. The system also improved the correction bandwidth.
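
    The offline "learning" stage can be illustrated with the classical autocorrelation (Yule-Walker/Wiener) route to one-step-ahead prediction weights. The sketch below assumes an AR(2) toy jitter signal and a filter order of 8; it is not the GTA hardware implementation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def feedforward_weights(x, order=8):
    """Solve the Yule-Walker (autocorrelation) normal equations for the
    one-step-ahead prediction weights of the jitter signal."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / (n - k) for k in range(order + 1)])
    return solve_toeplitz(r[:order], r[1:order + 1])

# toy jitter: a stable AR(2) process standing in for measured beam positions
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 1.6 * x[t - 1] - 0.64 * x[t - 2] + rng.normal(0.0, 0.05)

w = feedforward_weights(x)
m = len(w)
pred = np.array([w @ x[t - 1::-1][:m] for t in range(m, len(x))])
resid = x[m:] - pred                  # jitter remaining after the feedforward kick
print("rms before:", x[m:].std(), " after:", resid.std())
```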

  7. On the use of the autocorrelation and covariance methods for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1993-01-01

    It is desired to design a predictive feedforward transverse jitter control system to control both angle and position jitter in pulsed linear accelerators. Such a system will increase the accuracy and bandwidth of correction over that of currently available feedback correction systems. Intrapulse correction is performed. An offline process actually "learns" the properties of the jitter, and uses these properties to apply correction to the beam. The correction weights calculated offline are downloaded to a real-time analog correction system between macropulses. Jitter data were taken at the Los Alamos National Laboratory (LANL) Ground Test Accelerator (GTA) telescope experiment at Argonne National Laboratory (ANL). The experiment consisted of the LANL telescope connected to the ANL ZGS proton source and linac. A simulation of the correction system using this data was shown to decrease the average rms jitter by a factor of two over that of a comparable standard feedback correction system. The system also improved the correction bandwidth.

  8. Framework of Jitter Detection and Compensation for High Resolution Satellites

    Directory of Open Access Journals (Sweden)

    Xiaohua Tong

    2014-05-01

    Attitude jitter is a common phenomenon in the application of high resolution satellites, and it may introduce large errors in geo-positioning and mapping accuracy. It is therefore critical to detect and compensate attitude jitter to exploit the full geometric potential of high resolution satellites. In this paper, a framework of jitter detection and compensation for high resolution satellites is proposed and some preliminary investigation is performed. Three methods for jitter detection are presented: (1) the first is based on multispectral images, using the parallax between two different bands in the image; (2) the second is based on stereo images, using rational polynomial coefficients (RPCs); (3) the third is based on panchromatic images, employing orthorectification processing. Based on the calculated parallax maps, the frequency and amplitude of the detected jitter are obtained. Subsequently, two approaches for jitter compensation are presented: (1) the first conducts the compensation on the image, using the derived parallax observations for resampling; (2) the second conducts the compensation on the attitude data, treating the influence of jitter on attitude as a correction of the charge-coupled device (CCD) viewing angles. Experiments with images from several satellites, such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), LRO (Lunar Reconnaissance Orbiter) and ZY-3 (ZiYuan-3), demonstrate the promising performance and feasibility of the proposed framework.
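
    Method (1), parallax-based detection, reduces to estimating a per-row shift between two band images and reading the jitter frequency off its spectrum. The sketch below assumes integer-pixel shifts, a synthetic scene, and an arbitrary line rate.

```python
import numpy as np

def parallax_profile(band_a, band_b, max_shift=5):
    """Per-row shift between two spectral bands. Each image row is acquired
    at a slightly different time, so attitude jitter appears as a
    row-dependent parallax (integer-pixel toy version)."""
    profile = []
    for a, b in zip(band_a, band_b):
        a = a - a.mean()
        b = b - b.mean()
        corr = np.correlate(a, b, mode="full")
        mid = len(b) - 1
        profile.append(np.argmax(corr[mid - max_shift: mid + max_shift + 1]) - max_shift)
    return np.asarray(profile, dtype=float)

# synthetic check: impose a 0.01 cycles/line sinusoidal jitter and recover it
rng = np.random.default_rng(3)
base = rng.random((512, 256))
jit = np.rint(2.0 * np.sin(2 * np.pi * 0.01 * np.arange(512))).astype(int)
band_b = np.stack([np.roll(row, -s) for row, s in zip(base, jit)])
p = parallax_profile(base, band_b)
print("max recovery error [pixels]:", np.abs(p - jit).max())   # ~0

# jitter frequency/amplitude from the parallax map (units: cycles per line)
spec = np.abs(np.fft.rfft(p - p.mean()))
freqs = np.fft.rfftfreq(len(p), d=1.0)
print("dominant frequency:", freqs[spec.argmax()])             # ~0.01
```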

  9. ENERGY CORRECTION FOR HIGH POWER PROTON/H MINUS LINAC INJECTORS.

    Energy Technology Data Exchange (ETDEWEB)

    RAPARIA, D.; LEE, Y.Y.; WEI, J.

    2005-05-16

    High-energy (GeV-range) proton/H-minus linac injectors suffer from energy jitter due to RF amplitude and phase instability. In high-power injectors especially, this energy jitter results in beam losses of more than 1 W/m, exceeding the limit required for hands-on maintenance. Depending upon the requirements of the next accelerator in the chain, this energy jitter may or may not need to be corrected. This paper discusses the sources of this energy jitter and correction schemes, with specific examples.

  10. Note: A new method for directly reducing the sampling jitter noise of the digital phasemeter

    Science.gov (United States)

    Liang, Yu-Rong

    2018-03-01

    Sampling jitter noise is a non-negligible noise source in the digital phasemeter used for space gravitational wave detection missions. This note provides a new method for directly reducing the sampling jitter noise of the digital phasemeter, by adding a dedicated signal whose frequency, amplitude, and initial phase are pre-set. In contrast to the phase correction using the pilot-tone in the work of Burnett, Gerberding et al., Liang et al., Ales et al., Gerberding et al., and Ware et al. [M.Sc. thesis, Luleå University of Technology, 2010; Classical Quantum Gravity 30, 235029 (2013); Rev. Sci. Instrum. 86, 016106 (2015); Rev. Sci. Instrum. 86, 084502 (2015); Rev. Sci. Instrum. 86, 074501 (2015); and Proceedings of the Earth Science Technology Conference (NASA, USA, 2006)], the new method is intrinsically a form of additive noise suppression. The experimental results validate that the new method directly reduces the sampling jitter noise without data post-processing and provides the same phase measurement noise level (10^-6 rad/Hz^(1/2) at 0.1 Hz) as the pilot-tone correction.

  11. Jitter-correction for IR/UV-XUV pump-probe experiments at the FLASH free-electron laser

    International Nuclear Information System (INIS)

    Savelyev, Evgeny; Boll, Rebecca; Bomme, Cedric; Schirmel, Nora; Redlin, Harald

    2017-01-01

    In pump-probe experiments employing a free-electron laser (FEL) in combination with a synchronized optical femtosecond laser, the arrival-time jitter between the FEL pulse and the optical laser pulse often severely limits the temporal resolution that can be achieved. Here, we present a pump-probe experiment on the UV-induced dissociation of 2,6-difluoroiodobenzene (C6H3F2I) molecules performed at the FLASH FEL that takes advantage of recent upgrades of the FLASH timing and synchronization system to obtain high-quality data that are not limited by the FEL arrival-time jitter. We discuss in detail the necessary data analysis steps and describe the origin of the time-dependent effects in the yields and kinetic energies of the fragment ions that we observe in the experiment.

  12. Spacecraft Jitter Attenuation Using Embedded Piezoelectric Actuators

    Science.gov (United States)

    Belvin, W. Keith

    1995-01-01

    Remote sensing from spacecraft requires precise pointing of measurement devices in order to achieve adequate spatial resolution. Unfortunately, various spacecraft disturbances induce vibrational jitter in the remote sensing instruments. The NASA Langley Research Center has performed analyses, simulations, and ground tests to identify the more promising technologies for minimizing spacecraft pointing jitter. These studies have shown that the use of smart materials to reduce spacecraft jitter is an excellent match between a maturing technology and an operational need. This paper describes the use of embedded piezoelectric actuators for vibration control and payload isolation. In addition, recent advances in modeling, simulation, and testing of spacecraft pointing jitter are discussed.

  13. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks make extensive use of highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements, even without on-site rainfall data. Furthermore, it was found that the actual daily average of SWC is altered by the temperature effects of dielectric sensors.

  14. On the use of iterative techniques for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1994-01-01

    It is possible to use feedforward predictive control for transverse position and trajectory-angle jitter correction. The control procedure is straightforward, but creation of the predictive filter is not as obvious. The two processes tested were the least mean squares (LMS) and Kalman filter methods. The controller parameters calculated offline are downloaded to a real-time analog correction system between macropulses. These techniques worked well for both interpulse (pulse-to-pulse) correction and intrapulse (within a pulse) correction, with the Kalman filter method being the clear winner. A simulation based on interpulse data taken at the Stanford Linear Collider showed an improvement factor of almost three in the average rms jitter over standard feedback techniques for the Kalman filter. An improvement factor of over three was found for the Kalman filter on intrapulse data taken at the Los Alamos Meson Physics Facility. The feedforward systems also improved the correction bandwidth.
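
    Of the two filters, the LMS variant is the easier to sketch. Below is a normalized-LMS one-step predictor standing in for the feedforward controller; the quasi-periodic toy jitter, filter order and step size are assumptions.

```python
import numpy as np

def lms_predict(x, order=6, mu=0.5):
    """Normalized-LMS one-step predictor: the weights play the role of the
    feedforward controller parameters computed offline and downloaded
    between macropulses (the exact filter form is an assumption)."""
    w = np.zeros(order)
    pred = np.zeros_like(x)
    for t in range(order, len(x)):
        u = x[t - order:t][::-1]            # most recent samples first
        pred[t] = w @ u
        e = x[t] - pred[t]                  # prediction error drives the update
        w += mu * e * u / (u @ u + 1e-9)    # normalized LMS step
    return pred

rng = np.random.default_rng(4)
t = np.arange(4000)
x = np.sin(2 * np.pi * t / 60) + 0.2 * rng.normal(size=t.size)  # toy jitter
resid = x - lms_predict(x)
print("rms raw:", x.std(), " rms after feedforward:", resid[200:].std())
```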

  15. Effect of jitter on an imaging FTIR spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C. L., LLNL

    1997-04-01

    Line of sight (LOS) jitter produces temporal modulations of the signals which are detected in the focal plane of a temporally modulated imaging Fourier Transform Spectrometer. A theoretical treatment of LOS jitter effects is given, and is compared with the results of measurements with LIFTIRS (the Livermore Imaging Fourier Transform InfraRed Spectrometer). The identification, isolation, quantification and removal of jitter artifacts in hyperspectral imaging data by means of principal components analysis is discussed. The theoretical distribution of eigenvalues expected from principal components analysis is used to determine the level of significance of spatially coherent instrumental artifacts in general, including jitter as a representative example. It is concluded that an imaging FTIR spectrometer is much less seriously impacted by a given LOS jitter level than a non-imaging FTIR spectrometer.

  16. E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2011-01-01

    The proposed article analyses the dependence of MOS, as a voice call quality (QoS) measure, estimated through the ITU-T E-model under real network conditions with jitter. In this paper, a method of modelling the jitter effect is proposed. Jitter, as voice packet time uncertainty, appears as increased packet loss caused by jitter-buffer under- or overflow. Jitter buffer behaviour at the receiver's side is modelled as a Pareto/D/1/K system with Pareto-distributed packet interarrival times, and its performance is experimentally evaluated using statistical tools. The jitter buffer stochastic model is then incorporated into the E-model in an additive manner, accounting for network jitter effects via an excess packet loss complementing the measured network packet loss. The proposed modification of the E-model input parameter adds two degrees of freedom to the modelling: network jitter and jitter buffer size.
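
    The Pareto/D/1/K buffer model can be exercised with a small discrete-event simulation: Pareto interarrivals, a deterministic playout clock, and K slots, with overflow and underflow counted as effective loss. The parameters below (frame time, tail exponent, prefill level) are assumptions for illustration.

```python
import numpy as np

def effective_loss(n=200_000, alpha=2.5, k_slots=4, frame_ms=20.0, seed=5):
    """Pareto/D/1/K jitter buffer: Pareto interarrivals, deterministic playout
    every frame_ms, K slots. Overflow (arrival to a full buffer) and underflow
    (nothing to play at a tick) are both counted as effective loss."""
    rng = np.random.default_rng(seed)
    xm = frame_ms * (alpha - 1) / alpha        # Pareto minimum so the mean = frame_ms
    arrivals = np.cumsum(xm * (1.0 + rng.pareto(alpha, n)))
    start = arrivals[0] + 0.5 * k_slots * frame_ms   # start playout half-full (assumed)
    losses, occupancy, i = 0, 0, 0
    for tick in range(n):
        t = start + tick * frame_ms
        while i < n and arrivals[i] <= t:      # packets arriving before this tick
            if occupancy < k_slots:
                occupancy += 1
            else:
                losses += 1                    # overflow: buffer already full
            i += 1
        if occupancy:
            occupancy -= 1                     # play one frame
        else:
            losses += 1                        # underflow: playout gap
    return losses / n

for k in (2, 4, 8):
    print(f"K={k}: effective loss {effective_loss(k_slots=k):.3%}")
```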

  17. On the use of iterative techniques for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1995-01-01

    It is possible to use feedforward predictive control for transverse position and trajectory-angle jitter correction. The control procedure is straightforward, but creation of the predictive filter is not as obvious. The two processes tested were the least mean squares (LMS) and Kalman filter methods. The controller parameters calculated offline are downloaded to a real-time analog correction system between macropulses. These techniques worked well for both interpulse (pulse-to-pulse) correction and intrapulse (within a pulse) correction, with the Kalman filter method being the clear winner. A simulation based on interpulse data taken at the Stanford Linear Collider showed an improvement factor of almost three in the average rms jitter over standard feedback techniques for the Kalman filter. An improvement factor of over three was found for the Kalman filter on intrapulse data taken at the Los Alamos Meson Physics Facility. The feedforward systems also improved the correction bandwidth. © 1995 American Institute of Physics.

  18. Jitter reduction in Differentiated Services (Diffserv) networks

    NARCIS (Netherlands)

    Karagiannis, Georgios; Rexhepi, Vlora

    2001-01-01

    A method and a computer program for reducing jitter in IP packet transmission in a Diffserv network having ingress and egress Border Routers and using premium service, expedited forwarding and source route option, recognize incoming packets which have firm jitter requirements. The program verifies

  19. Jitter reduction in Differentiated Services (Diffserv) networks

    NARCIS (Netherlands)

    Karagiannis, Georgios; Rexhepi, Vlora

    2005-01-01

    A method and a computer program for reducing jitter in IP packet transmission in a Diffserv network having ingress and egress Border Routers and using premium service, expedited forwarding and source route option, recognize incoming packets which have firm jitter requirements. The program verifies

  20. Robust real-time change detection in high jitter.

    Energy Technology Data Exchange (ETDEWEB)

    Simonson, Katherine Mary; Ma, Tian J.

    2009-08-01

    A new method is introduced for real-time detection of transient change in scenes observed by staring sensors that are subject to platform jitter, pixel defects, variable focus, and other real-world challenges. The approach uses flexible statistical models for the scene background and its variability, which are continually updated to track gradual drift in the sensor's performance and the scene under observation. Two separate models represent temporal and spatial variations in pixel intensity. For the temporal model, each new frame is projected into a low-dimensional subspace designed to capture the behavior of the frame data over a recent observation window. Per-pixel temporal standard deviation estimates are based on projection residuals. The second approach employs a simple representation of jitter to generate pixelwise moment estimates from a single frame. These estimates rely on spatial characteristics of the scene, and are used to gauge each pixel's susceptibility to jitter. The temporal model handles pixels that are naturally variable due to sensor noise or moving scene elements, along with jitter displacements comparable to those observed in the recent past. The spatial model captures jitter-induced changes that may not have been seen previously. Change is declared in pixels whose current values are inconsistent with both models.
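
    The temporal model's subspace projection is easy to prototype: build a low-rank basis from a recent window of frames and flag pixels with large projection residuals. The sketch below uses a plain SVD and a rank of 3; the window length, rank and injected change are assumptions, not the paper's tuned parameters.

```python
import numpy as np

def make_temporal_detector(frames, rank=3):
    """Build a low-rank temporal background model from a recent window of
    frames; return a function mapping a new frame to per-pixel residuals."""
    window = frames.reshape(frames.shape[0], -1)        # frames x pixels
    mean = window.mean(axis=0)
    _, _, vt = np.linalg.svd(window - mean, full_matrices=False)
    basis = vt[:rank]                                   # dominant temporal patterns
    def residual(frame):
        d = frame.ravel() - mean
        return np.abs(d - basis.T @ (basis @ d)).reshape(frame.shape)
    return residual

rng = np.random.default_rng(6)
stack = rng.normal(0.0, 1.0, (30, 64, 64)) + np.linspace(0, 1, 30)[:, None, None]
detect = make_temporal_detector(stack)
test = stack[-1].copy()
test[10, 10] += 8.0                                     # inject a transient change
print(detect(test).argmax() == 10 * 64 + 10)            # change pixel dominates
```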

  1. Essential Technology and Application of Jitter Detection and Compensation for High Resolution Satellites

    Directory of Open Access Journals (Sweden)

    TONG Xiaohua

    2017-10-01

    Satellite jitter is a common and complex phenomenon for on-orbit high resolution satellites, which may affect the mapping accuracy and quality of imagery. A framework of jitter detection and compensation integrating the data processing of multiple sensors is proposed in this paper. Jitter detection is performed based on multispectral imagery, three-line-array imagery, dense ground control and attitude measurement data, and jitter compensation is conducted both on the image and on the attitude with the sensor model. The platform jitter of the ZY-3 satellite is processed and analyzed using the proposed technology, and the results demonstrate the feasibility and reliability of jitter detection and compensation. The variation law analysis indicates that the jitter frequency of ZY-3 remains in the range between 0.6 and 0.7 Hz, while the jitter amplitude dropped from 1 pixel in the early stage to below 0.4 pixels and tended to remain stable in the following stage.

  2. E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2012-01-01

    This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are its computational simplicity and usability on real-time traffic. The cons, as shown in our previous work, are the inability of the E-model to reflect the effects of network jitter present in real traffic flows and of jitter-buffer behavior on end user devices. These effects are visible mostly in traffic over WAN, internet and radio networks, and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss) using a jitter and jitter-buffer model based on a Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model by using the PESQ intrusive measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter and different jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.

  3. Jitter-Robust Orthogonal Hermite Pulses for Ultra-Wideband Impulse Radio Communications

    Directory of Open Access Journals (Sweden)

    Ryuji Kohno

    2005-03-01

    The design of a class of jitter-robust, Hermite polynomial-based, orthogonal pulses for ultra-wideband impulse radio (UWB-IR) communications systems is presented. A unified and exact closed-form expression of the auto- and cross-correlation functions of Hermite pulses is provided. Under the assumption that jitter values are sufficiently smaller than pulse widths, this formula is used to decompose jitter-shifted pulses over an orthonormal basis of the Hermite space. For any given jitter probability density function (pdf), the decomposition yields an equivalent distribution of N-by-N matrices which simplifies the convolutional jitter channel model to a multiplicative matrix model. The design of jitter-robust orthogonal pulses is then transformed into a generalized eigendecomposition problem whose solution is obtained with a Jacobi-like simultaneous diagonalization algorithm applied over a subset of samples of the channel matrix distribution. Examples of the waveforms obtained with the proposed design and their improved auto- and cross-correlation functions are given. Simulation results are presented which demonstrate the superior performance of a pulse-shape modulated (PSM) UWB-IR system using the proposed pulses, over the same system using conventional orthogonal Hermite pulses, in jitter channels with additive white Gaussian noise (AWGN).
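
    The orthonormal Hermite pulse family, and the way timing jitter destroys its orthogonality, can be reproduced directly; this is the effect motivating the paper's jitter-robust redesign. The grid and the 0.2-unit jitter below are arbitrary choices.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_pulse(n, t):
    """Orthonormal Hermite function h_n(t) = H_n(t) exp(-t^2/2) / norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(t, c) * np.exp(-t**2 / 2) / sqrt(2.0**n * factorial(n) * sqrt(pi))

t = np.linspace(-8, 8, 4001)
dt = t[1] - t[0]
pulses = np.array([hermite_pulse(n, t) for n in range(4)])

print(np.round(pulses @ pulses.T * dt, 3))          # ~identity: orthonormal at zero lag

jittered = np.roll(pulses, int(0.2 / dt), axis=1)   # 0.2-unit timing jitter
print(np.round(pulses @ jittered.T * dt, 3))        # off-diagonal leakage appears
```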

  4. EVIDENCE AGAINST AN ECOLOGICAL EXPLANATION OF THE JITTER ADVANTAGE FOR VECTION

    Directory of Open Access Journals (Sweden)

    Stephen Palmisano

    2014-11-01

    Visual-vestibular conflicts have traditionally been used to explain both perceptions of self-motion and experiences of motion sickness. However, sensory conflict theories have been challenged by findings that adding simulated viewpoint jitter to inducing displays enhances (rather than reduces or destroys) visual illusions of self-motion experienced by stationary observers. One possible explanation of this jitter advantage for vection is that jittering optic flows are more ecological than smooth displays. Despite the intuitive appeal of this idea, it has proven difficult to test. Here we compared subjective experiences generated by jittering and smooth radial flows when observers were exposed to either visual-only or multisensory self-motion stimulations. The display jitter (if present) was generated in real time by updating the virtual computer-graphics camera position to match the observer's tracked head motions when treadmill walking or walking in place, or was a playback of these head motions when standing still. As expected, the (more naturalistic) treadmill walking and the (less naturalistic) walking in place were found to generate very different physical head jitters. However, contrary to the ecological account of the phenomenon, playbacks of treadmill walking and walking-in-place display jitter both enhanced visually induced illusions of self-motion to a similar degree (compared to smooth displays).

  5. GBTX Temperature impact on Jitter Implementation on VLDB

    CERN Document Server

    Pecoraro, Cyril

    2015-01-01

    This report was written within the framework of the CERN Summer Student Program. It focuses on jitter measurement over temperature for a GBTx ASIC mounted on a VLDB board. A complete measurement setup was built around a climatic chamber, and various configurations of the chip were tested to characterize skew and cycle-to-cycle jitter.

  6. Automation of one-loop QCD corrections

    CERN Document Server

    Hirschi, Valentin; Frixione, Stefano; Garzelli, Maria Vittoria; Maltoni, Fabio; Pittau, Roberto

    2011-01-01

    We present the complete automation of the computation of one-loop QCD corrections, including UV renormalization, to an arbitrary scattering process in the Standard Model. This is achieved by embedding the OPP integrand reduction technique, as implemented in CutTools, into the MadGraph framework. By interfacing the tool so constructed, which we dub MadLoop, with MadFKS, the fully automatic computation of any infrared-safe observable at the next-to-leading order in QCD is attained. We demonstrate the flexibility and the reach of our method by calculating the production rates for a variety of processes at the 7 TeV LHC.

  7. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. The baseline in a spectrum signal can induce uneven amplitude shifts across different wavenumbers and degrade subsequent analysis, so these amplitude shifts should be compensated before further processing. Many algorithms are used to remove the baseline; however, a fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using simulated, visible and Raman spectrum signals. The results show that AWFPSI gives better accuracy and has the advantage of easy use.

  8. Engineering high reliability, low-jitter Marx generators

    International Nuclear Information System (INIS)

    Schneider, L.X.; Lockwood, G.J.

    1985-01-01

    Multimodule pulsed power accelerators typically require high module reliability and nanosecond-regime simultaneity between modules. Energy storage using bipolar Marx generators can meet these requirements. Experience gained from computer simulations and the development of the DEMON II Marx generator has led to a fundamental understanding of the operation of these multistage devices. As a result of this research, significant improvements in erection time jitter and reliability have been realized in multistage, bipolar Marx generators. Erection time jitter has been measured as low as 2.5 nanoseconds for the 3.2 MV, 16-stage PBFA I Marx and 3.5 nanoseconds for the 6.0 MV, 30-stage PBFA II (DEMON II) Marx, while maintaining exceptionally low prefire rates. Performance data are presented from the DEMON II Marx research program, as well as discussions on the use of computer simulations in designing low-jitter Marx generators.

  9. Zero-crossing detector with sub-microsecond jitter and crosstalk

    Science.gov (United States)

    Dick, G. John; Kuhnle, Paul F.; Sydnor, Richard L.

    1990-01-01

    A zero-crossing detector (ZCD) was built and tested with a new circuit design which gives reduced time jitter compared to previous designs. With the new design, time jitter is reduced for the first time to a value which approaches that due to noise in the input amplifying stage. Additionally, with fiber-optic transmission of the output signal, crosstalk between units has been eliminated. The measured values are in good agreement with circuit noise calculations and approximately ten times lower than those for the ZCDs presently installed in the JPL test facility. Crosstalk between adjacent units was reduced even more than the jitter.

  10. Timing Jitter Analysis for Clock recovery Circuits Based on an Optoelectronic Phase-Locked Loop (OPLL)

    DEFF Research Database (Denmark)

    Zibar, Darko; Mørk, Jesper; Oxenløwe, Leif Katsuo

    2005-01-01

    Timing jitter of an OPLL-based clock recovery is investigated. We demonstrate how loop gain, input and VCO signal jitter, loop filter bandwidth and loop time delay influence the jitter of the extracted clock signal.

  11. Passive energy jitter reduction in the cascaded third harmonic generation process

    International Nuclear Information System (INIS)

    Yan, L; Du, Y; You, Y; Sun, X; Wang, D; Hua, J; Shi, J; Lu, W; Huang, W; Chen, H; Tang, C; Huang, Z

    2014-01-01

    In free electron laser (FEL) systems with ultraviolet (UV) laser driven injectors, a highly stable UV source generated through cascaded third harmonic generation (THG) from an infrared (IR) source is a key element in guaranteeing acceptable current jitter at the undulator. In this letter, the negative slope of the THG efficiency for high intensity ultrashort IR pulses is revealed to be a passive stabilization mechanism for energy jitter reduction in the UV. A 2.5-fold reduction of the UV energy jitter is demonstrated in the experiment, and simulations show that the UV energy jitter can be reduced by more than one order of magnitude if the IR energy jitter is less than 3%, with proper design of the THG efficiency curve, fulfilling the challenging requirement for UV laser stability in a broad scope of applications such as the photoinjector of x-ray FELs.

  12. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other effects. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first adaptively determines the structuring element and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible enough to handle different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to the baseline correction of other analytical instrument signals, such as IR spectra and chromatograms.
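
    A one-pass simplification conveys the idea: a grey-scale morphological opening strips upward peaks while tracking the slow baseline drift. The paper's method additionally chooses the structuring element adaptively and iterates; the element width and smoothing below are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter1d

def morphological_baseline(spectrum, width=101, smooth=25):
    """Grey-scale opening strips upward peaks while following the slow drift;
    a moving average then smooths the staircase artifacts of the opening."""
    return uniform_filter1d(grey_opening(spectrum, size=width), size=smooth)

# toy Raman-like signal: two narrow peaks on a curved, drifting baseline
x = np.linspace(0.0, 1.0, 2000)
true_base = 0.5 * np.exp(-2 * x) + 0.3 * x
peaks = np.exp(-0.5 * ((x - 0.3) / 0.004) ** 2) + 0.7 * np.exp(-0.5 * ((x - 0.7) / 0.006) ** 2)
signal = true_base + peaks + np.random.default_rng(7).normal(0.0, 0.01, x.size)

corrected = signal - morphological_baseline(signal)
print("baseline estimation error:", np.abs(corrected - peaks).max())
```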

  13. Photodetection-induced relative timing jitter in synchronized time-lens source for coherent Raman scattering microscopy

    Directory of Open Access Journals (Sweden)

    Jiaqi Wang

    2017-09-01

    A synchronized time-lens source is a novel method to generate optical pulses synchronized to mode-locked lasers, and has found widespread applications in coherent Raman scattering microscopy. The relative timing jitter between the mode-locked laser and the synchronized time-lens source is a key parameter for evaluating the synchronization performance of such laser systems. However, the origins of the relative timing jitter in such systems are not fully determined, which in turn hampers experimental efforts to optimize the synchronization performance. Here, we demonstrate, through theoretical modeling and numerical simulation, that photodetection can be one physical origin of the relative timing jitter. A comparison with the relative timing jitter due to the intrinsic timing jitter of the mode-locked laser is also presented, revealing different qualitative and quantitative behaviors. Based on the nature of this photodetection-induced timing jitter, we further propose several strategies to reduce the relative timing jitter. Our theoretical results provide guidelines for optimizing synchronization performance in experiments.

  14. Injection Bucket Jitter Compensation Using Phase Lock System at Fermilab Booster

    Energy Technology Data Exchange (ETDEWEB)

    Seiya, K. [Fermilab; Drennan, C. [Fermilab; Pellico, W. [Fermilab; Chaurize, S. [Fermilab

    2017-05-12

    The extraction bucket position in the Fermilab Booster is controlled with a cogging process that involves the comparison of the Booster rf count and the Recycler Ring revolution marker. A one-rf-bucket jitter in the extraction bucket position results from the variability of the process that phase-matches the Booster to the Recycler. However, the new slow phase lock process used to lock the frequency and phase of the Booster rf to the Recycler rf has been made digital and programmable, and has been modified to correct the extraction notch position. The beam loss at Recycler injection has been reduced by 20%. Beam studies and the phase lock system are discussed in this paper.

  15. Practical security analysis of continuous-variable quantum key distribution with jitter in clock synchronization

    Science.gov (United States)

    Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua

    2018-03-01

    How to narrow the security gap between theory and practice has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on a practical continuous-variable quantum key distribution (CV-QKD) system. Clock jitter is a random noise which exists permanently in the clock synchronization of a practical CV-QKD system, and it may compromise the system security because of its impact on data sampling and parameter estimation. In particular, the practical security of CV-QKD with different clock jitter against collective attack is analyzed theoretically based on different repetition frequencies; the numerical simulations indicate that clock jitter has more impact in a high-speed scenario. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.

  16. Latency and Jitter Analysis for IEEE 802.11e Wireless LANs

    Directory of Open Access Journals (Sweden)

    Sungkwan Youm

    2013-01-01

    This paper presents a numerical analysis of latency and jitter for IEEE 802.11e wireless local area networks (WLANs) in a saturation condition, using a Markov model. We use this model to explicate how the enhanced distributed coordination function (EDCF) differentiates classes of service and to characterize the probability distribution of the medium access control (MAC) layer packet latency and jitter, on which the quality of voice over Internet protocol (VoIP) calls depends. From the proposed analytic model, we can estimate the number of nodes the system can support while satisfying user demands on latency and jitter.

  17. Radial Velocities of Subgiant Stars and New Astrophysical Insights into RV Jitter

    Science.gov (United States)

    Luhn, Jacob; Bastien, Fabienne; Wright, Jason T.

    2018-01-01

    For nearly 20 years, the California Planet Search (CPS) has simultaneously monitored precise radial velocities and chromospheric activity levels of stars from Keck observatory to search for exoplanets. This sample provides a useful set of stars to better determine the dependence of RV jitter on flicker (which traces surface gravity) first shown in Bastien et al. (2014). We expand upon this initial work by examining a much larger sample of stars covering a much wider range of stellar parameters (effective temperature, surface gravity, and activity, among others). For more than 600 stars, there are enough RV measurements to distinguish this astrophysical jitter from accelerations due to orbital companions. To properly isolate RV jitter from these effects, we must first remove the RV signal due to these companions, including several previously unannounced giant planets around subgiant stars. We highlight some new results from our analysis of the CPS data. A more thorough understanding of the various sources of RV jitter and the underlying stellar phenomena that drive these intrinsic RV variations will enable more precise jitter estimates for RV follow-up targets such as those from K2 or the upcoming TESS mission.

  18. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
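
    The phase-correction half of the problem can be sketched as a small optimization: choose zero- and first-order phase terms that minimize the negative part of the real spectrum. This single-objective toy omits the paper's baseline term and Pareto treatment; the objective function and the test signal are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def phase_correct(spectrum, ppm):
    """Choose zero-/first-order phases minimizing the squared negative part
    of the real spectrum, then return the rephased real spectrum."""
    def cost(p):
        phased = (spectrum * np.exp(1j * (p[0] + p[1] * ppm))).real
        return np.sum(np.minimum(phased, 0.0) ** 2)   # penalize negative intensity
    p = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead").x
    return (spectrum * np.exp(1j * (p[0] + p[1] * ppm))).real

# toy check: dephase two Lorentzian peaks, then recover them
ppm = np.linspace(-5, 5, 4096)
peaks = 1.0 / (1 + ((ppm - 1) / 0.02) ** 2) + 0.5 / (1 + ((ppm + 2) / 0.02) ** 2)
dephased = peaks * np.exp(-1j * (0.7 + 0.3 * ppm))
print("max residual:", np.abs(phase_correct(dephased, ppm) - peaks).max())
```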

  19. Femtosecond precision measurement of laser–rf phase jitter in a photocathode rf gun

    International Nuclear Information System (INIS)

    Shi, Libing; Zhao, Lingrong; Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhu, Pengfei; Xiang, Dao

    2017-01-01

    We report on the measurement of the laser–rf phase jitter in a photocathode rf gun with femtosecond precision. In this experiment four laser pulses with equal separation are used to produce electron bunch trains; then the laser–rf phase jitter is obtained by measuring the variations of the electron bunch spacing with an rf deflector. Furthermore, we show that when the gun and the deflector are powered by the same rf source, it is possible to obtain the laser–rf phase jitter in the gun through measurement of the beam–rf phase jitter in the deflector. Based on these measurements, we propose an effective time-stamping method that may be applied in MeV ultrafast electron diffraction facilities to enhance the temporal resolution.

  20. Femtosecond precision measurement of laser–rf phase jitter in a photocathode rf gun

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Libing; Zhao, Lingrong; Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhu, Pengfei; Xiang, Dao, E-mail: dxiang@sjtu.edu.cn

    2017-03-21

    We report on the measurement of the laser–rf phase jitter in a photocathode rf gun with femtosecond precision. In this experiment four laser pulses with equal separation are used to produce electron bunch trains; then the laser–rf phase jitter is obtained by measuring the variations of the electron bunch spacing with an rf deflector. Furthermore, we show that when the gun and the deflector are powered by the same rf source, it is possible to obtain the laser–rf phase jitter in the gun through measurement of the beam–rf phase jitter in the deflector. Based on these measurements, we propose an effective time-stamping method that may be applied in MeV ultrafast electron diffraction facilities to enhance the temporal resolution.

  1. Identification of amplitude and timing jitter in external-cavity mode-locked semiconductor lasers

    DEFF Research Database (Denmark)

    Mulet, Josep; Mørk, Jesper; Kroh, Marcel

    2004-01-01

    We theoretically and experimentally investigate the dynamics of external-cavity mode-locked semiconductor lasers, focusing on stability properties and the optimization of pulsewidth and timing jitter. A new numerical approach allows us to clearly separate timing and amplitude jitter.

  2. Real-time operating system timing jitter and its impact on motor control

    Science.gov (United States)

    Proctor, Frederick M.; Shackleford, William P.

    2001-12-01

    General-purpose microprocessors are increasingly being used for control applications due to their widespread availability and software support for non-control functions like networking and operator interfaces. Two classes of real-time operating systems (RTOS) exist for these systems. The traditional RTOS serves as the sole operating system, and provides all OS services. Examples include ETS, LynxOS, QNX, Windows CE and VxWorks. RTOS extensions add real-time scheduling capabilities to non-real-time OSes, and provide minimal services needed for the time-critical portions of an application. Examples include RTAI and RTL for Linux, and HyperKernel, OnTime and RTX for Windows NT. Timing jitter is an issue in these systems, due to hardware effects such as bus locking, caches and pipelines, and software effects from mutual exclusion resource locks, non-preemptible critical sections, disabled interrupts, and multiple code paths in the scheduler. Jitter is typically on the order of a microsecond to a few tens of microseconds for hard real-time operating systems, and ranges from milliseconds to seconds in the worst case for soft real-time operating systems. This raises the question of how significant jitter is for controller performance. Naturally, the smaller the scheduling period required for a control task, the more significant the impact of timing jitter. Beyond this intuitive relationship, timing matters more for open-loop control, such as for stepper motors, than for closed-loop control, such as for servo motors. Techniques for measuring timing jitter are discussed, and comparisons between various platforms are presented. Techniques to reduce jitter or mitigate its effects are presented, and the impact of jitter on stepper motor control is analyzed.
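
    Measuring scheduling jitter needs little more than a periodic task and a clock. The Python sketch below illustrates the measurement idea on any OS; the paper's own comparisons were made with RTOS-level test code, and the period and sample count here are arbitrary.

```python
import time
import statistics

def measure_jitter(period_s=0.001, n=2000):
    """Request a fixed-period wakeup and record how far each wakeup lands
    from its ideal deadline; returns (mean, sigma, worst) in microseconds."""
    start = time.perf_counter()
    errors_us = []
    for i in range(1, n + 1):
        deadline = start + i * period_s
        time.sleep(max(0.0, deadline - time.perf_counter()))
        errors_us.append((time.perf_counter() - deadline) * 1e6)
    return statistics.mean(errors_us), statistics.pstdev(errors_us), max(errors_us)

mean_us, sd_us, worst_us = measure_jitter()
print(f"mean {mean_us:.1f} us, sigma {sd_us:.1f} us, worst {worst_us:.1f} us")
```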

  3. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high-CT-number object segmentation using combined region- and boundary-based segmentation; and second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve is then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique.

  4. High reliability low jitter 80 kV pulse generator

    International Nuclear Information System (INIS)

    Savage, Mark Edward; Stoltzfus, Brian Scott

    2009-01-01

    Switching can be considered to be the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and a misfire probability of less than 10^-4. The design of the system and some key measurements are presented, along with the design goals related to high reliability and low jitter.

  5. Sub-nanosecond jitter, repetitive impulse generators for high reliability applications

    International Nuclear Information System (INIS)

    Krausse, G.J.; Sarjeant, W.J.

    1981-01-01

    Low jitter, high reliability impulse generator development has recently become increasingly important for nuclear physics and weapons applications. We describe the research and development of very low jitter (<30 ps), multikilovolt generators for high-reliability, minimum-maintenance trigger applications, utilizing a new class of commercially available high-pressure tetrode thyratrons. The overall system design philosophy is described, followed by a detailed analysis of the subsystem component elements. A multi-variable experimental analysis of this new tetrode thyratron was undertaken, in a low-inductance configuration, as a function of externally available parameters. For specific thyratron trigger conditions, rise times of 18 ns into 6.0-Ω loads were achieved at jitters as low as 24 ps. Using this database, an integrated trigger generator system with a solid-state front-end is described in some detail. The generator was developed to serve as the master trigger generator for a large neutrino detector installation at the Los Alamos Meson Physics Facility.

  6. Low jitter and high power all-active mode-locked lasers

    DEFF Research Database (Denmark)

    Yvind, Kresten; Larsson, David; Christiansen, Lotte Jin

    2003-01-01

    A novel epitaxial design leading to low loss and low gain saturation improves the properties of 40 GHz mode-locked lasers. We obtain 2.8 ps nearly chirp-free pulses with 228 fs jitter and a fiber-coupled power of 7 mW.

  7. The identification of credit card encoders by hierarchical cluster analysis of the jitters of magnetic stripes.

    Science.gov (United States)

    Leung, S C; Fung, W K; Wong, K H

    1999-01-01

    The relative bit density variation graphs of 207 specimen credit cards processed by 12 encoding machines were examined first visually, and then classified by means of hierarchical cluster analysis. Twenty-nine credit cards treated as 'questioned' samples were tested by way of cluster analysis against 'controls' derived from known encoders. It was found that hierarchical cluster analysis provided a high accuracy of identification, with all 29 'questioned' samples classified correctly. On the other hand, although visual comparison of jitter graphs was less discriminating, it was nevertheless capable of giving a reasonably accurate result.
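
    The classification step corresponds to standard agglomerative clustering of the jitter profiles. The sketch below fabricates encoder "signatures" and uses Ward linkage; both are assumptions, since the study does not state the linkage criterion it used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(8)
n_encoders, cards_each, track_len = 4, 10, 60

# fabricated encoder signatures: each machine leaves a characteristic
# bit-density variation pattern along the stripe (purely synthetic data)
signatures = rng.normal(0.0, 1.0, (n_encoders, track_len))
profiles = np.vstack([sig + rng.normal(0.0, 0.3, (cards_each, track_len))
                      for sig in signatures])

Z = linkage(profiles, method="ward")                  # agglomerative clustering
labels = fcluster(Z, t=n_encoders, criterion="maxclust")
print(labels.reshape(n_encoders, cards_each))         # cards from one encoder share a label
```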

  8. An Experimental Study of a Low-Jitter Pulsed Electromagnetic Plasma Accelerator

    Science.gov (United States)

    Thio, Y. C. Francis; Lee, Michael; Eskridge, Richard; Smith, James; Martin, Adam; Rodgers, Stephen L. (Technical Monitor)

    2001-01-01

    An experimental plasma accelerator for a variety of applications under development at the NASA Marshall Space Flight Center is described. The accelerator is a pulsed plasma thruster and has been tested experimentally and plasma jet velocities of approximately 50 kilometers per second have been obtained. The plasma jet structure has been photographed with 10 ns exposure times to reveal a stable and repeatable plasma structure. Data for velocity profile information has been obtained using light pipes embedded in the gun walls to record the plasma transit at various barrel locations. Preliminary spatially resolved spectral data and magnetic field probe data are also presented. A high speed triggering system has been developed and tested as a means of reducing the gun "jitter". This jitter has been characterized and future work for second generation "ultra-low jitter" gun development is identified.

  9. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses.

    Science.gov (United States)

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

    To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocols under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB. Despite the induced mydriasis and the optical design of the multifocal lens used, when the visual field test is performed on eyes with large pupils wearing refractive multifocal CLs, distance correction without additional near correction is recommended.

  10. Voxel-based morphometry and automated lobar volumetry: The trade-off between spatial scale and statistical correction

    Science.gov (United States)

    Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.

    2011-01-01

    Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660

  11. Automated aberration correction of arbitrary laser modes in high numerical aperture systems

    OpenAIRE

    Hering, Julian; Waller, Erik H.; Freymann, Georg von

    2016-01-01

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture system.

  12. Automated aberration correction of arbitrary laser modes in high numerical aperture systems.

    Science.gov (United States)

    Hering, Julian; Waller, Erik H; Von Freymann, Georg

    2016-12-12

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for amplitude and phase of the pupil function our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography also applications like optical tweezers and microscopy might benefit from the method presented.

  13. Software-controlled, highly automated intrafraction prostate motion correction with intrafraction stereographic targeting: System description and clinical results

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C. J. de; Rajan, Vinayakrishnan; Dirkx, Maarten L. P.; Os, Marjolein J. H. van; Incrocci, Luca; Heijmen, Ben J. M.

    2012-01-01

    Purpose: A new system for software-controlled, highly automated correction of intrafraction prostate motion, "intrafraction stereographic targeting" (iSGT), is described and evaluated. Methods: At our institute, daily prostate positioning is routinely performed at the start of the treatment beam using stereographic targeting (SGT). iSGT was implemented by extension of the SGT software to facilitate fast and accurate intrafraction motion corrections with minimal user interaction. iSGT entails megavoltage (MV) image acquisitions with the first segment of selected IMRT beams, automatic registration of implanted markers, followed by remote couch repositioning to correct for intrafraction motion above a predefined threshold, prior to delivery of the remaining segments. For a group of 120 patients, iSGT with corrections for two nearly lateral beams was evaluated in terms of workload and impact on effective intrafraction displacements in the sagittal plane. Results: SDs of systematic (Σ) and random (σ) displacements relative to the planning CT were measured directly after initial SGT setup correction. With iSGT, the effective SDs of the remaining intrafraction displacements (Σ_eff and σ_eff) were reduced to <0.7 mm, with corrections required in 82.4% of the fractions. Because iSGT is highly automated, the extra time added by iSGT is <30 s if a correction is required. Conclusions: Without increasing imaging dose, iSGT successfully reduces intrafraction prostate motion with minimal workload and increase in fraction time. An action level of 2 mm is recommended.
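
    The correction step described above is simple enough to sketch. The snippet below is a minimal illustration of the iSGT decision logic, not the clinical software; the 2 mm action level is taken from the abstract, while the names and the per-axis threshold test are assumptions.

```python
# Minimal sketch of the iSGT decision logic: measure the intrafraction
# displacement from implanted-marker registration, and reposition the
# couch only when the displacement exceeds the predefined action level.
from dataclasses import dataclass
from typing import Optional

ACTION_LEVEL_MM = 2.0  # action level recommended in the study

@dataclass
class Shift:
    x: float  # mm
    y: float  # mm
    z: float  # mm

def isgt_correction(measured: Shift) -> Optional[Shift]:
    """Return the couch shift to apply, or None to deliver as-is."""
    if max(abs(measured.x), abs(measured.y), abs(measured.z)) <= ACTION_LEVEL_MM:
        return None                      # within tolerance: no correction
    return Shift(-measured.x, -measured.y, -measured.z)  # negate the error

print(isgt_correction(Shift(0.4, 2.6, -0.9)))  # -> Shift(x=-0.4, y=-2.6, z=0.9)
```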

  14. Simulations of chopper jitter at the LET neutron spectrometer at the ISIS TS2

    DEFF Research Database (Denmark)

    Klenø, Kaspar Hewitt; Lefmann, Kim; Willendrup, Peter Kjær

    2014-01-01

    The effect of uncertainty in chopper phasing (jitter) has been investigated for the high-resolution time-of-flight spectrometer LET at the ISIS second target station. The investigation is carried out using virtual experiments with the neutron simulation package McStas, in which the chopper jitter is included.

  15. Influence of P300 latency jitter on event related potential-based brain-computer interface performance

    Science.gov (United States)

    Aricò, P.; Aloise, F.; Schettini, F.; Salinari, S.; Mattia, D.; Cincotti, F.

    2014-06-01

    Objective. Several ERP-based brain-computer interfaces (BCIs) that can be controlled even without eye movements (covert attention) have been recently proposed. However, when compared to similar systems based on overt attention, they displayed significantly lower accuracy. In the current interpretation, this is ascribed to the absence of the contribution of short-latency visual evoked potentials (VEPs) in the tasks performed in the covert attention modality. This study aims to investigate if this decrement (i) is fully explained by the lack of VEP contribution to the classification accuracy; (ii) correlates with lower temporal stability of the single-trial P300 potentials elicited in the covert attention modality. Approach. We evaluated the latency jitter of P300 evoked potentials in three BCI interfaces exploiting either overt or covert attention modalities in 20 healthy subjects. The effect of attention modality on the P300 jitter, and the relative contribution of VEPs and P300 jitter to the classification accuracy have been analyzed. Main results. The P300 jitter is higher when the BCI is controlled in covert attention. Classification accuracy negatively correlates with jitter. Even disregarding short-latency VEPs, overt-attention BCI yields better accuracy than covert. When the latency jitter is compensated offline, the difference between accuracies is not significant. Significance. The lower temporal stability of the P300 evoked potential generated during the tasks performed in covert attention modality should be regarded as the main contributing explanation of lower accuracy of covert-attention ERP-based BCIs.
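
    The offline latency-jitter compensation mentioned in the results can be illustrated with a simple template-alignment sketch. This is an assumption about the general approach (cross-correlation alignment of single-trial epochs), not the authors' exact method.

```python
# Minimal sketch: compensate single-trial P300 latency jitter by shifting
# each epoch to the lag that maximizes its correlation with a template
# (e.g., the grand-average ERP), before averaging or classification.
import numpy as np

def compensate_latency_jitter(trials: np.ndarray, template: np.ndarray,
                              max_shift: int = 50) -> np.ndarray:
    """trials: (n_trials, n_samples); returns latency-aligned copies."""
    aligned = np.empty_like(trials)
    shifts = np.arange(-max_shift, max_shift + 1)
    for i, trial in enumerate(trials):
        corrs = [float(np.dot(np.roll(trial, s), template)) for s in shifts]
        aligned[i] = np.roll(trial, int(shifts[np.argmax(corrs)]))
    return aligned
```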

  16. Longitudinal Jitter Analysis of a Linear Accelerator Electron Gun

    Directory of Open Access Journals (Sweden)

    MingShan Liu

    2016-11-01

    We present measurements and analysis of the longitudinal timing jitter of a Beijing Electron Positron Collider (BEPCII) linear accelerator electron gun. We simulated the longitudinal jitter effect of the gun using PARMELA to evaluate beam performance, including beam profile, average energy, energy spread, and XY emittances. The maximum percentage differences of the beam parameters are calculated to be 100%, 13.27%, 42.24%, 65.01%, and 86.81%, respectively. Due to this, the bunching efficiency is reduced to 54%, while the longitudinal phase difference of the reference particle was 9.89°. The simulation results are in agreement with tests and are helpful for optimizing the beam parameters by tuning the trigger timing of the gun during the bunching process.

  17. High reliability low jitter 80 kV pulse generator

    Directory of Open Access Journals (Sweden)

    M. E. Savage

    2009-08-01

    Switching can be considered to be the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10^{-4}. The design of the system and some key measurements are shown in the paper.

  18. Analysis of jitter due to call-level fluctuations

    NARCIS (Netherlands)

    M.R.H. Mandjes (Michel)

    2005-01-01

    In communication networks used by constant bit rate applications, call-level dynamics (i.e., entering and leaving calls) lead to fluctuations in the load, and therefore also fluctuations in the delay (jitter). By intentionally delaying the packets at the destination, one can transform this variable delay into a fixed delay.

  19. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Reducing packet jitter is important for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in their packet coding algorithms. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity, and the loss of some potential coding opportunities may degrade the contribution of network coding to jitter performance. In addition, most existing coding-aware routing algorithms assume that all flows participating in the network have equal rates, which is unrealistic, since multi-rate environments often appear. To overcome these problems and extend coding-aware routing to multi-rate scenarios, from the viewpoint of data transmission we present a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which decides when to code packets at the coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework that merges the single-rate and multiple-rate cases. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio, and network throughput under network congestion at any traffic rate.
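
    The queue-length-based threshold policy can be sketched as follows. This is an illustrative reading of the decision rule, not the published algorithm; the threshold value and function names are assumptions.

```python
# Minimal sketch of a queue-length-based threshold policy at the coding
# node: unlike plain ONC (which never waits), a packet may be held for a
# future coding opportunity, but only while the queue stays short.
from collections import deque

QUEUE_THRESHOLD = 8  # illustrative value

def dispatch(queue: deque, partner_packet_available: bool):
    """Return ('coded'|'plain', packet) to transmit, or None to wait."""
    if not queue:
        return None
    if partner_packet_available:
        return ("coded", queue.popleft())   # XOR-code with the partner flow
    if len(queue) > QUEUE_THRESHOLD:
        return ("plain", queue.popleft())   # stop waiting: bound the jitter
    return None                             # short queue: wait for a coding
                                            # opportunity
```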

  20. Design and implementation of high-precision and low-jitter programmable delay circuitry

    International Nuclear Information System (INIS)

    Gao Yuan; Cui Ke; Zhang Hongfei; Luo Chunli; Yang Dongxu; Liang Hao; Wang Jian

    2011-01-01

    A programmable delay circuit design with high precision, low jitter, a wide programmable range, and low power is introduced. The delay circuitry uses a two-part scheme: a coarse delay and a fine delay, which can be controlled separately. Different coarse-delay chips provide different maximum programmable ranges, and the fine-delay chip has a minimum step of 10 ps. The jitter of the whole circuit is less than 100 ps. The design has been successfully applied in a Quantum Key Distribution experiment. (authors)
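
    The two-stage scheme lends itself to simple arithmetic: a target delay is split between the coarse stage and the 10 ps fine stage. The sketch below illustrates this; only the 10 ps fine step comes from the abstract, while the coarse step size is an assumed value.

```python
# Minimal sketch: split a requested delay between a coarse stage and a
# fine stage with 10 ps resolution (fine step from the abstract; the
# coarse step of 10 ns is an assumed value for illustration).
COARSE_STEP_PS = 10_000   # assumed coarse-stage step (10 ns)
FINE_STEP_PS = 10         # minimum fine step quoted in the abstract

def delay_settings(target_ps: int) -> tuple[int, int]:
    coarse = target_ps // COARSE_STEP_PS
    fine = round((target_ps - coarse * COARSE_STEP_PS) / FINE_STEP_PS)
    return coarse, fine    # stage settings; residual error < 5 ps

print(delay_settings(123_456))  # -> (12, 346): 120,000 ps + 3,460 ps
```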

  1. A low jitter supply regulated charge pump PLL with self-calibration

    International Nuclear Information System (INIS)

    Chen Min; Li Zhichao; Xiao Jingbo; Chen Jie; Liu Yuntao

    2016-01-01

    This paper describes a ring-oscillator-based low jitter charge pump PLL with supply regulation and digital calibration. In order to combat power supply noise, a low-dropout voltage regulator is implemented. The VCO gain is tunable using a 4-bit self-calibration technique, so that the optimal VCO gain is automatically selected and process/temperature variation is compensated. Fabricated in a 0.13 μm CMOS process, the PLL achieves a frequency range of 100–400 MHz and occupies a 190 × 200 μm² area. The measured RMS jitter is 5.36 ps at a 400 MHz operating frequency. (paper)

  2. Novel design of low-jitter 10 GHz all-active monolithic mode-locked lasers

    DEFF Research Database (Denmark)

    Larsson, David; Yvind, Kresten; Christiansen, Lotte Jin

    2004-01-01

    Using a novel design, we have fabricated 10 GHz all-active monolithic mode-locked semiconductor lasers that generate 1.4 ps pulses with record-low timing jitter. The dynamical properties of lasers with 1 and 2 QWs are compared.

  3. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses

    Directory of Open Access Journals (Sweden)

    Kazunori Hirasawa

    2017-10-01

    AIM: To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. METHODS: Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). RESULTS: The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB. CONCLUSION: Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicated that, when the dome-shaped visual field test is performed with eyes with large pupils and wearing refractive multifocal CLs, distance correction without additional near correction is recommended.

  4. The development of high-voltage repetitive low-jitter corona stabilized triggered switch

    Science.gov (United States)

    Geng, Jiuyuan; Yang, Jianhua; Cheng, Xinbing; Yang, Xiao; Chen, Rong

    2018-04-01

    The high-power switch plays an important part in a pulsed power system. As pulsed power technology trends toward modularization, miniaturization, and precise control, more stringent requirements are placed on the electrical triggering and jitter of the switch. A high-power, low-jitter corona-stabilized triggered switch (CSTS) is designed in this paper. This kind of CSTS is based on the corona stabilization mechanism, and it can be used as the main switch of an intense electron-beam accelerator (IEBA). Its main features are the use of an annular trigger electrode instead of a traditional needle-like trigger electrode, main and side trigger rings to fix the discharging channels, and an SF6/N2 gas mixture as the operating gas. In this paper, the strength of the local field enhancement was changed via the trigger electrode protrusion length Dp. The differences in self-breakdown voltage and its stability, delay time jitter, trigger requirements, and operating range of the switch were compared, and the effect of different SF6/N2 mixture ratios on switch performance was explored. The experimental results show that with 15% SF6 at a pressure of 0.2 MPa, the hold-off voltage of the switch is 551 kV, the operating range is 46.4%-93.5% of the self-breakdown voltage, the jitter is 0.57 ns, and the minimum trigger voltage requirement is 55.8% of the peak. At present, the CSTS has been successfully applied to an IEBA for long-duration operation.

  5. An innovative scintillation process for correcting, cooling, and reducing the randomness of waveforms

    International Nuclear Information System (INIS)

    Shen, J.

    1991-01-01

    Research activities were concentrated on an innovative scintillation technique for high-energy collider detection. Heretofore, scintillation waveform data of high-energy physics events have been problematically random; this represents a bottleneck in data flow for the next generation of detectors for proton colliders like the SSC or LHC. The prevailing problems to resolve were: (1) additional time walk and jitter resulting from the random hitting positions of particles, (2) increased walk and jitter caused by scintillation photon propagation dispersions, and (3) quantum fluctuations of luminescence. However, these were manageable once the different aspects of randomness had been clarified in greater detail; for this purpose, the three were defined as pseudorandomness, quasi-randomness, and real randomness, respectively. A unique scintillation counter incorporating long scintillators with light guides, a drift chamber, and fast discriminators plus integrators was employed to resolve problem (1), correcting time walk and reducing the additional jitter by establishing an analytical waveform description V(t,z) for a measured position z. Problem (2) was resolved by reducing jitter through compression of V(t,z) with a nonlinear medium, called cooling scintillation. A resolution of problem (3) was proposed by orienting and polarizing the scintillating molecules using intense magnetic technology, called stabilizing the waveform.

  6. Influence of incident light wavelength on time jitter of fast photomultipliers

    International Nuclear Information System (INIS)

    Moszynski, M.; Vacher, J.

    1977-01-01

    The study of the single photoelectron time resolution as a function of the wavelength of the incident light was performed for a 56 CVP photomultiplier having an S-1 photocathode. The light flash from the XP22 light-emitting-diode generator was passed through passband filters and illuminated the 5 mm diameter central part of the photocathode. A significant increase of more than 30% in the time resolution was observed when the wavelength of the incident light was changed from 790 nm to 580 nm. This gives experimental evidence that the time jitter resulting from the spread of the initial velocity of photoelectrons is proportional to the square root of the maximal initial energy of the photoelectrons. Based on this conclusion, the time jitter of C31024, RCA8850 and XP2020 photomultipliers measured with the XP22 light-emitting diode at a 560 nm light wavelength was recalculated to estimate the time jitter at 400 nm, near the maximum of the photocathode sensitivity. This shows an almost twice larger time spread at 400 nm for the C31024 and RCA8850, which have a high-gain first dynode, and an about 1.5 times larger time spread for the XP2020 photomultiplier, than those measured at 560 nm. (Auth.)
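
    The square-root scaling quoted above can be written out explicitly. In the worked form below, the effective photoemission threshold W is an assumed illustrative value; the abstract quotes only the resulting ratios.

    \[
    \sigma_t(\lambda) \propto \sqrt{E_{\max}(\lambda)}, \qquad
    E_{\max}(\lambda) = \frac{hc}{\lambda} - W, \qquad
    hc \approx 1240~\mathrm{eV\,nm}
    \]
    \[
    \frac{\sigma_t(400\,\mathrm{nm})}{\sigma_t(560\,\mathrm{nm})}
    = \sqrt{\frac{1240/400 - W}{1240/560 - W}} \approx 1.5
    \quad \text{for an assumed } W \approx 1.5~\mathrm{eV}
    \]

    With this assumed threshold the ratio matches the roughly 1.5-times-larger spread quoted for the XP2020; a larger effective threshold pushes the ratio toward the factor of about two quoted for the high-gain first-dynode tubes.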

  7. Conceptual structural-logic diagram for automating expert studies on the correctness of corporate profit tax calculation

    Directory of Open Access Journals (Sweden)

    Andrey N. Ishchenko

    2014-01-01

    In this article, the possibility of automating expert studies on the correctness of the calculation of corporate profit tax is considered. The problems of formalizing expert research in this field are examined, and the structure of the expert conclusion is specified. The author proposes a conceptual structural-logic diagram for automating expert research in this area.

  8. High time resolution beam-based measurement of the rf-to-laser jitter in a photocathode rf gun

    Directory of Open Access Journals (Sweden)

    Zhen Zhang

    2014-03-01

    Characterizing the rf-to-laser jitter in the photocathode rf gun and its possible origins is important for improving the synchronization and beam quality of the linac based on the photocathode rf gun. A new method based on the rf compression effect in the photocathode rf gun is proposed to measure the rf-to-laser jitter in the gun. By taking advantage of the correlation between the rf compression and the laser injection phase, the error caused by the jitter of the accelerating field in the gun is minimized and thus 10 fs time resolution is expected. Experimental demonstration at the Tsinghua Thomson scattering x-ray source with a time resolution better than 35 fs is reported in this paper. The experimental results are successfully used to obtain information on the possible cause of the jitter and the accompanying drifts.

  9. Concentric needle single fiber electromyography: normative jitter values on voluntary activated Extensor Digitorum Communis

    Directory of Open Access Journals (Sweden)

    João Aris Kouyoumdjian

    2007-06-01

    Single fiber electromyography (SFEMG) is the most sensitive clinical neurophysiological test for neuromuscular junction disorders, particularly myasthenia gravis. Normal values for jitter obtained with the SFEMG electrode have been published, but there are few publications for the concentric needle electrode (CNE). The aim of this study was to discuss the possibilities of analysing jitter in CNE recordings and to obtain normal jitter values for the voluntarily activated Extensor Digitorum Communis using disposable CNE. Fifty normal subjects were studied, 16 male and 34 female, with a mean age of 37.1±10.3 years (19-55). The jitter values of action potential pairs of isolated muscle fibers were expressed as the mean consecutive difference (MCD) after 20 analysed potential pairs. The mean MCD (n=50) obtained was 24.2±2.8 µs (the range of mean values in each subject was 18-31); the upper 95% confidence limit is 29.8 µs. The mean jitter of all potential pairs (n=1000) was 24.07±7.30 µs (range 9-57); a practical upper limit for individual data is set at 46 µs. The mean interpotential interval (MIPI) was 779±177 µs (range of individual mean values 530-1412); there were no potentials with impulse blocking. The present study confirms that CNE is suitable for jitter analysis, although certain precautions must be mentioned. Our findings of jitter values with CNE were similar to the few other reports in the literature.

  10. Dynamics of the Drosophila circadian clock: theoretical anti-jitter network and controlled chaos.

    Directory of Open Access Journals (Sweden)

    Hassan M Fathallah-Shaykh

    BACKGROUND: Electronic clocks exhibit undesirable jitter, or time variations in periodic signals. The circadian clocks of humans, some animals, and plants consist of oscillating molecular networks with a peak-to-peak time of approximately 24 hours. Clockwork orange (CWO) is a transcriptional repressor of Drosophila direct target genes. METHODOLOGY/PRINCIPAL FINDINGS: Theory and data from a model of the Drosophila circadian clock support the idea that CWO controls anti-jitter negative circuits that stabilize peak-to-peak time in light-dark cycles (LD). The orbit is confined to chaotic attractors in both LD and dark cycles and is almost periodic in LD; furthermore, CWO diminishes the Euclidean dimension of the chaotic attractor in LD. Light resets the clock each day by restricting each molecular peak to the proximity of a prescribed time. CONCLUSIONS/SIGNIFICANCE: The theoretical results suggest that chaos plays a central role in the dynamics of the Drosophila circadian clock and that a single molecule, CWO, may sense jitter and repress it by its negative loops.

  11. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  12. Microphone triggering circuit for elimination of mechanically induced frequency-jitter in diode laser spectrometers: implications for quantitative analysis.

    Science.gov (United States)

    Sams, R L; Fried, A

    1987-09-01

    An electronic timing circuit using a microphone triggering device has been developed for the elimination of mechanically induced frequency jitter in diode laser spectrometers employing closed-cycle refrigerators. Mechanical compressor piston shocks are detected by the microphone and actuate an electronic circuit which interrupts data acquisition until the mechanical vibrations are completely quenched. In this way, laser sweeps contaminated by compressor frequency jitter are not co-averaged. Employing this circuit, measured linewidths were in better agreement with those calculated. The importance of eliminating this mechanically induced frequency jitter when carrying out quantitative diode laser measurements is further discussed.

  13. Femtosecond resolution timing jitter correction on a TW scale Ti:sapphire laser system for FEL pump-probe experiments.

    Science.gov (United States)

    Csatari Divall, Marta; Mutter, Patrick; Divall, Edwin J; Hauri, Christoph P

    2015-11-16

    Intense ultrashort pulse lasers are increasingly used for fs-resolution pump-probe experiments at large-scale facilities such as free electron lasers (FELs). Measurement of the arrival time of the laser pulses, and stabilization to the machine or to other sub-systems on the target, is crucial for high time-resolution measurements. In this work we report on a single-shot, spectrally resolved, non-collinear cross-correlator with sub-fs resolution. With a feedback applied, we keep the output of the TW-class Ti:sapphire amplifier chain in time with the seed oscillator at the ~3 fs RMS level for several hours. This is well below the typical pulse duration used at FELs and supports fs-resolution pump-probe experiments. Short-term jitter and long-term timing drift measurements are presented. Applicability to other wavelengths and integration into the timing infrastructure of the FEL are also covered to show the full potential of the device.

  14. Communication Networks - Analysis of jitter due to call-level fluctuations

    NARCIS (Netherlands)

    Mandjes, M.R.H.

    2007-01-01

    In communication networks used by constant bit rate applications, call-level dynamics (i.e., entering and leaving calls) lead to fluctuations in the load, and therefore also fluctuations in the delay (jitter). By intentionally delaying the packets at the destination, one can transform this variable delay into a fixed delay.

  15. Text recognition and correction for automated data collection by mobile devices

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
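
    A knowledge-based correction step of the kind the paper proposes can be sketched as follows. This is a generic illustration (dictionary matching by string similarity), not the authors' KBC implementation; the lexicon and similarity cutoff are assumptions.

```python
# Minimal sketch: snap noisy OCR tokens from receipt images to the
# closest term in a domain lexicon, leaving unmatched tokens unchanged.
import difflib

KNOWN_TERMS = ["TOTAL", "SUBTOTAL", "TAX", "CASH", "CHANGE"]  # assumed lexicon

def correct_token(token: str, cutoff: float = 0.7) -> str:
    match = difflib.get_close_matches(token.upper(), KNOWN_TERMS,
                                      n=1, cutoff=cutoff)
    return match[0] if match else token

print(correct_token("T0TAL"))  # -> 'TOTAL'
print(correct_token("1.99"))   # -> '1.99' (numbers pass through unchanged)
```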

  16. Broadband noise limit in the photodetection of ultralow jitter optical pulses.

    Science.gov (United States)

    Sun, Wenlu; Quinlan, Franklyn; Fortier, Tara M; Deschenes, Jean-Daniel; Fu, Yang; Diddams, Scott A; Campbell, Joe C

    2014-11-14

    Applications with optical atomic clocks and precision timing often require the transfer of optical frequency references to the electrical domain with extremely high fidelity. Here we examine the impact of photocarrier scattering and distributed absorption on the photocurrent noise of high-speed photodiodes when detecting ultralow jitter optical pulses. Despite its small contribution to the total photocurrent, this excess noise can determine the phase noise and timing jitter of microwave signals generated by detecting ultrashort optical pulses. A Monte Carlo simulation of the photodetection process is used to quantitatively estimate the excess noise. Simulated phase noise on the 10 GHz harmonic of a photodetected pulse train shows good agreement with previous experimental data, leading to the conclusion that the lowest phase noise photonically generated microwave signals are limited by photocarrier scattering well above the quantum limit of the optical pulse train.

  17. Variable Delay Element For Jitter Control In High Speed Data Links

    Science.gov (United States)

    Livolsi, Robert R.

    2002-06-11

    A circuit and method for decreasing the amount of jitter present at the receiver input of high speed data links, which uses a driver circuit for input from a high speed data link comprising a logic circuit having: a first section (1) which provides data latches; a second section (2) which generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing of the driver circuit.

  18. Reduction of the jitter of single-flux-quantum time-to-digital converters for time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    Sano, K.; Muramatsu, Y.; Yamanashi, Y.; Yoshikawa, N.; Zen, N.; Ohkubo, M.

    2014-01-01

    Highlights: • We proposed single-flux-quantum (SFQ) time-to-digital converters (TDCs) for TOF-MS. • SFQ TDC can measure time intervals between multiple signals with high-resolution. • SFQ TDC can directly convert the time intervals into binary data. • We designed two types of SFQ TDCs to reduce the jitter. • The jitter is reduced to less than 100 ps. - Abstract: We have been developing a high-resolution superconducting time-of-flight mass spectrometry (TOF-MS) system, which utilizes a superconducting strip ion detector (SSID) and a single-flux-quantum (SFQ) time-to-digital converter (TDC). The SFQ TDC can measure time intervals between multiple input signals and directly convert them into binary data. In our previous study, 24-bit SFQ TDC with a 3 × 24-bit First-In First-Out (FIFO) buffer was designed and implemented using the AIST Nb standard process 2 (STP2), whose time resolution and dynamic range are 100 ps and 1.6 ms, respectively. In this study we reduce the jitter of the TDC by using two different approaches: one uses an on-chip clock generator with an on-chip low-pass filter for reducing the noise in the bias current, and the other uses a low-jitter external clock source at room temperature. We confirmed that the jitter is reduced to less than 100 ps in the latter approach

  19. Reduction of the jitter of single-flux-quantum time-to-digital converters for time-of-flight mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Sano, K., E-mail: sano-kyosuke-cw@ynu.jp [Department Electrical and Computer Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama 240-8501 (Japan); Muramatsu, Y.; Yamanashi, Y.; Yoshikawa, N. [Department Electrical and Computer Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama 240-8501 (Japan); Zen, N.; Ohkubo, M. [Research Institute of Instrumentation Frontier, National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba 305-8568 (Japan)

    2014-09-15

    Highlights: • We proposed single-flux-quantum (SFQ) time-to-digital converters (TDCs) for TOF-MS. • SFQ TDC can measure time intervals between multiple signals with high-resolution. • SFQ TDC can directly convert the time intervals into binary data. • We designed two types of SFQ TDCs to reduce the jitter. • The jitter is reduced to less than 100 ps. - Abstract: We have been developing a high-resolution superconducting time-of-flight mass spectrometry (TOF-MS) system, which utilizes a superconducting strip ion detector (SSID) and a single-flux-quantum (SFQ) time-to-digital converter (TDC). The SFQ TDC can measure time intervals between multiple input signals and directly convert them into binary data. In our previous study, 24-bit SFQ TDC with a 3 × 24-bit First-In First-Out (FIFO) buffer was designed and implemented using the AIST Nb standard process 2 (STP2), whose time resolution and dynamic range are 100 ps and 1.6 ms, respectively. In this study we reduce the jitter of the TDC by using two different approaches: one uses an on-chip clock generator with an on-chip low-pass filter for reducing the noise in the bias current, and the other uses a low-jitter external clock source at room temperature. We confirmed that the jitter is reduced to less than 100 ps in the latter approach.

  20. A Conflict-Free Low-Jitter Guaranteed-Rate MAC Protocol for Base-Station Communications in Wireless Mesh Networks

    Science.gov (United States)

    Szymanski, T. H.

    A scheduling algorithm and MAC protocol which provides low-jitter guaranteed-rate (GR) communications between base-stations (BS) in a Wireless Mesh Network (WMN) is proposed. The protocol can provision long-term multimedia services such as VOIP, IPTV, or Video-on-Demand. The time-axis is partitioned into scheduling frames with F time-slots each. A directional antennae scheme is used to provide each directed link with a fixed transmission rate. A protocol such as IntServ is used to provision resources along an end-to-end path of BSs for GR sessions. The guaranteed rates between the BSs are then specified in a doubly stochastic traffic rate matrix, which is recursively decomposed to yield a low-jitter GR frame transmission schedule. In the resulting schedule, the end-to-end delay and jitter are small and bounded, and the cell loss rate due to primary scheduling conflicts is zero. For dual-channel WMNs, the MAC protocol can achieve 100% utilization, as well as near-minimal queueing delays and near-minimal delay jitter. The scheduling time complexity is O(NF log NF), where N is the number of BSs. Extensive simulation results are presented.
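
    The decomposition of a doubly stochastic rate matrix into permutation schedules is classical (Birkhoff-von Neumann); the sketch below illustrates the idea in a greedy form. It is not the paper's O(NF log NF) recursive algorithm; the matching-based peeling and the example matrix are illustrative.

```python
# Minimal sketch: peel a doubly stochastic rate matrix into weighted
# permutation matrices; each permutation is a conflict-free transmission
# pattern served for its weight's share of the scheduling frame.
import numpy as np
from scipy.optimize import linear_sum_assignment

def bvn_decompose(rates: np.ndarray, tol: float = 1e-9):
    R = rates.astype(float).copy()
    terms = []
    while R.sum() > tol:
        rows, cols = linear_sum_assignment(-R)  # max-weight perfect matching
        weight = float(R[rows, cols].min())
        if weight <= tol:
            break
        P = np.zeros_like(R)
        P[rows, cols] = 1.0
        terms.append((weight, P))               # serve pattern P for a
        R -= weight * P                         # 'weight' fraction of frame
    return terms

R = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.25, 0.50],
              [0.25, 0.25, 0.50]])
for w, P in bvn_decompose(R):
    print(f"{w:.2f} of frame ->", np.argmax(P, axis=1))
```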

  1. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    Science.gov (United States)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline-correcting hundreds to thousands of ambient aerosol spectra, given the variability in both environmental mixture composition and PTFE baselines, remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of the PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification
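
    The core of such a protocol — fit the baseline only on analyte-free subregions, then predict it under the analyte bands — can be sketched briefly. The snippet below is a minimal illustration, not the published method: the wavenumber windows, synthetic spectrum, and smoothing parameter are assumptions, and the paper's blank-based parameter selection is omitted.

```python
# Minimal sketch: smoothing-spline baseline correction of an FT-IR
# spectrum. Fit the spline on background (analyte-free) points only,
# evaluate it everywhere, and subtract.
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline_correct(wavenumber, absorbance, background_mask, s=1e-3):
    spline = UnivariateSpline(wavenumber[background_mask],
                              absorbance[background_mask], s=s)
    return absorbance - spline(wavenumber)

wn = np.linspace(1500.0, 4000.0, 2000)
# Assumed background windows; real analyte/background segmentation is
# sample-specific, as the protocol above emphasizes.
background = ((wn > 1800) & (wn < 2600)) | (wn > 3700)

baseline = 1e-8 * (wn - 1500.0) ** 2             # synthetic PTFE-like drift
peak = 0.05 * np.exp(-((wn - 2900.0) / 60.0) ** 2)  # synthetic C-H band
corrected = spline_baseline_correct(wn, baseline + peak, background)
```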

  2. Low-Jitter Clock Multiplication: a Comparison between PLLs and DLLs

    NARCIS (Netherlands)

    van de Beek, R.C.H.; Klumperink, Eric A.M.; Vaucher, Cicero S.; Nauta, Bram

    This paper shows that, for a given power budget, a practical phase-locked loop (PLL)-based clock multiplier generates less jitter than a delay-locked loop (DLL) equivalent. This is due to the fact that the delay cells in a PLL ring oscillator can consume more power per cell than their counterparts in a DLL delay line.

  3. Automated NLO QCD corrections with WHIZARD

    International Nuclear Information System (INIS)

    Weiss, Christian; Siegen Univ.; Chokoufe Nejad, Bijan; Reuter, Juergen; Kilian, Wolfgang

    2015-10-01

    We briefly discuss the current status of NLO QCD automation in the Monte Carlo event generator WHIZARD. The functionality is presented for the explicit study of off-shell top quark production with associated backgrounds at a lepton collider.

  4. Jitter Studies for a 2.4 GeV Light Source Accelerator Using LiTrack

    International Nuclear Information System (INIS)

    Penn, Gregory E.

    2010-01-01

    Electron beam quality is an important factor in the performance of a free electron laser (FEL). Parameters of particular interest are the electron beam energy, slice emittance and energy spread, peak current, and energy chirp. Jitter in average energy is typically many times the slice energy spread. A seeded FEL is sensitive not only to these local properties but also to factors such as shot-to-shot consistency and the uniformity of the energy and current profiles across the bunch. The timing and bunch length jitter should be controlled to maximize the interval of time over which the electron beam can be reliably seeded by a laser to produce good output in the FEL. LiTrack, a one-dimensional tracking code which includes the effect of longitudinal wakefields, is used to study the sensitivity of the accelerator portion of a 2.4 GeV FEL to sources of variability such as the radio frequency (RF) cavities, chicanes, and the timing and efficiency of electron production at the photocathode. The main contributors to jitter in the resulting electron beam are identified and quantified for various figures of merit.

  5. Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via a delta operator approach. The delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the design of correlation filters is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.

  6. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr. Anant Kumar Tiwari; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. An automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated project involves measuring the power factor of the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
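
    The sizing arithmetic behind such a corrector is compact enough to show. The worked example below uses illustrative load values, not figures from the article; the formula Qc = P·(tan φ1 − tan φ2) is the standard reactive-power requirement for raising a load's power factor.

```python
# Worked example: reactive power a capacitor bank must supply to raise
# the power factor of a load from pf1 to pf2 (illustrative values).
import math

P_kw = 100.0                      # real power of the load
pf1, pf2 = 0.70, 0.95             # measured and target power factors

phi1, phi2 = math.acos(pf1), math.acos(pf2)
Qc_kvar = P_kw * (math.tan(phi1) - math.tan(phi2))
print(f"required capacitor bank: {Qc_kvar:.1f} kvar")  # ~69.2 kvar
```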

  7. The effect of jitter on the performance of space coherent optical communication system with Costas loop

    Science.gov (United States)

    Li, Xin; Hong, Yifeng; Wang, Jinfang; Liu, Yang; Sun, Xun; Li, Mi

    2018-01-01

    The numerous communication techniques and optical devices successfully applied in space optical communication systems indicate their good portability. With this portability, the typical coherent demodulation technique of the Costas loop can easily be adopted in such systems. As one component of pointing error, jitter plays an important role in the communication quality of such a system. Here, we obtain the probability density functions (PDFs) for different degrees of jitter and explain their effect on the bit error rate (BER) of a space optical communication system. Under the effect of jitter, we also study the BER of a space coherent optical communication system using a Costas loop for different system parameters: transmission power, divergence angle, receiving diameter, avalanche photodiode (APD) gain, and the phase deviation caused by the Costas loop. Through numerical simulation of this kind of communication system, we demonstrate the relationship between the BER and these parameters, and we present corresponding system optimization methods to enhance communication quality.

  8. Linear Polarization, Circular Polarization, and Depolarization of Gamma-ray Bursts: A Simple Case of Jitter Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Mao, Jirong; Wang, Jiancheng, E-mail: jirongmao@mail.ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, 650011 Kunming, Yunnan Province (China)

    2017-04-01

    Linear and circular polarizations of gamma-ray bursts (GRBs) have been detected recently. We adopt a simplified model to investigate GRB polarization characteristics in this paper. A compressed two-dimensional turbulent slab containing stochastic magnetic fields is considered, and jitter radiation can produce the linear polarization under this special magnetic field topology. Turbulent Faraday rotation measure (RM) of this slab makes strong wavelength-dependent depolarization. The jitter photons can also scatter with those magnetic clumps inside the turbulent slab, and a nonzero variance of the Stokes parameter V can be generated. Furthermore, the linearly and circularly polarized photons in the optical and radio bands may suffer heavy absorptions from the slab. Thus we consider the polarized jitter radiation transfer processes. Finally, we compare our model results with the optical detections of GRB 091018, GRB 121024A, and GRB 131030A. We suggest simultaneous observations of GRB multi-wavelength polarization in the future.

  9. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    Science.gov (United States)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm⁻¹ sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3 × 5 km per TES pixel) but has lower spectral resolution (8 surface-sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care is not taken.

  10. Acquisition and Initial Analysis of H+- and H--Beam Centroid Jitter at LANSCE

    Science.gov (United States)

    Gilpatrick, J. D.; Bitteker, L.; Gulley, M. S.; Kerstiens, D.; Oothoudt, M.; Pillai, C.; Power, J.; Shelley, F.

    2006-11-01

    During the 2005 Los Alamos Neutron Science Center (LANSCE) beam runs, beam current and centroid-jitter data were observed, acquired, analyzed, and documented for both the LANSCE H+ and H- beams. These data were acquired using three beam position monitors (BPMs) from the 100-MeV Isotope Production Facility (IPF) beam line and three BPMs from the Switchyard transport line at the end of the LANSCE 800-MeV linac. The two types of data acquired, intermacropulse and intramacropulse, were analyzed for statistical and frequency characteristics as well as various other correlations including comparing their phase-space like characteristics in a coordinate system of transverse angle versus transverse position. This paper will briefly describe the measurements required to acquire these data, the initial analysis of these jitter data, and some interesting dilemmas these data presented.

  11. Short locking time and low jitter phase-locked loop based on slope charge pump control

    International Nuclear Information System (INIS)

    Guo Zhongjie; Liu Youbao; Wu Longsheng; Wang Xihu; Tang Wei

    2010-01-01

    A novel phase-locked loop (PLL) structure characterized by a short locking time and low jitter is presented; it is realized by generating a linear-slope charge pump current, dependent on monitoring the output of the phase frequency detector (PFD), to implement adaptive bandwidth control. This improved PLL is created by adding a fast start-up circuit and slope current control to a conventional charge pump PLL. First, the fast start-up circuit is enabled to achieve fast pre-charging of the loop filter. Then, when the output pulse of the PFD is larger than a minimum value, the charge pump current is increased linearly by the slope current control to ensure a shorter locking time and lower jitter. Additionally, temperature variation is attenuated by temperature compensation in the charge pump current design. The proposed PLL has been fabricated in a DSP chip based on a 0.35 μm CMOS process. Compared with the classical PLL, the proposed PLL reduces the locking time by 60%, with a low peak-to-peak jitter of 0.3%, over a wide operating temperature range. (semiconductor integrated circuits)

  12. Note: Design and implementation of a home-built imaging system with low jitter for cold atom experiments

    Energy Technology Data Exchange (ETDEWEB)

    Hachtel, A. J.; Gillette, M. C.; Clements, E. R.; Zhong, S.; Weeks, M. R.; Bali, S., E-mail: balis@miamioh.edu [Department of Physics, Miami University, Oxford, Ohio 45056-1866 (United States)

    2016-05-15

    A novel home-built system for imaging cold atom samples is presented using a readily available astronomy camera which has the requisite sensitivity but no timing-control. We integrate the camera with LabVIEW achieving fast, low-jitter imaging with a convenient user-defined interface. We show that our system takes precisely timed millisecond exposures and offers significant improvements in terms of system jitter and readout time over previously reported home-built systems. Our system rivals current commercial “black box” systems in performance and user-friendliness.

  13. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated the setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  14. Timing-jitter reduction in a dispersion-managed soliton system

    International Nuclear Information System (INIS)

    Mu, R.; Grigoryan, V.S.; Menyuk, C.R.; Golovchenko, E.A.; Pilipetskii, A.N.

    1998-01-01

    We found by using Monte Carlo simulations that the timing jitter in a dispersion-managed soliton system decreases as the strength of the dispersion management and hence the ratio of the pulse energy to the pulse bandwidth increases. The results are in qualitative but not quantitative agreement with earlier predictions that the decrease is inversely proportional to the square root of the pulse energy. Using an improved semi-analytical theory, we obtained quantitative agreement with the simulations. copyright 1998 Optical Society of America

  15. Noise Originating from Intra-pixel Structure and Satellite Attitude Jitter on COROT

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Arentoft, Torben; Kjeldsen, Hans

    2006-01-01

    We present a study on noise in space-based photometry originating from sensitivity variations within individual pixels, known as intra-pixel variations, and satellite attitude jitter. We have measured the intra-pixel structure on an e2v 47-20 CCD and made simulations of the effects these structur...

  16. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  17. Timing jitter measurements at the SLC electron source

    International Nuclear Information System (INIS)

    Sodja, J.; Browne, M.J.; Clendenin, J.E.

    1989-03-01

    The SLC thermionic gun and electron source produce a beam of up to 15 × 10^10 e^- in a single S-band bunch. A 170 keV, 2 ns FWHM pulse out of the gun is compressed by means of two subharmonic buncher cavities followed by an S-band buncher and a standard SLAC accelerating section. Ceramic gaps in the beam pipe at the output of the gun allow a measure of the beam intensity and timing. A measurement at these gaps of the timing jitter, with a resolution of <10 ps, is described. 3 refs., 5 figs

  18. Gigahertz repetition rate, sub-femtosecond timing jitter optical pulse train directly generated from a mode-locked Yb:KYW laser.

    Science.gov (United States)

    Yang, Heewon; Kim, Hyoji; Shin, Junho; Kim, Chur; Choi, Sun Young; Kim, Guang-Hoon; Rotermund, Fabian; Kim, Jungwon

    2014-01-01

    We show that a 1.13 GHz repetition rate optical pulse train with 0.70 fs high-frequency timing jitter (integration bandwidth of 17.5 kHz-10 MHz, where the measurement instrument-limited noise floor contributes 0.41 fs in 10 MHz bandwidth) can be directly generated from a free-running, single-mode diode-pumped Yb:KYW laser mode-locked by single-wall carbon nanotube-coated mirrors. To our knowledge, this is the lowest-timing-jitter optical pulse train with gigahertz repetition rate ever measured. If this pulse train is used for direct sampling of 565 MHz signals (Nyquist frequency of the pulse train), the jitter level demonstrated would correspond to the projected effective-number-of-bit of 17.8, which is much higher than the thermal noise limit of 50 Ω load resistance (~14 bits).
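
    The quoted effective number of bits can be sanity-checked from the standard jitter-limited SNR relation; the following few lines are our own arithmetic, not code or a formula taken from the paper.

```python
import math

def enob_from_jitter(f_in_hz, jitter_rms_s):
    """Jitter-limited SNR of a full-scale sine at f_in:
    SNR = -20*log10(2*pi*f_in*t_j), converted to effective bits via
    ENOB = (SNR - 1.76 dB) / 6.02 dB per bit."""
    snr_db = -20.0 * math.log10(2.0 * math.pi * f_in_hz * jitter_rms_s)
    return (snr_db - 1.76) / 6.02

# 565 MHz (Nyquist) input sampled with 0.70 fs rms timing jitter:
print(enob_from_jitter(565e6, 0.70e-15))  # ~18.3 bits -- the same ballpark
# as the 17.8 quoted above (the paper likely folds in further noise terms).
```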

  19. Design of optical axis jitter control system for multi beam lasers based on FPGA

    Science.gov (United States)

    Ou, Long; Li, Guohui; Xie, Chuanlin; Zhou, Zhiqiang

    2018-02-01

    A design of an FPGA-based optical-axis closed-loop control system for coherent combining of multiple laser beams is introduced. The system uses piezoelectric-ceramic fast steering mirrors (FSMs) as actuators and a high-speed CMOS camera that detects the far-field spots of the multiple beams for optical sensing, with an FPGA-based controller providing real-time optical-axis jitter suppression. The algorithms for optical-axis centroid detection and for PID control with anti-integral-windup were realized in the FPGA. The logic circuit was optimized through resource reuse and pipelining, which reduced both the logic resource usage and the delay time, and the closed-loop bandwidth was increased to 100 Hz. Laser jitter below 40 Hz was attenuated by 40 dB. The system is low-cost and works stably.
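
    For readers unfamiliar with the two algorithms named above, here is a compact Python sketch of intensity-weighted centroid detection and a PID controller with integral clamping; the clamping is one common anti-windup choice, since the abstract does not spell out the exact anti-integral-saturation scheme used in the FPGA.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a far-field spot image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

class PID:
    """PID controller whose integral term is clamped to +/- i_limit,
    a common anti-windup strategy."""
    def __init__(self, kp, ki, kd, i_limit):
        self.kp, self.ki, self.kd, self.i_limit = kp, ki, kd, i_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral = float(np.clip(self.integral + err * dt,
                                      -self.i_limit, self.i_limit))
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Per camera frame: err = setpoint_x - centroid(frame)[0];
# mirror_drive = pid.step(err, dt)
```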

  20. Generating a Square Switching Window for Timing Jitter Tolerant 160 Gb/s Demultiplexing by the Optical Fourier Transform Technique

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Galili, Michael; Clausen, A. T.

    2006-01-01

    A square spectrum is optically Fourier transformed into a square pulse in the time domain. This is used to demultiplex a 160 Gb/s data signal with a significant increase in jitter tolerance to 2.6 ps.

  1. Removal of jitter noise in 3D shape recovery from image focus by using Kalman filter.

    Science.gov (United States)

    Jang, Hoon-Seok; Muhammad, Mannan Saeed; Choi, Tae-Sun

    2018-02-01

    In Shape from Focus, one critical factor affecting system application is mechanical vibration of the translational stage, which causes jitter noise along the optical axis. This noise is not detectable by simply observing the image; however, when focus measures are applied, inaccuracies in the depth occur. In this article, jitter noise and focus curves are modeled by a Gaussian distribution and a quadratic function, respectively. A Kalman filter is then designed and applied to eliminate this noise from the focus curves, as a post-processing step after the focus measure application. Experiments with simulated and real objects show the usefulness of the proposed algorithm. © 2017 Wiley Periodicals, Inc.
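
    The abstract does not give the exact state-space model, so as a hedged illustration here is a minimal scalar Kalman filter (random-walk state, Gaussian measurement noise) applied to a synthetic quadratic focus curve corrupted by jitter-like noise; the noise variances `q` and `r` are hypothetical.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter: state x_k = x_{k-1} + w, w~N(0,q);
    measurement z_k = x_k + v, v~N(0,r). Returns the filtered curve."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                    # predict
        g = p / (p + r)           # Kalman gain
        x += g * (zk - x)         # update with the new focus value
        p *= 1.0 - g
        out[k] = x
    return out

# Synthetic quadratic focus curve corrupted by jitter-like Gaussian noise:
pos = np.linspace(-1.0, 1.0, 101)
noisy = (1.0 - pos**2) + 0.05 * np.random.default_rng(0).normal(size=pos.size)
smoothed = kalman_smooth(noisy)  # the peak location is far less perturbed
```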

  2. Jitter reduction of a reaction wheel by management of angular momentum using magnetic torquers in nano- and micro-satellites

    Science.gov (United States)

    Inamori, Takaya; Wang, Jihe; Saisutjarit, Phongsatorn; Nakasuka, Shinichi

    2013-07-01

    Nano- and micro-satellites, which are smaller than conventional large satellites, provide access to space for many satellite developers and are attracting interest because they can be developed over a shorter time period at a lower cost. In most of these missions, the satellites must meet strict attitude requirements for obtaining scientific data under tight constraints on power consumption, space, and weight. In many satellite missions, the jitter of a reaction wheel degrades the performance of the mission detectors and attitude sensors; therefore, jitter should be controlled or isolated to reduce its effect on sensor devices. In conventional standard-sized satellites, tip-tilt mirrors (TTMs) and isolators are used for controlling or isolating the vibrations from reaction wheels; however, it is difficult to use these devices in nano- and micro-satellite missions under the strict power, space, and mass constraints. In this research, the jitter of reaction wheels is instead reduced by using accurate sensors, small reaction wheels, and low reaction wheel rotation frequencies. The objective of a reaction wheel in many satellite missions is the management of the satellite's angular momentum, which increases because of attitude disturbances. If the magnitude of the disturbance is reduced in orbit or on the ground, the angular momentum that the reaction wheels accumulate from attitude disturbances in orbit becomes smaller; therefore, satellites can stabilize their attitude using only smaller reaction wheels or slower rotation speeds, which cause relatively smaller vibration. In nano- and micro-satellite missions, the dominant attitude disturbance is a magnetic torque, which can be cancelled by using magnetic actuators. With the magnetic compensation, the satellite reduces the angular momentum that the reaction wheels gain, and therefore, satellites do

  3. A novel fair active queue management algorithm based on traffic delay jitter

    Science.gov (United States)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quantity of data traffic delivered in the network, congestion control strategies are adopted. Building on a study of many active queue management (AQM) algorithms, this paper proposes a novel AQM algorithm named JFED. JFED can stabilize the queue length at a desirable level by adjusting the output traffic rate and by a reasonable calculation of the packet drop probability based on buffer queue length and traffic jitter, and it accommodates bursty traffic by monitoring packet delay jitter. JFED imposes effective punishment on non-responsive flows with a fully stateless method. To verify its performance, JFED is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.
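
    The published JFED formula is not reproduced in the abstract; the Python sketch below shows one plausible shape of such a rule - a RED-style ramp on queue length whose drop probability is discounted when the measured delay jitter indicates a short-lived burst rather than persistent overload. All thresholds and the discount form are hypothetical, not the paper's.

```python
def drop_probability(queue_len, jitter_s, q_min=50, q_max=200, p_max=0.1,
                     jitter_ref=5e-3, alpha=0.5):
    """Illustrative AQM drop probability (NOT the published JFED rule):
    a RED-like ramp on queue length, discounted when the measured packet
    delay jitter suggests a transient burst rather than persistent load.
    queue_len in packets, jitter_s in seconds; all constants hypothetical."""
    if queue_len <= q_min:
        return 0.0
    if queue_len >= q_max:
        return 1.0
    base = p_max * (queue_len - q_min) / (q_max - q_min)
    burst_discount = 1.0 / (1.0 + alpha * jitter_s / jitter_ref)
    return base * burst_discount
```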

  4. Improved beam jitter control methods for high energy laser systems

    OpenAIRE

    Frist, Duane C.

    2009-01-01

    Approved for public release, distribution unlimited The objective of this research was to develop beam jitter control methods for a High Energy Laser (HEL) testbed. The first step was to characterize the new HEL testbed at NPS. This included determination of natural frequencies and component models which were used to create a Matlab/Simulink model of the testbed. Adaptive filters using Filtered-X Least Mean Squares (FX-LMS) and Filtered-X Recursive Least Square (FX-RLS) were then implement...
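
    As background for the adaptive filters mentioned above, here is a minimal FX-LMS sketch in Python (a generic textbook formulation, not the thesis's Matlab/Simulink model); `sec_path`, standing for the estimated mirror-to-sensor response, and all signal names are our own illustrative assumptions.

```python
import numpy as np

def fxlms(ref, dist, sec_path, n_taps=32, mu=1e-3):
    """Filtered-x LMS jitter canceller sketch.

    ref      : reference signal correlated with the jitter disturbance
    dist     : disturbance measured at the beam position sensor
    sec_path : FIR estimate of the secondary path (mirror -> sensor)
    Returns the residual error at the sensor.
    """
    w = np.zeros(n_taps)                   # adaptive controller taps
    x_hist = np.zeros(n_taps)              # recent reference samples
    fx_hist = np.zeros(n_taps)             # recent filtered-reference samples
    y_hist = np.zeros(len(sec_path))       # recent controller outputs
    x_filt = np.convolve(ref, sec_path)[: len(ref)]  # ref through S-hat
    err = np.zeros(len(ref))
    for k in range(len(ref)):
        x_hist = np.roll(x_hist, 1);  x_hist[0] = ref[k]
        fx_hist = np.roll(fx_hist, 1); fx_hist[0] = x_filt[k]
        y = w @ x_hist                     # drive for the steering mirror
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        err[k] = dist[k] + sec_path @ y_hist   # sensor sees dist + S*y
        w -= mu * err[k] * fx_hist             # FX-LMS gradient step
    return err
```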

  5. Automation of NLO QCD and EW corrections with Sherpa and Recola

    Energy Technology Data Exchange (ETDEWEB)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu [Universitaet Wuerzburg, Institut fuer Theoretische Physik und Astrophysik, Wuerzburg (Germany); Braeuer, Stephan; Schumann, Steffen [Georg-August Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Thompson, Jennifer M. [Universitaet Heidelberg, Institut fuer Theoretische Physik, Heidelberg (Germany)

    2017-07-15

    This publication presents the combination of the one-loop matrix-element generator Recola with the multipurpose Monte Carlo program Sherpa. Since both programs are highly automated, the resulting Sherpa +Recola framework allows for the computation of - in principle - any Standard Model process at both NLO QCD and EW accuracy. To illustrate this, three representative LHC processes have been computed at NLO QCD and EW: vector-boson production in association with jets, off-shell Z-boson pair production, and the production of a top-quark pair in association with a Higgs boson. In addition to fixed-order computations, when considering QCD corrections, all functionalities of Sherpa, i.e. particle decays, QCD parton showers, hadronisation, underlying events, etc. can be used in combination with Recola. This is demonstrated by the merging and matching of one-loop QCD matrix elements for Drell-Yan production in association with jets to the parton shower. The implementation is fully automatised, thus making it a perfect tool for both experimentalists and theorists who want to use state-of-the-art predictions at NLO accuracy. (orig.)

  6. Designing and commissioning of a setup for timing-jitter measurements using electro-optic temporal decoding

    Energy Technology Data Exchange (ETDEWEB)

    Borissenko, Dennis

    2016-12-15

    Precise measurements of the arrival time jitter between the ionization laser, used to create the plasma, and the driver beam in the PWFA setup of the FLASHForward project are of high interest for the operation and optimization of the experiment. In this thesis, an electro-optic temporal decoding (EOTD) setup with near crossed polarizer detection scheme is presented, which can measure the timing-jitter to an accuracy of around 30 fs. This result was obtained during several measurements conducted at the coherent transition radiation beamline CTR141 at FLASH, using a 100 μm thick GaP crystal and coherent diffraction/transition radiation, generated from the FLASH1 electron bunches. Measurements were performed during long and short electron bunch operation at FLASH, showing that best results are obtained with CDR from long electron bunches. Utilizing CTR led to a higher EO signal and "over-compensation" of the SHG background level during the measurement, which resulted in a double-peak structure of the observed THz pulses. To resolve the single-cycle nature of these THz pulses, the SHG background had to be adjusted properly. Furthermore, EOTD measurements during a short bunch operation run at FLASH exhibited strong oscillations in the EO signal, which were suspected to come either from internal lattice resonances of the EO crystal or internal reflections, or excitation of water vapor in the humid air in the laboratory. The oscillations spoiled the observed EOTD trace leading to no sensible measurements of the arrival time jitter during this short bunch operation. To evaluate the capabilities of the setup for monitoring the timing jitter of short PWFA accelerated electron bunches or very short driver bunches at FLASHForward, further investigations on the observed oscillations in the EOTD traces have to be performed during short bunch operation at FLASH with different crystals and under vacuum conditions, to understand the oscillations of the EO

  7. Designing and commissioning of a setup for timing-jitter measurements using electro-optic temporal decoding

    International Nuclear Information System (INIS)

    Borissenko, Dennis

    2016-12-01

    Precise measurements of the arrival time jitter between the ionization laser, used to create the plasma, and the driver beam in the PWFA setup of the FLASHForward project are of high interest for the operation and optimization of the experiment. In this thesis, an electro-optic temporal decoding (EOTD) setup with near crossed polarizer detection scheme is presented, which can measure the timing-jitter to an accuracy of around 30 fs. This result was obtained during several measurements conducted at the coherent transition radiation beamline CTR141 at FLASH, using a 100 μm thick GaP crystal and coherent diffraction/transition radiation, generated from the FLASH1 electron bunches. Measurements were performed during long and short electron bunch operation at FLASH, showing that best results are obtained with CDR from long electron bunches. Utilizing CTR led to a higher EO signal and "over-compensation" of the SHG background level during the measurement, which resulted in a double-peak structure of the observed THz pulses. To resolve the single-cycle nature of these THz pulses, the SHG background had to be adjusted properly. Furthermore, EOTD measurements during a short bunch operation run at FLASH exhibited strong oscillations in the EO signal, which were suspected to come either from internal lattice resonances of the EO crystal or internal reflections, or excitation of water vapor in the humid air in the laboratory. The oscillations spoiled the observed EOTD trace leading to no sensible measurements of the arrival time jitter during this short bunch operation. To evaluate the capabilities of the setup for monitoring the timing jitter of short PWFA accelerated electron bunches or very short driver bunches at FLASHForward, further investigations on the observed oscillations in the EOTD traces have to be performed during short bunch operation at FLASH with different crystals and under vacuum conditions, to understand the oscillations of the EO signal better.

  8. The fast correction coil feedback control system

    International Nuclear Information System (INIS)

    Coffield, F.; Caporaso, G.; Zentler, J.M.

    1989-01-01

    A model-based feedback control system has been developed to correct beam displacement errors in the Advanced Test Accelerator (ATA) electron beam accelerator. The feedback control system drives an X/Y dipole steering system that has a 40-MHz bandwidth and can produce ±300-Gauss-cm dipole fields. A simulator was used to develop the control algorithm and to quantify the expected performance in the presence of beam position measurement noise and accelerator timing jitter. The major problem to date has been protecting the amplifiers from the voltage that is inductively coupled to the steering bars by the beam. 3 refs., 8 figs

  9. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    The paper considers the task of generating the requirements for, and creating, a calibration target for automated microscopy systems (AMS) of biomedical specimens, to make algorithms and software invariant to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particular useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: with "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the now widely used Kodak Q60-E3 target does not
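
    The winning method - conditioned (regularized) least squares over 3rd-order combinations of color coordinates - can be sketched in a few lines of Python. The particular monomial set and the ridge form of the conditioning below are our illustrative assumptions, since the abstract does not list them.

```python
import numpy as np

def poly3_features(rgb):
    """Linear, quadratic and cubic combinations of (r, g, b) -- one
    illustrative choice of 3rd-order term set, not the paper's exact one."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cols = [r, g, b,
            r*r, g*g, b*b, r*g, r*b, g*b,
            r**3, g**3, b**3, r*g*b, r*r*g, g*g*b, b*b*r]
    return np.stack(cols, axis=1)

def fit_color_correction(measured, reference, lam=1e-3):
    """Ridge ('conditioned') least squares for the correction matrix M
    such that reference ~= poly3_features(measured) @ M. lam plays the
    role of the experimentally chosen regularization parameter."""
    X = poly3_features(measured)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ reference)

# measured/reference: (N, 3) arrays from N calibration fields.
# Corrected colors for new pixels: poly3_features(new_rgb) @ M
```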

  10. A low-jitter RF PLL frequency synthesizer with high-speed mixed-signal down-scaling circuits

    International Nuclear Information System (INIS)

    Tang Lu; Wang Zhigong; Xue Hong; He Xiaohu; Xu Yong; Sun Ling

    2010-01-01

    A low-jitter RF phase locked loop (PLL) frequency synthesizer with high-speed mixed-signal down-scaling circuits is proposed. Several techniques are proposed to reduce the design complexity and improve the performance of the mixed-signal down-scaling circuit in the PLL. An improved D-latch is proposed to increase the speed and the driving capability of the DMP in the down-scaling circuit. Through integrating the D-latch with 'OR' logic for dual-modulus operation, the delays associated with both the 'OR' and D-flip-flop (DFF) operations are reduced, and the complexity of the circuit is also decreased. The programmable frequency divider of the down-scaling circuit is realized in a new method based on deep submicron CMOS technology standard cells and a more accurate wire-load model. The charge pump in the PLL is also realized with a novel architecture to improve the current matching characteristic so as to reduce the jitter of the system. The proposed RF PLL frequency synthesizer is realized with a TSMC 0.18-μm CMOS process. The measured phase noise of the PLL frequency synthesizer output at 100 kHz offset from the center frequency is only -101.52 dBc/Hz. The circuit exhibits a low RMS jitter of 3.3 ps. The power consumption of the PLL frequency synthesizer is also as low as 36 mW at a 1.8 V power supply. (semiconductor integrated circuits)

  11. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents, and for new models new rules for off-shell currents emerge, which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained from a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  12. Low-timing-jitter, stretched-pulse passively mode-locked fiber laser with tunable repetition rate and high operation stability

    International Nuclear Information System (INIS)

    Liu, Yuanshan; Zhang, Jian-Guo; Chen, Guofu; Zhao, Wei; Bai, Jing

    2010-01-01

    We design a low-timing-jitter, repetition-rate-tunable, stretched-pulse passively mode-locked fiber laser by using a nonlinear amplifying loop mirror (NALM), a semiconductor saturable absorber mirror (SESAM), and a tunable optical delay line in the laser configuration. Low-timing-jitter optical pulses are stably produced when a SESAM and a 0.16 m dispersion compensation fiber are employed in the laser cavity. By inserting a tunable optical delay line between NALM and SESAM, the variable repetition-rate operation of a self-starting, passively mode-locked fiber laser is successfully demonstrated over a range from 49.65 to 50.47 MHz. The experimental results show that the newly designed fiber laser can maintain the mode locking at the pumping power of 160 mW to stably generate periodic optical pulses with width less than 170 fs and timing jitter lower than 75 fs in the 1.55 µm wavelength region, when the fundamental repetition rate of the laser is continuously tuned between 49.65 and 50.47 MHz. Moreover, this fiber laser has a feature of turn-key operation with high repeatability of its fundamental repetition rate in practice

  13. Optical beam transport to a remote location for low jitter pump-probe experiments with a free electron laser

    Directory of Open Access Journals (Sweden)

    P. Cinquegrana

    2014-04-01

    In this paper we propose a scheme that allows a strong reduction of the timing jitter between the pulses of a free electron laser (FEL) and external laser pulses delivered simultaneously at the FEL experimental stations for pump-probe-type experiments. The technique, applicable to all seeding-based FEL schemes, relies on the free-space optical transport of a portion of the seed laser pulse from its optical table to the experimental stations. The results presented here demonstrate that a carefully designed laser beam transport, incorporating also a transverse beam position stabilization, allows one to keep the timing fluctuations, added by as much as 150 m of free space propagation and a number of beam folding mirrors, to less than 4 femtoseconds rms. By its nature our scheme removes the major common timing jitter sources, so the overall jitter in pump-probe measurements done in this way will be below 10 fs (with a margin to be lowered to below 5 fs), much better than the best results reported previously in the literature, amounting to 33 fs rms.

  14. Thermal and Quantum Mechanical Noise of a Superfluid Gyroscope

    Science.gov (United States)

    Chui, Talso; Penanen, Konstantin

    2004-01-01

    A potential application of a superfluid gyroscope is for real-time measurements of the small variations in the rotational speed of the Earth, the Moon, and Mars. Such rotational jitter, if not measured and corrected for, will be a limiting factor on the resolution potential of a GPS system. This limitation will prevent many automation concepts in navigation, construction, and biomedical examination from being realized. We present the calculation of thermal and quantum-mechanical phase noise across the Josephson junction of a superfluid gyroscope. This allows us to derive the fundamental limits on the performance of a superfluid gyroscope. We show that the fundamental limit on real-time GPS due to rotational jitter can be reduced to well below 1 millimeter/day. Other limitations and their potential mitigation will also be discussed.

  15. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    Directory of Open Access Journals (Sweden)

    Carlos Ferrer

    2009-01-01

    Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble-averaging technique is a time-domain method that has been gradually refined in terms of its sensitivity to jitter and waveform variability and the required number of pulses. In this paper, shimmer is introduced into the model of the ensemble average, and a formula is derived that allows the reduction of shimmer effects in the HNR calculation. The validity of the technique is evaluated using synthetically shimmered signals, and the prerequisites (glottal pulse positions and amplitudes) are obtained by means of fully automated methods. The results demonstrate the feasibility and usefulness of the correction.
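
    A hedged sketch of the idea in Python: normalizing each extracted glottal cycle by its measured amplitude before ensemble averaging removes the shimmer contribution from the noise estimate. The published correction formula differs in detail; this is only the simplest amplitude-normalization variant.

```python
import numpy as np

def hnr_ensemble(pulses, amplitudes):
    """HNR (dB) from ensemble averaging with a simple shimmer correction.

    pulses     : (n_pulses, pulse_len) array of aligned glottal cycles
    amplitudes : per-pulse amplitude estimates (from an automated method)
    """
    norm = pulses / np.asarray(amplitudes)[:, None]   # remove shimmer
    mean_pulse = norm.mean(axis=0)                    # periodic component
    noise = norm - mean_pulse                         # additive residual
    signal_power = np.mean(mean_pulse**2)
    n = len(pulses)
    noise_power = np.mean(noise**2) * n / (n - 1)     # small-sample correction
    return 10.0 * np.log10(signal_power / noise_power)
```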

  16. Efficient Photometry In-Frame Calibration (EPIC) Gaussian Corrections for Automated Background Normalization of Rate-Tracked Satellite Imagery

    Science.gov (United States)

    Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.

    Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames that are obtained to correct for charge coupled device (CCD) dark (thermal) noise and CCD quantum efficiency/optical path vignetting effects respectively. It is necessary to account/calibrate for these effects so that the brightness of objects of interest (e.g. stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, annuluses are measured around each detected object whose contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may be simply summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson distributed data closely resembles Gaussian data for parameterized means greater than 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images using the following technique. A deck of approximately 50-100 images is combined by performing an independent median
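
    The bias-subtraction/standard-deviation-division step described above reduces to a few array operations; the sketch below is our reading of the technique, not AFRL's EPIC code, and the deck size and threshold are illustrative.

```python
import numpy as np

def normalize_deck(deck):
    """Background-normalize a deck of rate-tracked frames. Stars streak
    between frames, so a per-pixel median over the deck estimates the
    combined dark + vignetting + sky background, and the per-pixel std
    estimates its Gaussian spread (valid for Poisson means above ~10).

    deck : (n_frames, ny, nx) stack of co-registered images
    """
    med = np.median(deck, axis=0)       # per-pixel background level
    std = deck.std(axis=0, ddof=1)      # per-pixel noise scale
    std[std == 0] = 1.0                 # guard against perfectly flat pixels
    return (deck - med) / std           # bias-subtract, std-divide

# A constant threshold (e.g. 5 sigma) can now be applied across the whole
# field of view for CFAR detection: detections = normalize_deck(deck) > 5
```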

  17. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology.

    Science.gov (United States)

    Pak, Richard; McLaughlin, Anne Collins; Leidheiser, William; Rovira, Ericka

    2017-04-01

    A leading hypothesis to explain older adults' overdependence on automation is age-related decline in working memory; however, this had not been empirically examined. The purpose of the current experiment was to examine how working memory affected performance with different degrees of automation in older adults. In contrast to the well-supported idea that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with automation correctness or failure. When automation was correct, older adults with higher working memory ability performed better than those with lower ability. But when automation was incorrect, all older adults, regardless of working memory ability, performed poorly. Practitioner Summary: The design of automation intended for older adults should focus on ways of making the correctness of the automation apparent to the older user and suggest ways of helping them recover when it is malfunctioning.

  18. Femtosecond timing-jitter between photo-cathode laser and ultra-short electron bunches by means of hybrid compression

    CERN Document Server

    Pompili, Riccardo; Bellaveglia, M; Biagioni, A; Castorina, G; Chiadroni, E; Cianchi, A; Croia, M; Di Giovenale, D; Ferrario, M; Filippi, F; Gallo, A; Gatti, G; Giorgianni, F; Giribono, A; Li, W; Lupi, S; Mostacci, A; Petrarca, M; Piersanti, L; Di Pirro, G; Romeo, S; Scifo, J; Shpakov, V; Vaccarezza, C; Villa, F

    2017-01-01

    The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC_LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations.

  19. Femtosecond timing-jitter between photo-cathode laser and ultra-short electron bunches by means of hybrid compression

    International Nuclear Information System (INIS)

    Pompili, R; Anania, M P; Bellaveglia, M; Biagioni, A; Castorina, G; Chiadroni, E; Croia, M; Giovenale, D Di; Ferrario, M; Gallo, A; Gatti, G; Cianchi, A; Filippi, F; Giorgianni, F; Giribono, A; Lupi, S; Mostacci, A; Petrarca, M; Piersanti, L; Li, W

    2016-01-01

    The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC-LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations. (paper)

  20. REGULAR PATTERN MINING (WITH JITTER) ON WEIGHTED-DIRECTED DYNAMIC GRAPHS

    Directory of Open Access Journals (Sweden)

    A. GUPTA

    2017-02-01

    Real-world graphs are mostly dynamic in nature, exhibiting time-varying behaviour in the structure of the graph, the weights on the edges, and the directions of the edges. Mining regular patterns in the occurrence of edge parameters gives an insight into consumer trends over time in e-commerce co-purchasing networks. But such patterns need not necessarily be precise, as in the case when some product goes out of stock or a group of customers becomes unavailable for a short period of time. Ignoring them may lead to loss of useful information, and thus taking jitter into account becomes vital. To the best of our knowledge, no work has yet been reported to extract regular patterns considering a jitter of length greater than unity. In this article, we propose a novel method to find quasi-regular patterns on the weight and direction sequences of such graphs. The method involves analysing the dynamic network considering the inconsistencies in the occurrence of edges. It utilizes the relation between the occurrence sequence and the corresponding weight and direction sequences to speed up this process. Further, these patterns are used to determine the most central nodes (such as the most profit-yielding products). To accomplish this we introduce the concepts of dynamic closeness centrality and dynamic betweenness centrality. Experiments on the Enron e-mail dataset and a synthetic dynamic network show that the presented approach is efficient, so it can be used to find patterns in large-scale networks consisting of many timestamps.

  1. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  2. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will, however, only be sufficiently and correctly turned to profit if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  3. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  4. Low jitter spark gap switch for repetitively pulsed parallel capacitor banks

    International Nuclear Information System (INIS)

    Rohwein, G.J.

    1980-01-01

    A two-section air-insulated spark gap has been developed for switching multi-kilojoule, plus-minus charged parallel capacitor banks which operate continuously at pulse rates up to 20 pps. The switch operates with less than 2 ns jitter, recovers its dielectric strength within 2 to 5 ms and has not shown degraded performance in sequential test runs totaling over a million shots. Its estimated life with copper electrodes is >10^7 shots. All preliminary tests indicate that the switch is suitable for continuously running multi-kilojoule systems operating to at least 20 pps

  5. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    Science.gov (United States)

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high accuracy of registration between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P < 0.05) in the derived parameters before and after MC. Automated movement correction of dynamic brain FDDNP and FDG PET/CT scans could thus improve the qualitative and quantitative aspects of the images of both tracers.

  6. Parallel combinations of pre-ionized low jitter spark gaps

    International Nuclear Information System (INIS)

    Fitzsimmons, W.A.; Rosocha, L.A.

    1979-01-01

    The properties of 10 to 30 kV four-electrode, field-emission pre-ionized triggered spark gaps have been studied. A mid-plane off-axis trigger electrode is biased at +V0/2, and a field-emission point is located adjacent to the cathode and biased at the grounded cathode potential. Simultaneous application of a rapid -V0 trigger pulse to both electrodes results in the rapid sequential closing of the anode-trigger and trigger-cathode gaps. The observed jitter is about 1.5 ns. Parallel operation of these gaps (up to 10 so far) connected to a common capacitive load has been studied. A simple theory that predicts the number of gaps that may be expected to operate in parallel is discussed.

  7. Advanced health monitor for automated driving functions

    NARCIS (Netherlands)

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  8. A review on high-resolution CMOS delay lines: towards sub-picosecond jitter performance.

    Science.gov (United States)

    Abdulrazzaq, Bilal I; Abdul Halin, Izhal; Kawahito, Shoji; Sidek, Roslina M; Shafie, Suhaidi; Yunus, Nurul Amziah Md

    2016-01-01

    A review of CMOS delay lines is presented, with a focus on the most frequently used techniques for achieving a high-resolution delay step. The primary types, specifications, delay circuits, and operating principles are presented. The delay circuits reported in this paper are used for delaying digital inputs and clock signals. The most common analog and digitally controlled delay-element topologies are presented, focusing on the main delay-tuning strategies. IC variables, namely process, supply voltage, temperature, and noise sources, that affect delay resolution through timing jitter are discussed. The design specifications of these delay elements are also discussed and compared for the common delay line circuits. The main findings of this paper highlight and discuss the following: the most efficient high-resolution delay-line techniques; the trade-off between CMOS delay lines designed with analog versus digitally controlled delay elements; the trade-off between delay resolution and delay range, together with proposed solutions for it; and how CMOS technology scaling can affect the performance of CMOS delay lines. Moreover, the current trends and efforts toward generating output delayed signals with low jitter in the sub-picosecond range are presented.

  9. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  10. A low spur, low jitter 10-GHz phase-locked loop in 0.13-μm CMOS technology

    International Nuclear Information System (INIS)

    Mei Niansong; Sun Yu; Lu Bo; Pan Yaohua; Huang Yumei; Hong Zhiliang

    2011-01-01

    This paper presents a 10-GHz low spur and low jitter phase-locked loop (PLL). An improved low phase noise VCO and a dynamic phase frequency detector with a short delay reset time are employed to reduce the noise of the PLL. We also discuss the methodology to optimize the high frequency prescaler's noise and the charge pump's current mismatch. The chip was fabricated in a SMIC 0.13-μm RF CMOS process with a 1.2-V power supply. The measured integrated RMS jitter is 757 fs (1 kHz to 10 MHz); the phase noise is -89 and -118.1 dBc/Hz at 10 kHz and 1 MHz frequency offset, respectively; and the reference frequency spur is below -77 dBc. The chip size is 0.32 mm^2 and the power consumption is 30.6 mW. (semiconductor integrated circuits)

  11. Sub-fs electron bunch generation with sub-10-fs bunch arrival-time jitter via bunch slicing in a magnetic chicane

    Directory of Open Access Journals (Sweden)

    J. Zhu

    2016-05-01

    The generation of ultrashort electron bunches with ultrasmall bunch arrival-time jitter is of vital importance for laser-plasma wakefield acceleration with external injection. We study the production of 100-MeV electron bunches with sub-femtosecond (fs) bunch durations and bunch arrival-time jitters of less than 10 fs in an S-band photoinjector, by using a weak magnetic chicane with a slit collimator. The beam dynamics inside the chicane is simulated by using two codes with different self-force models. The first code separates the self-force into a three-dimensional (3D) quasistatic space-charge model and a one-dimensional (1D) coherent synchrotron radiation (CSR) model, while the other one starts from first principles with a so-called 3D sub-bunch method. The simulations indicate that the CSR effect dominates the horizontal emittance growth and that the 1D CSR model underestimates the final bunch duration and emittance because of the very large transverse-to-longitudinal aspect ratio of the sub-fs bunch. Notably, the CSR effect is also strongly affected by the vertical bunch size. Due to the coupling between the horizontal and longitudinal phase spaces, the bunch duration at the entrance of the last dipole magnet of the chicane is still significantly longer than that at the exit of the chicane, which considerably mitigates the impact of space charge and CSR effects on the beam quality. Exploiting this effect, a bunch charge of up to 4.8 pC in a sub-fs bunch could be simulated. In addition, we analytically and numerically investigate the impact of different jitter sources on the bunch arrival-time jitter downstream of the chicane, and define the tolerance budgets assuming realistic values of the stability of the linac for different bunch charges and compression schemes.

  12. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  13. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed focusing on their general structure and requirement specific to the calculation of radiative corrections. Detailed description of the system and its performance is presented taking GRACE as a concrete example. (author)

  14. Advanced health monitor for automated driving functions

    OpenAIRE

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  15. Building Automation Systems.

    Science.gov (United States)

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  16. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    Retrieving the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters using a complex logarithmic function is not trivial. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the concerned EMMs; the continuity of the effective optical properties of lossy metamaterials is thereby ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split-ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
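
    The branch-tracking problem can be illustrated with NumPy's phase unwrapping, a generic stand-in for the published APC algorithm (which handles the abrupt resonance-region jumps more carefully); the retrieval step below is schematic.

```python
import numpy as np

def continuous_log(z):
    """Branch-corrected complex logarithm along a frequency sweep:
    log|z| + 1j * unwrapped phase. np.unwrap removes the 2*pi jumps that
    the principal-value logarithm introduces near resonances."""
    return np.log(np.abs(z)) + 1j * np.unwrap(np.angle(z))

def refractive_index(P, k0, d):
    """Schematic NRW-style step: with the propagation term
    P = exp(1j * n * k0 * d), where d is the slab thickness and k0 the
    free-space wavenumber, the effective index follows from the
    continuous logarithm."""
    return continuous_log(P) / (1j * k0 * d)
```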

  17. Automation bias: empirical results assessing influencing factors.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers; advice was incorrect in 6 scenarios. They were asked to prescribe for each case and were then shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations: correct-to-incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods to in vivo studies is possible; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied to the analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator-dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors in the correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
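
    A minimal sketch of water reference deconvolution in Python, assuming time-domain FIDs are available per voxel; the ideal-reference linewidth and the regularizing `eps` are our own illustrative choices, not values from the paper.

```python
import numpy as np

def water_reference_deconvolution(metab_fid, water_fid, t, lw_hz=5.0):
    """Water-reference deconvolution sketch (after the Morris approach
    cited above): the measured water FID is replaced by an ideal
    exponentially damped reference, and the same correction factor is
    applied to the metabolite FID, removing lineshape and frequency
    distortions shared by both signals.

    metab_fid, water_fid : complex time-domain signals from one voxel
    t                    : sample times in seconds
    lw_hz                : target (ideal) Lorentzian linewidth
    """
    ideal = np.abs(water_fid[0]) * np.exp(-np.pi * lw_hz * t)  # ideal ref
    eps = 1e-12 * np.max(np.abs(water_fid))   # guard against division by ~0
    correction = ideal / (water_fid + eps)    # shared-distortion inverse
    return metab_fid * correction
```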

  19. Effect of 3 Key Factors on Average End to End Delay and Jitter in MANET

    Directory of Open Access Journals (Sweden)

    Saqib Hakak

    2015-01-01

    A mobile ad-hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected by wireless links, where each node or mobile device is free to move in any desired direction, and thus the links keep moving from one node to another. In such a network, the mobile nodes are equipped with CSMA/CA (carrier sense multiple access with collision avoidance) transceivers and communicate with each other via radio. In MANETs, routing is considered one of the most difficult and challenging tasks. Because of this, most studies on MANETs have focused on comparing protocols under varying network conditions. But to the best of our knowledge, no one has studied the effect of other factors on network performance indicators like throughput, jitter and so on, revealing how much influence a particular factor or group of factors has on each indicator. Thus, in this study the effects of three key factors, i.e. routing protocol, packet size and DSSS rate, were evaluated on key network performance metrics, i.e. average delay and average jitter, as these parameters are crucial for network performance and directly affect the buffering requirements for all video devices and downstream networks.
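
    For concreteness, the two metrics studied here can be computed from per-packet timestamps as follows (one common definition of average jitter; simulators differ in detail):

```python
import numpy as np

def delay_and_jitter(send_times, recv_times):
    """Average end-to-end delay and average jitter for received packets.
    Jitter here is the mean absolute difference between consecutive
    packet delays."""
    delays = np.asarray(recv_times) - np.asarray(send_times)
    avg_delay = delays.mean()
    avg_jitter = np.abs(np.diff(delays)).mean() if len(delays) > 1 else 0.0
    return avg_delay, avg_jitter
```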

  20. ASTROMETRIC JITTER OF THE SUN AS A STAR

    International Nuclear Information System (INIS)

    Makarov, V. V.; Parker, D.; Ulrich, R. K.

    2010-01-01

    The daily variation of the solar photocenter over some 11 yr is derived from the Mount Wilson data reprocessed by Ulrich et al. to closely match the surface distribution of solar irradiance. The standard deviations of astrometric jitter are 0.52 μAU and 0.39 μAU in the equatorial and the axial dimensions, respectively. The overall dispersion is strongly correlated with solar cycle, reaching 0.91 μAU at maximum activity in 2000. The largest short-term deviations from the running average (up to 2.6 μAU) occur when a group of large spots happen to lie on one side with respect to the center of the disk. The amplitude spectrum of the photocenter variations never exceeds 0.033 μAU for the range of periods 0.6-1.4 yr, corresponding to the orbital periods of planets in the habitable zone. Astrometric detection of Earth-like planets around stars as quiet as the Sun is not affected by star spot noise, but the prospects for more active stars may be limited to giant planets.

  1. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children' s Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. The purpose of this study was to evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA and LPA, a comparison was made between standard non-background-corrected and background-phase-corrected flow values.
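
    A common form of such background phase correction fits a low-order surface to the phase of static tissue and subtracts it everywhere; the sketch below fits a first-order plane by least squares. This is a standard approach and only an assumption about how the evaluated vendor algorithm works; the spatially dependent reconstruction in the study may differ.

        import numpy as np

        def background_phase_plane(phase, static_mask, x, y):
            # Fit a + b*x + c*y to the phase of static tissue only.
            A = np.column_stack([np.ones(static_mask.sum()),
                                 x[static_mask], y[static_mask]])
            coef, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
            # Evaluate the fitted background everywhere for subtraction.
            return coef[0] + coef[1] * x + coef[2] * y

        # corrected_phase = phase - background_phase_plane(phase, mask, X, Y)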

  2. Evolution of a Benthic Imaging System From a Towed Camera to an Automated Habitat Characterization System

    Science.gov (United States)

    2008-09-01

    A key element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category.

  3. Illumination correction in psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    An approach to automatically correct illumination problems in dermatological images is presented. The illumination function is estimated after combining the thematic map indicating skin (produced by an automated classification scheme) with the dermatological image data. The user is only required to…

  4. Explaining the morphology of supernova remnant (SNR) 1987A with the jittering jets explosion mechanism

    Science.gov (United States)

    Bear, Ealeal; Soker, Noam

    2018-04-01

    We find that the remnant of supernova (SN) 1987A shares some morphological features with four supernova remnants (SNRs) that have signatures of shaping by jets, and from that we strengthen the claim that jets played a crucial role in the explosion of SN 1987A. Some of the morphological features appear also in planetary nebulae (PNe) where jets are observed. The clumpy ejecta lead us to support the claim that the jittering jets explosion mechanism can account for the structure of the remnant of SN 1987A, i.e., SNR 1987A. We conduct a preliminary attempt to quantify the fluctuations in the angular momentum of the mass that is accreted onto the newly born neutron star via an accretion disk or belt. The accretion disk/belt launches the jets that explode core collapse supernovae (CCSNe). The relaxation time of the accretion disk/belt is comparable to the duration of a typical jet-launching episode in the jittering jets explosion mechanism, and hence the disk/belt has no time to relax. We suggest that this might explain two unequal opposite jets that later lead to the unequal sides of the elongated structures in some SNRs of CCSNe. We reiterate our earlier call for a paradigm shift from a neutrino-driven to a jet-driven explosion mechanism of CCSNe.

  5. Optimum FIR filter for sampled signals in presence of jitter

    Science.gov (United States)

    Cattaneo, Paolo Walter

    1996-02-01

    The requirements of the integrated readout electronics for calorimetry at high-luminosity hadron colliders pose new challenges both to hardware design and to the performance of signal processing algorithms. Both aspects have been treated in detail by the FERMI (RD16) collaboration [C. Alippi et al., Nucl. Instr. and Meth. A 344 (1994) 180], from which this work has been motivated. The estimation of the amplitude of sampled signals is usually performed with a digital FIR filter, or with a more sophisticated nonlinear digital filter using FIR filters as building blocks [S.J. Inkinen and J. Niittylahti, Trainable FIR-order statistic hybrid filters, to be published in IEEE Trans. Circuits and Systems; H. Alexanian et al., FERMI Collaboration, Optimized digital feature extraction in the FERMI microsystem, Nucl. Instr. and Meth. A 357 (1995)]. In the presence of significant signal phase jitter with respect to the clock, the phase dependence of the filter output can be a major source of error. This is especially true for measurements of large amplitudes, for which the effect of electronic noise becomes negligible. This paper reports on the determination of digital FIR filters that optimize the signal-to-noise ratio for known jitter distributions and different filter lengths. As the presence of electronic noise is neglected, the results are mainly relevant for measurements of large signals. FERMI is a collaboration with the aim of designing integrated electronics for the readout of calorimeter detectors in particle physics experiments at hadron colliders. It includes: CERN, Geneva, Switzerland; Department of Physics and Measurement Technology, University of Linköping, Sweden; Center for Industrial Microelectronics and Materials Technology, University of Linköping, Sweden; LPNHE Universities Paris VI-VII, Paris, France; Dipartimento di Elettronica, Politecnico di Milano, Italy; Sezione INFN, Pavia, Italy; Dipartimento di Fisica Nucleare e Teorica dell'Università e Sezione…
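
    The optimization described can be cast as a small least-squares problem: choose the FIR weights minimizing the expected squared amplitude error over the known jitter distribution. A compact sketch under that formulation (the pulse shape, jitter grid and tap count are assumptions; electronic noise is neglected, as in the paper):

        import numpy as np

        def optimal_fir_weights(pulse, taus, probs, n_taps, dt):
            # Rows: the unit-amplitude pulse sampled at the taps for each
            # possible phase jitter value tau.
            S = np.array([[pulse(k * dt - tau) for k in range(n_taps)]
                          for tau in taus])
            R = (S.T * probs) @ S            # E[s s^T] over the jitter pdf
            r = (S.T * probs).sum(axis=1)    # E[s] times the target amplitude (=1)
            # Weights minimizing E[(w^T s - 1)^2].
            return np.linalg.solve(R, r)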

  6. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  7. RCrane: semi-automated RNA model building.

    Science.gov (United States)

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  8. Secondary wavelength stabilization of unbalanced Michelson interferometers for the generation of low-jitter pulse trains.

    Science.gov (United States)

    Shalloo, R J; Corner, L

    2016-09-01

    We present a double unbalanced Michelson interferometer producing up to four output pulses from a single input pulse. The interferometer is stabilized with the Hänsch-Couillaud method using an auxiliary low power continuous wave laser injected into the interferometer, allowing the stabilization of the temporal jitter of the output pulses to 0.02 fs. Such stabilized pulse trains would be suitable for driving multi-pulse laser wakefield accelerators, and the technique could be extended to include amplification in the arms of the interferometer.

  9. Judson_Mansouri_Automated_Chemical_Curation_QSAREnvRes_Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly...

  10. RCrane: semi-automated RNA model building

    International Nuclear Information System (INIS)

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RCrane is a new tool for the partially automated building of RNA crystallographic models into electron-density maps of low or intermediate resolution. This tool helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems

  11. Automated planning of breast radiotherapy using cone beam CT imaging

    International Nuclear Information System (INIS)

    Amit, Guy; Purdie, Thomas G.

    2015-01-01

    Purpose: Develop and clinically validate a methodology for using cone beam computed tomography (CBCT) imaging in an automated treatment planning framework for breast IMRT. Methods: A technique for intensity correction of CBCT images was developed and evaluated. The technique is based on histogram matching of CBCT image sets, using information from “similar” planning CT image sets from a database of paired CBCT and CT image sets (n = 38). Automated treatment plans were generated for a testing subset (n = 15) on the planning CT and the corrected CBCT. The plans generated on the corrected CBCT were compared to the CT-based plans in terms of beam parameters, dosimetric indices, and dose distributions. Results: The corrected CBCT images showed considerable similarity to their corresponding planning CTs (average mutual information 1.0±0.1, average sum of absolute differences 185 ± 38). The automated CBCT-based plans were clinically acceptable, as well as equivalent to the CT-based plans with average gantry angle difference of 0.99°±1.1°, target volume overlap index (Dice) of 0.89±0.04 although with slightly higher maximum target doses (4482±90 vs 4560±84, P < 0.05). Gamma index analysis (3%, 3 mm) showed that the CBCT-based plans had the same dose distribution as plans calculated with the same beams on the registered planning CTs (average gamma index 0.12±0.04, gamma <1 in 99.4%±0.3%). Conclusions: The proposed method demonstrates the potential for a clinically feasible and efficient online adaptive breast IMRT planning method based on CBCT imaging, integrating automation
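
    The intensity-correction technique named here, histogram matching, maps each CBCT intensity to the planning-CT intensity with the same cumulative probability; a sketch of that standard operation (not the authors' exact implementation):

        import numpy as np

        def histogram_match(cbct, planning_ct):
            # Unique CBCT intensities, their positions and their frequencies.
            _, inv, counts = np.unique(cbct.ravel(), return_inverse=True,
                                       return_counts=True)
            src_cdf = np.cumsum(counts) / cbct.size
            # Cumulative distribution of the reference planning CT.
            ref_sorted = np.sort(planning_ct.ravel())
            ref_cdf = np.arange(1, ref_sorted.size + 1) / ref_sorted.size
            # Map matching quantiles of the two distributions.
            mapped = np.interp(src_cdf, ref_cdf, ref_sorted)
            return mapped[inv].reshape(cbct.shape)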

  12. AUTOMATION OF CONVEYOR BELT TRANSPORT

    Directory of Open Access Journals (Sweden)

    Nenad Marinović

    1990-12-01

    Full Text Available Belt conveyor transport, although one of the most economical mining transport systems, poses many problems in maintaining continuity of operation. Every stoppage causes economic losses. Optimal operation requires correct belt tension, correct belt position and velocity, and faultless idler rolls; together these are the input conditions for automation. Detection and location of faults are essential for safety, to eliminate fire hazards, and for efficient maintenance. Detection and location of idler roll faults in particular remain an open problem that has not yet been solved successfully (the paper is published in Croatian).

  13. Automated Testing of Event-Driven Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning

    Event-driven applications may be tested by selecting an interesting input (i.e. a sequence of events) and deciding if a failure occurs when the selected input is applied to the application under test. Automated testing promises to reduce the workload for developers by automatically selecting interesting inputs and detecting failures. However, it is non-trivial to conduct automated testing of event-driven applications because of, for example, infinite input spaces and the absence of specifications of correct application behavior. In this PhD dissertation, we identify a number of specific challenges when conducting automated testing of event-driven applications, and we present novel techniques for solving these challenges. First, we present an algorithm for stateless model-checking of event-driven applications with partial-order reduction, and we show how this algorithm may be used to systematically test web applications.

  14. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  15. Automated drawing generation system

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Kawahata, Junichi; Yoshida, Naoto; Ono, Satoru

    1991-01-01

    Since automated CAD drawing generation systems still require human intervention, improvements were focused on the interactive processing section (data input and correcting operations), which necessitates a vast amount of work. As a result, human intervention was eliminated, which was the original objective of a computerized system. This is the first step taken towards complete automation. The effects of development and commercialization of the system are as described below. (1) The interactive processing time required for generating drawings was improved. It was determined that introduction of the CAD system has reduced the time required for generating drawings. (2) The difference in skills between workers preparing drawings has been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  16. Automated MAD and MIR structure solution

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    A fully automated procedure for solving MIR and MAD structures has been developed using a scoring scheme to convert the structure-solution process into an optimization problem. Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations

  17. Method and system for correcting an aberration of a beam of charged particles

    Energy Technology Data Exchange (ETDEWEB)

    1975-06-20

    A beam of charged particles is deflected at constant velocity in a closed path, such as a square, over a crossed-wire grid by an X-Y deflection system. A small high-frequency jitter is added on both axes of deflection to cause oscillation of the beam at 45° to the X and Y axes. From the time that the leading edge of the oscillating beam passes over the wire until the trailing edge of the beam passes over the wire, an envelope of the oscillations produced by the jitter is obtained. A second envelope is obtained from when the leading edge of the beam exits from being over the wire until the trailing edge of the beam ceases to be over the wire. Thus, a pair of envelopes is produced as the beam passes over each wire of the grid. The number of pulses exceeding ten percent of the peak voltage in the eight envelopes produced by the beam completing a cycle in its closed path around the grid is counted and compared with the count from the previous cycle. As the number of pulses decreases, the quality of the focus of the beam increases, so correction signals are applied to the focus coil according to whether the number of pulses is increasing or decreasing.

  18. Method and system for correcting an aberration of a beam of charged particles

    International Nuclear Information System (INIS)

    1975-01-01

    A beam of charged particles is deflected at constant velocity in a closed path, such as a square, over a crossed-wire grid by an X-Y deflection system. A small high-frequency jitter is added on both axes of deflection to cause oscillation of the beam at 45° to the X and Y axes. From the time that the leading edge of the oscillating beam passes over the wire until the trailing edge of the beam passes over the wire, an envelope of the oscillations produced by the jitter is obtained. A second envelope is obtained from when the leading edge of the beam exits from being over the wire until the trailing edge of the beam ceases to be over the wire. Thus, a pair of envelopes is produced as the beam passes over each wire of the grid. The number of pulses exceeding ten percent of the peak voltage in the eight envelopes produced by the beam completing a cycle in its closed path around the grid is counted and compared with the count from the previous cycle. As the number of pulses decreases, the quality of the focus of the beam increases, so correction signals are applied to the focus coil according to whether the number of pulses is increasing or decreasing.

  19. Automatic Contextual Text Correction Using The Linguistic Habits Graph Lhg

    Directory of Open Access Journals (Sweden)

    Marcin Gadamer

    2009-01-01

    Full Text Available Automatic text correction is an essential problem of today's text processors and editors. This paper introduces a novel algorithm for automation of contextual text correction using a Linguistic Habit Graph (LHG), also introduced in this paper. A specialist internet crawler has been constructed for searching through web sites in order to build a Linguistic Habit Graph from text corpuses gathered on Polish web sites. The correction results achieved by this algorithm using the LHG were compared with commercial programs which also enable text correction: Microsoft Word 2007, Open Office Writer 3.0 and the search engine Google. The achieved results of text correction were much better than the corrections made by these commercial tools.
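
    A toy version of the LHG idea: nodes are words, and weighted edges count how often one word follows another in the training corpus, so a candidate correction can be ranked by how well it fits its neighbours. The published algorithm is considerably richer; this only illustrates the principle.

        from collections import defaultdict

        class LinguisticHabitGraph:
            def __init__(self):
                # Edge weight = number of times word a was followed by word b.
                self.edges = defaultdict(int)

            def train(self, sentences):
                for words in sentences:
                    for a, b in zip(words, words[1:]):
                        self.edges[(a, b)] += 1

            def context_score(self, prev_word, candidate, next_word):
                # Higher score = the candidate fits its context better.
                return (self.edges[(prev_word, candidate)]
                        + self.edges[(candidate, next_word)])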

  20. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less-than-optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a…

  1. Detecting vocal fatigue in student singers using acoustic measures of mean fundamental frequency, jitter, shimmer, and harmonics-to-noise ratio

    Science.gov (United States)

    Sisakun, Siphan

    2000-12-01

    The purpose of this study is to explore the ability of four acoustic parameters, mean fundamental frequency, jitter, shimmer, and harmonics-to-noise ratio, to detect vocal fatigue in student singers. The participants are 15 voice students who perform two distinct tasks: a data collection task and a vocal fatiguing task. The data collection task includes sustaining the vowel /a/, reading a standard passage, and self-rating on a vocal fatigue form. The vocal fatiguing task is the vocal practice of musical scores for a total of 45 minutes. The four acoustic parameters are extracted using the software EZVoicePlus. The data analyses are performed to answer eight research questions. The first four questions concern correlations between the self-rating scale and each of the four parameters. The next four research questions concern differences in the parameters over time, assessed using one-factor repeated-measures analysis of variance (ANOVA). The result is a proposed acoustic profile of vocal fatigue in student singers, characterized by increased fundamental frequency, slightly decreased jitter, slightly decreased shimmer, and slightly increased harmonics-to-noise ratio. The proposed profile requires further investigation.
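
    Of the four parameters, jitter (and analogously shimmer, computed on peak amplitudes) has a simple standard definition: the mean absolute difference between consecutive pitch periods relative to the mean period. A sketch of that textbook formula (EZVoicePlus may implement a variant):

        def local_jitter_percent(periods):
            # periods: consecutive glottal cycle lengths in seconds.
            diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
            mean_abs_diff = sum(diffs) / len(diffs)
            mean_period = sum(periods) / len(periods)
            return 100.0 * mean_abs_diff / mean_period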

  2. Automated Orthorectification of VHR Satellite Images by SIFT-Based RPC Refinement

    Directory of Open Access Journals (Sweden)

    Hakan Kartal

    2018-06-01

    Full Text Available Raw remotely sensed images contain geometric distortions and cannot be used directly for map-based applications, accurate locational information extraction or geospatial data integration. A geometric correction process must be conducted to minimize the errors related to distortions and achieve the desired location accuracy before further analysis. A considerable number of images might be needed when working over large areas or in temporal domains, in which case manual geometric correction requires more labor and time. To overcome these problems, new algorithms have been developed to make the geometric correction process autonomous. The Scale Invariant Feature Transform (SIFT) algorithm is an image matching algorithm used in remote sensing applications that has received attention in recent years. In this study, the effects of the incidence angle, surface topography and land cover (LC) characteristics on SIFT-based automated orthorectification were investigated at three different study sites with different topographic conditions and LC characteristics using Pleiades very high resolution (VHR) images acquired at different incidence angles. The results showed that the location accuracy of the orthorectified images increased with lower incidence angle images. More importantly, the topographic characteristics had no observable impact on the location accuracy of SIFT-based automated orthorectification, and the results showed that Ground Control Points (GCPs) are mainly concentrated in the "Forest" and "Semi Natural Area" LC classes. A multi-thread code was designed to reduce the automated processing time, and the results showed that the process performed 7 to 16 times faster using the automated approach. Analyses performed on various spectral modes of multispectral data showed that the arithmetic data derived from pan-sharpened multispectral images can be used in automated SIFT-based RPC orthorectification.
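
    The matching stage of such a pipeline can be sketched with OpenCV's SIFT implementation and Lowe's ratio test; the resulting tie points would then feed the RPC refinement. A generic sketch, not the authors' code:

        import cv2

        def sift_tie_points(image, reference, ratio=0.75):
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(image, None)
            kp2, des2 = sift.detectAndCompute(reference, None)
            # Two nearest neighbours per descriptor for Lowe's ratio test.
            matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
            good = [m for m, n in matches if m.distance < ratio * n.distance]
            # Pixel coordinates of matched points in both images.
            return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]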

  3. Space Weather Magnetometer Set with Automated AC Spacecraft Field Correction for GEO-KOMPSAT-2A

    Science.gov (United States)

    Auster, U.; Magnes, W.; Delva, M.; Valavanoglou, A.; Leitner, S.; Hillenmaier, O.; Strauch, C.; Brown, P.; Whiteside, B.; Bendyk, M.; Hilgers, A.; Kraft, S.; Luntama, J. P.; Seon, J.

    2016-05-01

    Monitoring the solar wind conditions, in particular its magnetic field (the interplanetary magnetic field) ahead of the Earth, is essential for accurate and reliable space weather forecasting. The magnetic condition of the spacecraft itself is a key parameter for the successful performance of the magnetometer onboard. In practice, a negligible spacecraft magnetic field cannot always be achieved, and magnetic sources on the spacecraft interfere with the natural magnetic field measured by the space magnetometer. The presented "ready-to-use" Service Oriented Spacecraft Magnetometer (SOSMAG) is developed for use on any satellite implemented without a magnetic cleanliness programme. It enables detection of the spacecraft field's AC variations on a time scale suitable to distinguish the magnetic field variations relevant to space weather phenomena, such as a sudden increase in the interplanetary field or a southward turning. This is achieved through the use of dual fluxgate magnetometers on a short boom (1 m) and two additional AMR sensors on the spacecraft body, which monitor potential AC disturbers. The measurements of the latter sensors enable an automated correction of the AC signal contributions from the spacecraft in the final magnetic vector. After successful development and testing of the EQM prototype, a flight model (FM) is being built for the Korean satellite GEO-KOMPSAT-2A, with launch foreseen in 2018.

  4. Asymmetric dual-loop feedback to suppress spurious tones and reduce timing jitter in self-mode-locked quantum-dash lasers emitting at 1.55 μm

    Science.gov (United States)

    Asghar, Haroon; McInerney, John G.

    2017-09-01

    We demonstrate an asymmetric dual-loop feedback scheme to suppress the external-cavity side-modes induced in self-mode-locked quantum-dash lasers under conventional single- and dual-loop feedback. In this letter, we achieve optimal suppression of spurious tones by optimizing the length of the second delay line. We observe that asymmetric dual-loop feedback, with a large (~8x) disparity in cavity lengths, eliminates all external-cavity side-modes and produces flat RF spectra close to the main peak, with low timing jitter compared to single-loop feedback. A significant reduction in RF linewidth and reduced timing jitter were also observed as a function of increased second feedback delay time. The experimental results based on this feedback configuration validate predictions of recently published numerical simulations. This asymmetric dual-loop feedback scheme provides a simple, efficient and cost-effective stabilization of side-band-free optoelectronic oscillators based on mode-locked lasers.

  5. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  6. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    Science.gov (United States)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite will have covered a distance of about 14 km on the ground and the earth will have rotated through an angle of 30". Yaw steering is done to compensate for the earth rotation effects, ensuring a first-level registration between the bands. But this will not achieve perfect co-registration because of attitude fluctuations, satellite movement, terrain topography, PSM steering and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of the Liss-4 MX image of Resourcesat-2 in Level 1A. The algorithm uses the principles of photogrammetric collinearity equations. The model employs an orbit trajectory and attitude fitting with polynomials, followed by direct geo-referencing with a global DEM, in which every pixel in the middle band is mapped to a position on the surface of the earth for the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. Accurate calculation of the exterior orientation parameters with…

  7. Problems in Microgravity Fluid Mechanics: G-Jitter Convection

    Science.gov (United States)

    Homsy, G. M.

    2005-01-01

    This is the final report on our NASA grant, Problems in Microgravity Fluid Mechanics NAG3-2513: 12/14/2000 - 11/30/2003, extended through 11/30/2004. This grant was made to Stanford University and then transferred to the University of California at Santa Barbara when the PI relocated there in January 2001. Our main activity has been to conduct both experimental and theoretical studies of instabilities in fluids that are relevant to the microgravity environment, i.e. those that do not involve the action of buoyancy due to a steady gravitational field. Full details of the work accomplished under this grant are given below. Our work has focused on: (i) Theoretical and computational studies of the effect of g-jitter on instabilities of convective states where the convection is driven by forces other than buoyancy (ii) Experimental studies of instabilities during displacements of miscible fluid pairs in tubes, with a focus on the degree to which these mimic those found in immiscible fluids. (iii) Theoretical and experimental studies of the effect of time dependent electrohydrodynamic forces on chaotic advection in drops immersed in a second dielectric liquid. Our objectives are to acquire insight and understanding into microgravity fluid mechanics problems that bear on either fundamental issues or applications in fluid physics. We are interested in the response of fluids to either a fluctuating acceleration environment or to forces other than gravity that cause fluid mixing and convection. We have been active in several general areas.

  8. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143-155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060-1075; in FreeSurfer); and Brain Surface Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed…
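
    Comparisons of this kind are typically scored with an overlap measure such as the Dice coefficient between the automated and the manually stripped brain masks; a minimal sketch:

        import numpy as np

        def dice(mask_a, mask_b):
            # 1.0 = perfect overlap, 0.0 = disjoint masks.
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())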

  9. A computational framework for automation of point defect calculations

    International Nuclear Information System (INIS)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    2017-01-01

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
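
    The listed corrections enter the standard supercell expression for the formation energy of a charged defect; the helper below just states that bookkeeping (a sketch of the textbook formula, not the package's API):

        def defect_formation_energy(e_defect, e_host, sum_n_mu,
                                    charge, e_vbm, e_fermi, e_corr):
            # E_f = E[defect] - E[host] - sum_i n_i*mu_i + q*(E_VBM + E_F) + E_corr,
            # where E_corr bundles the potential-alignment, image-charge and
            # band-filling corrections computed by the framework.
            return (e_defect - e_host - sum_n_mu
                    + charge * (e_vbm + e_fermi) + e_corr)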

  10. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    2008-03-01

    Full Text Available Functional MRI resting state and connectivity studies of the brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from the lungs and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open-source physiological signal processing program, called PhysioNoise, in the Python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.
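
    The core of such waveform processing is peak detection on the cardiac and respiratory traces; a deliberately small sketch (PhysioNoise's algorithms are more robust than this):

        import numpy as np

        def detect_peaks(signal, fs, min_rate_hz):
            # Local maxima above the median, separated by at least one
            # physiological cycle (1 / min_rate_hz seconds) at sampling rate fs.
            min_dist = int(fs / min_rate_hz)
            thresh = np.median(signal)
            peaks, last = [], -min_dist
            for i in range(1, len(signal) - 1):
                if (signal[i] > thresh
                        and signal[i] >= signal[i - 1]
                        and signal[i] > signal[i + 1]
                        and i - last >= min_dist):
                    peaks.append(i)
                    last = i
            return peaks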

  11. Automated system for review of radiotherapy treatment sheets

    International Nuclear Information System (INIS)

    Collado Chamorro, P.; Sanz Freire, C. J.; Vazquez Galinanes, A.; Diaz Pascual, V.; Gomez amez, J.; Martinez Sanchez, S.; Ossola Lentati, G. A.

    2011-01-01

    Many modern radiotherapy departments are beginning to use treatment sheets in electronic form. In our department an automated review system has been developed that checks the following parameters: correct completion of the treatment, number of sessions, and cumulative dose administered. The system likewise verifies treatments delivered in a unit other than the one allocated, and the overwriting of table parameters.

  12. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create…

  13. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...
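
    In the matrix formulation of FMEA, a Boolean matrix maps the failure modes of a subsystem to its effects, and chaining the matrices of interconnected subsystems traces failure propagation through the whole system. A minimal sketch of the formalism:

        import numpy as np

        def propagate_failures(M, active_modes):
            # M[i, j] = True if failure mode j produces effect i.
            # Returns the Boolean vector of effects for the active modes;
            # feeding effects in as the next subsystem's modes chains the analysis.
            return np.any(M & active_modes[np.newaxis, :], axis=1)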

  14. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, Mogens; Borch, Ole; Bagnoli, F.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  15. FAST AUTOMATED DECOUPLING AT RHIC

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.J.

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated coupling correction application, iDQmini, has been developed for RHIC routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program iDQmini provides options for automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (phase-locked loop), the high-frequency Schottky system and the tune meter. It also supports tune and skew quadrupole scans, finds the minimum tune separation, displays the real-time results and interfaces with the RHIC control system. We summarize the capabilities of the coupling correction application iDQmini and discuss the operational protections incorporated in the program.
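
    At its simplest, the decoupling procedure reduces to scanning the two orthogonal skew-quadrupole families for the settings that minimize the measured tune separation. In the sketch below, set_skew and measure_tune_split are hypothetical stand-ins for the control-system calls:

        def decouple(set_skew, measure_tune_split, grid):
            best_split, best_setting = float("inf"), None
            for k1 in grid:
                for k2 in grid:
                    set_skew(k1, k2)              # apply the family settings
                    split = measure_tune_split()  # measured |Q1 - Q2|
                    if split < best_split:
                        best_split, best_setting = split, (k1, k2)
            set_skew(*best_setting)               # restore the best settings
            return best_split, best_setting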

  16. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Havlůj, F.; Hejzlar, J.; Vočka, R.

    2013-01-01

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify, that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces his task to defining fuel pin types, enrichments, assembly maps and operational parameters all through a very nice and user-friendly GUI. The second part in reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable html format. Using this set of tools for reload safety analysis simplifies

  17. Automated mass correction and data interpretation for protein open-access liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Wagner, Craig D; Hall, John T; White, Wendy L; Miller, Luke A D; Williams, Jon D

    2007-02-01

    Characterization of recombinant protein purification fractions and final products by liquid chromatography-mass spectrometry (LC/MS) is requested more frequently each year. A protein open-access (OA) LC/MS system was developed in our laboratory to meet this demand. This paper compares the system that we originally implemented in our facilities in 2003 to the one now in use, and discusses, in more detail, recent enhancements that have improved its robustness, reliability, and data reporting capabilities. The system utilizes instruments equipped with reversed-phase chromatography and an orthogonal-acceleration time-of-flight mass spectrometer fitted with an electrospray source. Sample analysis requests are made using a simple form on a web-enabled laboratory information management system (LIMS). This distributed form is accessible from any intranet-connected company desktop computer. Automated data acquisition and processing are performed using a combination of in-house (OA-Self Service, OA-Monitor, and OA-Analysis Engine) and vendor-supplied programs (AutoLynx and OpenLynx) located on acquisition computers and off-line processing workstations. Analysis results are then reported via the same web-based LIMS. Also presented are solutions to problems not addressed on commercially available, small-molecule OA-LC/MS systems. These include automated transformation of mass-to-charge (m/z) spectra to mass spectra and automated data interpretation that considers minor variants to the protein sequence, such as common post-translational modifications (PTMs). Currently, our protein OA-LC/MS platform runs on five LC/MS instruments located in three separate GlaxoSmithKline R&D sites in the US and UK. To date, more than 8000 protein OA-LC/MS samples have been analyzed. With these user-friendly and highly automated OA systems in place, mass spectrometry plays a key role in assessing the quality of recombinant proteins, either produced at our facilities or bought from external…
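
    The m/z-to-mass transform mentioned can be illustrated by a crude charge-state deconvolution that lets each peak vote for the neutral masses it implies over a range of charges; masses supported by many consecutive charge states rank first. Real maximum-entropy style implementations are far more sophisticated.

        def zero_charge_masses(mz_peaks, charges=range(5, 51), tol=0.5):
            PROTON = 1.00728  # Da
            votes = {}
            for mz in mz_peaks:
                for z in charges:
                    # Neutral mass implied by a peak at mz carrying charge z,
                    # binned to the mass tolerance tol.
                    m = round((mz - PROTON) * z / tol) * tol
                    votes[m] = votes.get(m, 0) + 1
            return sorted(votes.items(), key=lambda kv: -kv[1])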

  18. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images.

    Science.gov (United States)

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C; Reynolds, Rebecca M; Semple, Scott I; Boardman, James P

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.
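
    The detection step described reduces to an exhaustive 3-D sliding-window search for the highest-scoring window; below, a plain sum over a per-voxel score map stands in for the trained HOG-based classifier score:

        import numpy as np

        def best_window(score_map, win):
            # Return the corner and score of the highest-scoring window.
            best, corner = -np.inf, None
            zs, ys, xs = score_map.shape
            for z in range(zs - win[0] + 1):
                for y in range(ys - win[1] + 1):
                    for x in range(xs - win[2] + 1):
                        s = score_map[z:z + win[0],
                                      y:y + win[1],
                                      x:x + win[2]].sum()
                        if s > best:
                            best, corner = s, (z, y, x)
            return corner, best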

  19. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Ahmed Serag

    2017-01-01

    Full Text Available Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.

  20. Trust in automation and meta-cognitive accuracy in NPP operating crews

    Energy Technology Data Exchange (ETDEWEB)

    Skraaning Jr, G.; Miberg Skjerve, A. B. [OECD Halden Reactor Project, PO Box 173, 1751 Halden (Norway)

    2006-07-01

    Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust and underestimated performance under low levels of trust, but showed realistic self-assessment under intermediate levels of trust in automation. The result was interpreted to suggest that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)

  1. Trust in automation and meta-cognitive accuracy in NPP operating crews

    International Nuclear Information System (INIS)

    Skraaning Jr, G.; Miberg Skjerve, A. B.

    2006-01-01

    Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust and underestimated performance under low levels of trust, but showed realistic self-assessment under intermediate levels of trust in automation. The result was interpreted to suggest that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)

  2. Human-centred automation programme: review of experiment related studies

    International Nuclear Information System (INIS)

    Grimstad, Tone; Andresen, Gisle; Skjerve, Ann Britt Miberg

    2000-04-01

    Twenty-three empirical studies concerning automation and performance have been reviewed. The purposes of the review are to support experimental studies in the Human-Centred Automation (HCA) programme and to develop a general theory of HCA. Each study was reviewed with regard to twelve study characteristics: domain, type of study, purpose, definition of automation, variables, theoretical basis, models of operator performance, methods applied, experimental design, outcome, stated scope of results, and strengths and limitations. Seven of the studies involved domain experts; the rest used students as participants. The majority of the articles originated from the aviation domain; only one study, conducted in HAMMLAB, considered process control in power plants. In the experimental studies, the independent variable was level of automation or reliability of automation, while the most common dependent variables were workload, situation awareness, complacency, trust, and criteria of performance, e.g., number of correct responses or response time. Although the studies highlight important aspects of human-automation interaction, it is still unclear how system performance is affected. Nevertheless, the fact that many factors seem to be involved is taken as support for the system-oriented approach of the HCA programme. In conclusion, the review provides valuable input both to the design of experiments and to the development of a general theory. (Author)

  3. Automated quantification of proliferation with automated hot-spot selection in phosphohistone H3/MART1 dual-stained stage I/II melanoma.

    Science.gov (United States)

    Nielsen, Patricia Switten; Riber-Hansen, Rikke; Schmidt, Henrik; Steiniche, Torben

    2016-04-09

    Staging of melanoma includes quantification of a proliferation index, i.e., presumed melanocytic mitoses on H&E stains are counted manually in hot spots. Yet, its reproducibility and prognostic impact increase with immunohistochemical dual staining for phosphohistone H3 (PHH3) and MART1, which also may enable fully automated quantification by image analysis. To ensure manageable workloads and repeatable measurements in modern pathology, the study aimed to present an automated quantification of proliferation with automated hot-spot selection in PHH3/MART1-stained melanomas. Formalin-fixed, paraffin-embedded tissue from 153 consecutive stage I/II melanoma patients was immunohistochemically dual-stained for PHH3 and MART1. Whole slide images were captured, and the number of PHH3/MART1-positive cells was manually and automatically counted in the global tumor area and in a manually and automatically selected hot spot, i.e., a fixed 1-mm² square. Bland-Altman plots and hypothesis tests compared manual and automated procedures, and the Cox proportional hazards model established their prognostic impact. The mean difference between manual and automated global counts was 2.9 cells/mm² (P = 0.0071) and 0.23 cells per hot spot (P = 0.96) for automated counts in manually and automatically selected hot spots. In 77% of cases, manual and automated hot spots overlapped. Fully manual hot-spot counts yielded the highest prognostic performance, with an adjusted hazard ratio of 5.5 (95% CI, 1.3-24, P = 0.024), as opposed to 1.3 (95% CI, 0.61-2.9, P = 0.47) for automated counts with automated hot spots. The automated index and automated hot-spot selection were highly correlated to their manual counterparts, but altogether their prognostic impact was noticeably reduced. Because correct recognition of only one PHH3/MART1-positive cell seems important, extremely high sensitivity and specificity of the algorithm is required for prognostic purposes. Thus, automated
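    For readers curious how an automated hot-spot selection of this kind can be realized, the sketch below finds the fixed 1-mm² square with the highest density of positive-cell coordinates by binning the cells into a coarse grid and box-filtering the counts. The 50-µm bin size is an assumption for illustration; the paper's actual algorithm is not reproduced here.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def select_hot_spot(cell_xy_um, slide_w_um, slide_h_um,
                          bin_um=50, win_um=1000):
          # cell_xy_um: Nx2 array of positive-cell coordinates in micrometers.
          nx = int(np.ceil(slide_w_um / bin_um))
          ny = int(np.ceil(slide_h_um / bin_um))
          counts, _, _ = np.histogram2d(
              cell_xy_um[:, 0], cell_xy_um[:, 1], bins=[nx, ny],
              range=[[0, nx * bin_um], [0, ny * bin_um]])
          k = win_um // bin_um  # 1-mm window expressed in bins
          # Mean over k x k neighborhoods times window area = cells per window.
          win_counts = uniform_filter(counts, size=k, mode='constant') * k * k
          ix, iy = np.unravel_index(np.argmax(win_counts), win_counts.shape)
          # uniform_filter centers the window; convert to a top-left corner.
          x0_um = max(ix - k // 2, 0) * bin_um
          y0_um = max(iy - k // 2, 0) * bin_um
          return x0_um, y0_um, float(win_counts[ix, iy])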

  4. Automated three-dimensional X-ray analysis using a dual-beam FIB

    International Nuclear Information System (INIS)

    Schaffer, Miroslava; Wagner, Julian; Schaffer, Bernhard; Schmied, Mario; Mulders, Hans

    2007-01-01

    We present a fully automated method for three-dimensional (3D) elemental analysis, demonstrated using a ceramic sample of chemistry (Ca)MgTiOₓ. The specimen is serially sectioned by a focused ion beam (FIB) microscope, and energy-dispersive X-ray spectrometry (EDXS) is used for elemental analysis of each cross-section created. A 3D elemental model is reconstructed from the stack of two-dimensional (2D) data. This work concentrates on issues arising from process automation, the large sample volume of approximately 17×17×10 μm³, and the insulating nature of the specimen. A new routine for post-acquisition data correction of different drift effects is demonstrated. Furthermore, it is shown that EDXS data may be erroneous for specimens containing voids, and that back-scattered electron images have to be used to correct for these errors.

  5. Automated extraction of radiation dose information from CT dose report images.

    Science.gov (United States)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving text from CT dose report images. Optical character recognition was adopted to perform text recognition on CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
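    A minimal sketch of such a pipeline is shown below, assuming the Tesseract engine via pytesseract. The regular expression encodes an assumed "CTDIvol ... DLP" line layout and would need adapting to the actual report format; the paper's error-correction rules are not reproduced.

      import csv
      import re
      import pytesseract
      from PIL import Image

      # Assumed per-series line shape: "... <CTDIvol> mGy ... <DLP> mGy-cm".
      DOSE_LINE = re.compile(
          r'(\d+(?:\.\d+)?)\s*mGy\s+(\d+(?:\.\d+)?)\s*mGy[\s-]?cm', re.IGNORECASE)

      def parse_dose_report(image_path):
          # OCR the dose-report image, then pull CTDIvol/DLP pairs line by line.
          text = pytesseract.image_to_string(Image.open(image_path))
          rows = []
          for line in text.splitlines():
              m = DOSE_LINE.search(line)
              if m:
                  rows.append({'CTDIvol_mGy': float(m.group(1)),
                               'DLP_mGy_cm': float(m.group(2))})
          return rows

      def export_to_csv(rows, out_path):
          # Export the parsed values to a spreadsheet-friendly CSV file.
          with open(out_path, 'w', newline='') as f:
              writer = csv.DictWriter(f, fieldnames=['CTDIvol_mGy', 'DLP_mGy_cm'])
              writer.writeheader()
              writer.writerows(rows)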

  6. Analysis of Salient Feature Jitter in the Cochlea for Objective Prediction of Temporally Localized Distortion in Synthesized Speech

    Directory of Open Access Journals (Sweden)

    Wenliang Lu

    2009-01-01

    Temporally localized distortions account for the highest variance in subjective evaluation of coded speech signals (Sen, 2001; Hall, 2001). The ability to discern and decompose perceptually relevant temporally localized coding noise from other types of distortions is both of theoretical importance and a valuable tool for deploying and designing speech synthesis systems. The work described here uses a physiologically motivated cochlear model to provide a tractable analysis of salient feature trajectories as processed by the cochlea. Subsequent statistical analysis shows simple relationships between the jitter of these trajectories and temporal attributes of the Diagnostic Acceptability Measure (DAM).

  7. "Booster" training: evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest.

    Science.gov (United States)

    Sutton, Robert M; Niles, Dana; Meaney, Peter A; Aplenc, Richard; French, Benjamin; Abella, Benjamin S; Lengetti, Evelyn L; Berg, Robert A; Helfaer, Mark A; Nadkarni, Vinay

    2011-05-01

    To investigate the effectiveness of brief bedside "booster" cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Prospective, randomized trial. General pediatric wards at Children's Hospital of Philadelphia. Sixty-nine Basic Life Support-certified hospital-based providers. CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min⁻¹), while only 36% also met the depth target (>38 mm) and thus overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved (instructor-only training: rate 52% to 87% [p < .01] and overall CPR compliance 43% to 78%; automated feedback only: overall CPR compliance 35% to 96%; instructor training combined with automated feedback: rate 48% to 100% and overall CPR compliance 30% to 100%). Before CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests.

  8. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Directory of Open Access Journals (Sweden)

    Jun Yi Wang

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or an undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.

  9. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focused on images; incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in the identification of species include processing of specimen images, extraction of identifying features, and classification into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems of species images. The selection of methods is influenced by many variables such as the level of classification, the amount of training data, and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques for building such systems for biodiversity studies.

  10. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    Science.gov (United States)

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three-dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. We combine unsupervised image

  11. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between the actual end effector trajectory and the desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.

  12. Experiences in Building Python Automation Framework for Verification and Data Collections

    Directory of Open Access Journals (Sweden)

    2010-09-01


    This paper describes our experiences in building a Python automation framework. Specifically, the automation framework is used to support verification and data collection scripts. The scripts control various pieces of test equipment in addition to the device under test (DUT) to characterize a specific performance with a specific configuration, or to evaluate the correctness of the behaviour of the DUT. The specific focus of this paper is on documenting our experiences in building an automation framework using Python: on the purposes, goals and benefits, rather than on a tutorial of how to build such a framework.
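    The framework itself is not reproduced in the paper, but its general shape can be sketched as below: a common instrument interface, a fake driver standing in for real test equipment, and a sweep runner that logs one CSV row per test point. All class and file names here are illustrative assumptions.

      import csv
      from abc import ABC, abstractmethod

      class Instrument(ABC):
          # Common interface wrapped around each piece of test equipment.
          @abstractmethod
          def configure(self, **settings): ...
          @abstractmethod
          def measure(self) -> float: ...

      class FakeSignalGenerator(Instrument):
          # Stand-in driver; a real one would talk GPIB/VISA/serial.
          def configure(self, **settings):
              self.freq_hz = settings.get('freq_hz', 1e6)
          def measure(self):
              return self.freq_hz

      def run_sweep(configure_dut, instrument, points, out_csv):
          # Drive the DUT through a parameter sweep and log each reading.
          with open(out_csv, 'w', newline='') as f:
              writer = csv.writer(f)
              writer.writerow(['setting', 'reading'])
              for p in points:
                  configure_dut(p)                 # put the DUT in a known state
                  instrument.configure(freq_hz=p)  # set up the test equipment
                  writer.writerow([p, instrument.measure()])

      if __name__ == '__main__':
          run_sweep(lambda p: None, FakeSignalGenerator(),
                    [1e6, 2e6, 5e6], 'sweep.csv')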

  13. About development of automation control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Wenger, K. G.; Ivushkin, K. A.; Makarov, V. N.

    2018-05-01

    The shortcomings of current approaches to the development of control automation systems are given, together with ways of improving them: correct formation of the objects of study and optimization; joint synthesis of control objects and control systems; and an increase in the structural diversity of the elements of control systems. Diagrams of control systems whose elements have a purposefully variable structure are presented, along with structures of control algorithms for an object with a purposefully variable structure.

  14. [Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].

    Science.gov (United States)

    Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T

    2003-10-01

    Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated performance and time needed for fusing MRI and SPECT images using a semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using 99mTc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D-T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.

  15. Experiences of Using Automated Assessment in Computer Science Courses

    Directory of Open Access Journals (Sweden)

    John English

    2015-10-01

    In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students type in their own answers). Students were allowed to correct errors based on feedback provided by the system and resubmit their answers. A total of 141 students were surveyed to assess their opinions of this approach, and we analysed their responses. Analysis of the questionnaire showed a low correlation between questions, indicating the statistical independence of the individual questions. As a whole, student feedback on using Checkpoint was very positive, emphasizing the benefits of multiple attempts, impartial marking, and a quick turnaround time for submissions. Many students said that Checkpoint gave them confidence in learning and motivation to practise. Students also said that the detailed feedback that Checkpoint generated when their programs failed helped them understand their mistakes and how to correct them.

  16. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides the most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic.
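    The paper's four-step force-based procedure is not reproduced here, but the sketch below illustrates the shared core idea, namely that the correction field must be spatially smooth: a multiplicative bias field is estimated iteratively in the log domain by low-pass filtering, with the kernel width playing the role of the regularization kernel mentioned above.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_inhomogeneity(img, sigma=30.0, n_iter=4, eps=1e-6):
          # Estimate a smooth multiplicative bias field in the log domain
          # and divide it out. Larger sigma = smoother, less dynamic field.
          log_img = np.log(img.astype(float) + eps)
          bias = np.zeros_like(log_img)
          for _ in range(n_iter):
              residual = log_img - bias
              # Low-pass the zero-mean residual -> update of the bias estimate.
              bias += gaussian_filter(residual - residual.mean(), sigma)
          return np.exp(log_img - bias), np.exp(bias)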

  17. Automated fetal brain segmentation from 2D MRI slices for motion correction.

    Science.gov (United States)

    Keraudren, K; Kuklisova-Murgasova, M; Kyriakopoulou, V; Malamateniou, C; Rutherford, M A; Kainz, B; Hajnal, J V; Rueckert, D

    2014-11-01

    Motion correction is a key element for imaging the fetal brain in utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is frequently performed slice-by-slice using single-shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction methods (SVR). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion-corrected volume of a quality relevant for clinical diagnosis, thus removing the need for manually delineating the contours of the brain before motion correction. Our method automatically generated as a side product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing.
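    The detection stage can be illustrated in a few lines with OpenCV, as below: MSER candidate regions are extracted from a 2D slice and filtered by a gestational-age-dependent size prior. The linear radius prior is a made-up stand-in for the paper's learned model, and the SIFT/Bag-of-Words classification and CRF segmentation are omitted.

      import cv2
      import numpy as np

      def brain_candidates(slice_u8, ga_weeks, px_mm=1.0):
          # slice_u8: a single 2D slice as an 8-bit grayscale image.
          mser = cv2.MSER_create()
          regions, _ = mser.detectRegions(slice_u8)
          expected_r_mm = 2.0 * ga_weeks  # toy size prior, illustration only
          keep = []
          for pts in regions:
              area_mm2 = len(pts) * px_mm ** 2
              r_mm = np.sqrt(area_mm2 / np.pi)  # equivalent-circle radius
              if 0.5 * expected_r_mm < r_mm < 1.5 * expected_r_mm:
                  keep.append(pts)
          return keep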

  18. Smartnotebook: A semi-automated approach to protein sequential NMR resonance assignments

    International Nuclear Information System (INIS)

    Slupsky, Carolyn M.; Boyko, Robert F.; Booth, Valerie K.; Sykes, Brian D.

    2003-01-01

    Complete and accurate NMR spectral assignment is a prerequisite for high-throughput automated structure determination of biological macromolecules. However, completely automated assignment procedures generally encounter difficulties for all but the most ideal data sets. Sources of these problems include difficulty in resolving correlations in crowded spectral regions, as well as complications arising from dynamics, such as weak or missing peaks, or atoms exhibiting more than one peak due to exchange phenomena. Smartnotebook is a semi-automated assignment software package designed to combine the best features of the automated and manual approaches. The software finds and displays potential connections between residues, while the spectroscopist makes decisions on which connection is correct, allowing rapid and robust assignment. In addition, smartnotebook helps the user fit chains of connected residues to the primary sequence of the protein by comparing the experimentally determined chemical shifts with expected shifts derived from a chemical shift database, while providing bookkeeping throughout the assignment procedure.

  19. Automated tuning of the advanced photon source booster synchrotron

    International Nuclear Information System (INIS)

    Biedron, S.G.; Milton, S.V.

    1997-01-01

    The acceleration cycle of the Advanced Photon Source (APS) booster synchrotron is completed within 223 ms and is repeated at 2 Hz. Unless properly corrected, transverse and longitudinal injection errors can lead to inefficient booster performance. In order to simplify daily operation, automated tuning methods have been developed. Through the use of beam position monitor (BPM) readings, transfer line corrector magnets, magnet ramp timing, and empirically determined response functions, the injection process is optimized by correcting the first-turn trajectory to the measured closed orbit. These tuning algorithms and their implementation are described here, along with an evaluation of their performance.
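    The underlying numerical step can be illustrated as a least-squares problem: given an empirically measured response matrix R (BPM readings per unit corrector kick), solve for the corrector settings that move the first-turn trajectory onto the measured closed orbit. All numbers below are synthetic placeholders, not APS machine data.

      import numpy as np

      def corrector_settings(R, first_turn, closed_orbit, rcond=1e-3):
          # Solve R @ theta ~= closed_orbit - first_turn in the least-squares sense.
          delta = closed_orbit - first_turn
          theta, *_ = np.linalg.lstsq(R, delta, rcond=rcond)
          return theta

      rng = np.random.default_rng(0)
      R = rng.normal(size=(40, 8))                 # 40 BPMs x 8 correctors (synthetic)
      first_turn = rng.normal(scale=2.0, size=40)  # injected trajectory readings (mm)
      closed_orbit = np.zeros(40)                  # target orbit at the BPMs
      theta = corrector_settings(R, first_turn, closed_orbit)
      print('residual rms:', np.std(first_turn + R @ theta))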

  20. Automated calculations for massive fermion production with aITALC

    International Nuclear Information System (INIS)

    Lorca, A.; Riemann, T.

    2004-01-01

    The package aITALC has been developed for the automated calculation of radiative corrections to two-fermion production at e⁺e⁻ colliders. The package uses Diana, Qgraf, Form, Fortran, FF, LoopTools, and further unix/linux tools. Numerical results are presented for e⁺e⁻ → e⁺e⁻, μ⁺μ⁻, bs̄, and tc̄.

  1. Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance

    Science.gov (United States)

    Marquez, Jessica J.; Ramirez, Margarita

    2014-01-01

    A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 × 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in levels of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation, and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher level of failures, regardless of level of automation.

  2. Fast Automated Decoupling at RHIC

    CERN Document Server

    Beebe-Wang, Joanne

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated decoupling application has been developed at RHIC for coupling correction during routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program provides options for automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (Phase Lock Loop), the high frequency Schottky system, and the tune meter. It also supplies tune and skew quadrupole scans, finds the minimum tune separation, displays the real-time results, and interfaces with the RHIC control system. We summarize the capabilities of the decoupling application...

  3. Reduction of density-modification bias by β correction

    International Nuclear Information System (INIS)

    Skubák, Pavol; Pannu, Navraj S.

    2011-01-01

    A cross-validation-based method for bias reduction in ‘classical’ iterative density modification of experimental X-ray crystallography maps provides significantly more accurate phase-quality estimates and leads to improved automated model building. Density modification often suffers from an overestimation of phase quality, as seen by escalated figures of merit. A new cross-validation-based method to address this estimation bias by applying a bias-correction parameter ‘β’ to maximum-likelihood phase-combination functions is proposed. In tests on over 100 single-wavelength anomalous diffraction data sets, the method is shown to produce much more reliable figures of merit and improved electron-density maps. Furthermore, significantly better results are obtained in automated model building iterated with phased refinement using the more accurate phase probability parameters from density modification

  4. Altering users' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  5. Dijkstra's interpretation of the approach to solving a problem of program correctness

    Directory of Open Access Journals (Sweden)

    Markoski Branko

    2010-01-01

    Proving program correctness and designing correct programs are two connected theoretical problems of great practical importance. The first is solved within program analysis, and the second within program synthesis, although the two processes often intertwine due to the connection between the analysis and synthesis of programs. Nevertheless, bearing in mind the automated methods of proving correctness and the methods of automatic program synthesis, the difference is easy to tell. This paper presents a denotative interpretation of the programming calculus, explaining semantics by formulae φ and ψ in such a way that they can be used for defining state sets for a program P.

  6. Development of an automated asbestos counting software based on fluorescence microscopy.

    Science.gov (United States)

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
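    One plausible shape for such a counting pipeline is sketched below (this is not the authors' algorithm): threshold the fluorescence image, discard specks, and count connected components that are long and thin enough to qualify as fibers. The length and 3:1 aspect-ratio cut-offs echo standard fiber-counting criteria but are assumptions here.

      import numpy as np
      from skimage import filters, measure, morphology

      def count_fibers(img, min_len_px=10, min_aspect=3.0):
          # Threshold, clean up, and keep elongated connected components.
          binary = img > filters.threshold_otsu(img)
          binary = morphology.remove_small_objects(binary, min_size=5)
          n_fibers = 0
          for region in measure.regionprops(measure.label(binary)):
              long_axis = region.major_axis_length
              short_axis = max(region.minor_axis_length, 1e-9)
              if long_axis >= min_len_px and long_axis / short_axis >= min_aspect:
                  n_fibers += 1
          return n_fibers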

  7. “Booster” training: Evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest

    Science.gov (United States)

    Sutton, Robert M.; Niles, Dana; Meaney, Peter A.; Aplenc, Richard; French, Benjamin; Abella, Benjamin S.; Lengetti, Evelyn L.; Berg, Robert A.; Helfaer, Mark A.; Nadkarni, Vinay

    2013-01-01

    Objective To investigate the effectiveness of brief bedside “booster” cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Design Prospective, randomized trial. Setting General pediatric wards at Children’s Hospital of Philadelphia. Subjects Sixty-nine Basic Life Support–certified hospital-based providers. Intervention CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Measurements and Main Results Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min⁻¹), while only 36% also met the depth target (>38 mm) and thus overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved (instructor-only training: rate 52% to 87% [p < .01] and overall CPR compliance 43% to 78%; automated feedback only: overall CPR compliance 35% to 96%; instructor training combined with automated feedback: rate 48% to 100% and overall CPR compliance 30% to 100%). Before CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests. PMID:20625336

  8. All-fiber interferometer-based repetition-rate stabilization of mode-locked lasers to 10⁻¹⁴-level frequency instability and 1-fs-level jitter over 1 s.

    Science.gov (United States)

    Kwon, Dohyeon; Kim, Jungwon

    2017-12-15

    We report on all-fiber Michelson interferometer-based repetition-rate stabilization of femtosecond mode-locked lasers down to 1.3×10⁻¹⁴ frequency instability and 1.4 fs integrated jitter on a 1 s time scale. The use of a compactly packaged 10 km long single-mode fiber (SMF-28) link as a timing reference allows the scaling of phase noise at a 10 GHz carrier down to -80 dBc/Hz at 1 Hz Fourier frequency. We also tested a 500 m long low-thermal-sensitivity fiber as a reference and found that, compared to standard SMF-28 fiber, it can mitigate the phase noise divergence by ∼10 dB/dec in the 0.1-1 Hz Fourier frequency range. These results suggest that the use of a longer low-thermal-sensitivity fiber may achieve sub-femtosecond integrated timing jitter with sub-10⁻¹⁴-level frequency instability in repetition rate by a simple and robust all-fiber-photonic method.
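    As a back-of-the-envelope companion to the numbers above, the sketch below integrates a single-sideband phase-noise curve L(f) and converts the result to rms timing jitter at the 10 GHz carrier; the sample L(f) points are made up purely to illustrate the shape of the calculation.

      import numpy as np

      def rms_jitter_s(freqs_hz, L_dbc_hz, f_carrier_hz):
          # S_phi(f) = 2 * 10^(L(f)/10); integrate, then convert rad -> seconds.
          s_phi = 2.0 * 10.0 ** (np.asarray(L_dbc_hz, dtype=float) / 10.0)
          var_phi = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(freqs_hz))
          return np.sqrt(var_phi) / (2.0 * np.pi * f_carrier_hz)

      f = np.logspace(0, 4, 200)      # 1 Hz .. 10 kHz offset frequencies
      L = -80.0 - 10.0 * np.log10(f)  # assumed -10 dB/dec slope from -80 dBc/Hz
      print(f"rms jitter: {rms_jitter_s(f, L, 10e9) * 1e15:.1f} fs")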

  9. Automated landmark-guided deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
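    The "local small volume gradient matching" step can be pictured with the toy search below: for a landmark at voxel p in the planning CT, the best-matching offset in the CBCT is found by normalized cross-correlation of gradient-magnitude patches. Patch and search radii are illustrative, and the production algorithm runs this on the GPU rather than in Python loops.

      import numpy as np

      def grad_mag(vol):
          g = np.gradient(vol.astype(float))
          return np.sqrt(sum(gi ** 2 for gi in g))

      def ncc(a, b):
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

      def match_landmark(ct, cbct, p, half=8, search=6):
          # Exhaustive search for the CBCT offset maximizing gradient NCC.
          gct, gcb = grad_mag(ct), grad_mag(cbct)
          z, y, x = p
          ref = gct[z - half:z + half, y - half:y + half, x - half:x + half]
          best, best_off = -np.inf, (0, 0, 0)
          for dz in range(-search, search + 1):
              for dy in range(-search, search + 1):
                  for dx in range(-search, search + 1):
                      cand = gcb[z + dz - half:z + dz + half,
                                 y + dy - half:y + dy + half,
                                 x + dx - half:x + dx + half]
                      if cand.shape != ref.shape:
                          continue  # window fell off the volume edge
                      s = ncc(ref, cand)
                      if s > best:
                          best, best_off = s, (dz, dy, dx)
          return best_off, best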

  10. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. (paper)

  11. Analysis of the thoracic aorta using a semi-automated post processing tool

    International Nuclear Information System (INIS)

    Entezari, Pegah; Kino, Aya; Honarmand, Amir R.; Galizia, Mauricio S.; Yang, Yan; Collins, Jeremy; Yaghmai, Vahid; Carr, James C.

    2013-01-01

    Objective: To evaluate a semi-automated method for Thoracic Aortic Aneurysm (TAA) measurement using ECG-gated Dual Source CT Angiogram (DSCTA). Methods: This retrospective HIPAA-compliant study was approved by our IRB. Transaxial maximum outer-wall-to-outer-wall diameters were studied in fifty patients at seven anatomic locations of the thoracic aorta: annulus, sinus, sinotubular junction (STJ), mid ascending aorta (MAA) at the level of the right pulmonary artery, proximal aortic arch (PROX) immediately proximal to the innominate artery, distal aortic arch (DIST) immediately distal to the left subclavian artery, and descending aorta (DESC) at the level of the diaphragm. Measurements were performed using a manual method and semi-automated software. All readers repeated their measurements. Inter-method, intra-observer and inter-observer agreements were evaluated according to the intraclass correlation coefficient (ICC) and Bland-Altman plots. The number of cases requiring manual contouring or center-line adjustment for the semi-automated method, and the post-processing time for each method, were recorded. Results: The mean difference between the semi-automated and manual methods was less than 1.3 mm at all seven points. Strong inter-method, inter-observer and intra-observer agreement was recorded at all levels (ICC ≥ 0.9). The maximum rate of manual adjustment of center line and contour was at the level of the annulus. The average time for manual post-processing of the aorta was 19 ± 0.3 min, while it took 8.26 ± 2.1 min to do the measurements with the semi-automated tool (Vitrea version 6.0.0.1 software). The center line was edited manually at all levels, with most corrections at the level of the annulus (60%), while the contour was adjusted at all levels, with the highest and lowest numbers of corrections at the levels of the annulus and DESC (75% and 0.07% of the cases), respectively. Conclusion: Compared to the commonly used manual method, semi-automated measurement of vessel dimensions is
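    The agreement statistics used here are easy to reproduce in miniature; the sketch below computes the Bland-Altman mean difference and 95% limits of agreement between two sets of diameter measurements (the data generated below are synthetic, not study values).

      import numpy as np

      def bland_altman(a, b):
          # Mean difference and 95% limits of agreement between two methods.
          diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
          md, sd = diff.mean(), diff.std(ddof=1)
          return md, (md - 1.96 * sd, md + 1.96 * sd)

      rng = np.random.default_rng(1)
      manual = rng.normal(35.0, 5.0, size=50)             # diameters in mm (synthetic)
      semi_auto = manual + rng.normal(0.5, 1.0, size=50)  # small systematic offset
      md, (lo, hi) = bland_altman(semi_auto, manual)
      print(f"mean difference {md:.2f} mm, 95% LoA [{lo:.2f}, {hi:.2f}] mm")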

  12. Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain

    International Nuclear Information System (INIS)

    Fiedler, E.; Platsch, G.; Schwarz, A.; Schmiedehausen, K.; Kuwert, T.; Tomandl, B.; Huk, W.; Rupprecht, Th.; Rahn, N.

    2003-01-01

    Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated performance and time needed for fusing MRI and SPECT images using a semiautomated dedicated software. Patients, material and method: In 32 patients regional cerebral blood flow was measured using 99mTc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D-T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s. It was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use. (orig.)

  13. Photogrammetric approach to automated checking of DTMs

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2005-01-01

    Geometrically accurate digital terrain models (DTMs) are essential for orthoimage production and many other applications. Collecting reference data or visual inspection are reliable but time-consuming and therefore expensive methods for finding errors in DTMs. In this paper, a photogrammetric approach to automated checking and improving of DTMs is evaluated. Corresponding points in two overlapping orthoimages are found by means of area-based matching. Provided the image orientation is correct, discovered displacements correspond to DTM errors. Improvements of the method regarding its...

  14. Space environments and their effects on space automation and robotics

    Science.gov (United States)

    Garrett, Henry B.

    1990-01-01

    Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.

  15. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela eIorio

    2013-12-01

    White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of fluid-attenuated inversion recovery images. Thirty patients with Mild Cognitive Impairment with variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of a WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t tests, and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not statistically differ (t-value = -1.79, DF = 29, p = 0.839 for rater 1; t-value = 1.113, DF = 29, p = 0.2749 for rater 2) and were highly correlated (R² = 0.921, F(1,29) = 155.54, p < 0.001).
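    The histogram-based thresholding step can be sketched as below: probable WMH voxels are those brighter than mean + k·SD of the brain-masked FLAIR intensities, with false positives/negatives left to the manual editing step the authors describe. The value k = 1.5 is an assumption, not the paper's parameter.

      import numpy as np

      def probable_wmh(flair, brain_mask, k=1.5):
          # Threshold the intensity histogram of brain voxels on FLAIR.
          vals = flair[brain_mask > 0]
          thr = vals.mean() + k * vals.std()
          return (flair > thr) & (brain_mask > 0)

      def wmh_load_ml(wmh_mask, voxel_mm3):
          # Volumetric estimation of the WMH load, in milliliters.
          return float(wmh_mask.sum()) * voxel_mm3 / 1000.0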

  16. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  17. Study of automated segmentation of the cerebellum and brainstem on brain MR images

    International Nuclear Information System (INIS)

    Hayashi, Norio; Matsuura, Yukihiro; Sanada, Shigeru; Suzuki, Masayuki

    2005-01-01

    MR imaging is an important method for diagnosing abnormalities of the brain. This paper presents an automated method to segment the cerebellum and brainstem on brain MR images. MR images were obtained from 10 normal subjects (male 4, female 6; 22-75 years old, average 31.0 years) and 15 patients with brain atrophy (male 3, female 12; 62-85 years of age, average 76.0 years). The automated method consisted of the following four steps: segmentation of the brain on original images, detection of an upper plane of the cerebellum using the Hough transform, correction of the plane using three-dimensional (3D) information, and segmentation of the cerebellum and brainstem using the plane. The results indicated that the regions obtained by the automated method were visually similar to those obtained by a manual method. The average rates of coincidence between the automated method and the manual method were 83.0±9.0% in normal subjects and 86.4±3.6% in patients. (author)

  18. Process development for automated solar cell and module production. Task 4: automated array assembly. Quarterly report No. 5

    Energy Technology Data Exchange (ETDEWEB)

    Hagerty, J.J.

    1980-01-31

    Construction of an automated solar cell layup and interconnect system is now complete. This system incorporates a Unimate 2000 B industrial robot with an end effector consisting of a vacuum pick-up and an induction heating coil. The robot interfaces with a smart cell preparation station which correctly orients the cell, applies solder paste, and forms and positions the correct lengths of interconnect lead. The system is controlled and monitored by a TRS-80 microcomputer. The first operational tests of the fully integrated station have been run. These tests proved the soundness of the basic design concept but also pointed to areas in which modifications are necessary. These modifications are nearly complete and the improved parts are being integrated. Development of the controlling computer program is progressing, both to reflect these changes and to reduce operating time.

  19. Using historical wafermap data for automated yield analysis

    International Nuclear Information System (INIS)

    Tobin, K.W.; Karnowski, T.P.; Gleason, S.S.; Jensen, D.; Lakhani, F.

    1999-01-01

    To be productive and profitable in a modern semiconductor fabrication environment, large amounts of manufacturing data must be collected, analyzed, and maintained. This includes data collected from in- and off-line wafer inspection systems and from the process equipment itself. This data is increasingly being used to design new processes, control and maintain tools, and to provide the information needed for rapid yield learning and prediction. Because of increasing device complexity, the amount of data being generated is outstripping the yield engineer's ability to effectively monitor and correct unexpected trends and excursions. The 1997 SIA National Technology Roadmap for Semiconductors highlights a need to address these issues through "automated data reduction algorithms to source defects from multiple data sources and to reduce defect sourcing time." SEMATECH and the Oak Ridge National Laboratory have been developing new strategies and technologies for providing the yield engineer with higher levels of assisted data reduction for the purpose of automated yield analysis. In this article, we will discuss the current state of the art and trends in yield management automation. copyright 1999 American Vacuum Society

  20. Decision Making In A High-Tech World: Automation Bias and Countermeasures

    Science.gov (United States)

    Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    resultant errors. To what extent these effects generalize to performance situations is not yet empirically established. The two studies to be presented represent concurrent efforts, with student and professional pilot samples, to determine the effects of accountability pressures on automation bias and on the verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves as accountable for their strategies of interaction with the automation were significantly more likely to verify its correctness, and committed significantly fewer automation-related errors than those who did not report this perception.

  1. Automated cloud tracking system for the Akatsuki Venus Climate Orbiter data

    Science.gov (United States)

    Ogohara, Kazunori; Kouyama, Toru; Yamamoto, Hiroki; Sato, Naoki; Takagi, Masahiro; Imamura, Takeshi

    2012-02-01

    The Japanese Venus Climate Orbiter, Akatsuki, is cruising toward a renewed approach to Venus, its first Venus orbital insertion (VOI) having failed. At present, we focus on the next opportunity for VOI and the scientific observations that will follow. We have constructed an automated cloud tracking system for processing data obtained by Akatsuki. In this system, correction of the pointing of the satellite is essential for improving the accuracy of the cloud motion vectors derived by cloud tracking. Attitude errors of the satellite are reduced by fitting an ellipse to the limb of the imaged Venus disk. Next, longitude-latitude distributions of brightness (cloud patterns) are calculated to make it easier to derive the cloud motion vectors. The grid points are distributed at regular intervals in the longitude-latitude coordinate system. After applying the solar zenith correction and a high-pass filter to the derived longitude-latitude distributions of brightness, the cloud features are tracked using pairs of images. As a result, we obtain cloud motion vectors on equally spaced longitude-latitude grid points. These entire processes are pipelined and automated, and are applied to all data obtained by combinations of the cameras and filters onboard Akatsuki. Several tests show that the cloud motion vectors are determined with sufficient accuracy. We expect that the longitude-latitude data sets created by the automated cloud tracking system will contribute to Venus meteorology.

  2. Re-verification of a Lip Synchronization Protocol using Robust Reachability

    Directory of Open Access Journals (Sweden)

    Piotr Kordy

    2010-03-01

    Full Text Available The timed automata formalism is an important model for specifying and analysing real-time systems. Robustness is the correctness of the model in the presence of small drifts on clocks or imprecision in testing guards. A symbolic algorithm for the analysis of the robustness of timed automata has been implemented. In this paper, we re-analyse an industrial case study, a lip synchronization protocol, using the new robust reachability algorithm. This protocol is an interesting case because timing aspects are crucial for its correctness. Several versions of the model are considered: with an ideal video stream, with anchored jitter, and with non-anchored jitter.

  3. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Science.gov (United States)

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method in handling training set size, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and in the amount of brain atrophy, which reduced spatial overlap only slightly. Automated segmentation combined with corrective learning thus provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.

  4. Computer-automated tuning of semiconductor double quantum dots into the single-electron regime

    NARCIS (Netherlands)

    Baart, T.A.; Eendebak, P.T.; Reichl, C.; Wegscheider, W.; Vandersypen, L.M.K.

    2016-01-01

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime.

  5. An automated approach to the design of decision tree classifiers

    Science.gov (United States)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
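
    The by-product mentioned above lends itself to a compact illustration. The sketch below, with an invented two-level tree and invented per-node accuracies, computes the global probability of correct classification as a prior-weighted product of per-node correct-decision probabilities along each root-to-leaf path, which holds under the stated independence assumption (using a single accuracy per node is a simplification):

        def global_p_correct(priors, paths, node_accuracy):
            """priors: {class: prior}; paths: {class: nodes on its root-to-leaf
            path}; node_accuracy: {node: P(correct decision at node)}."""
            total = 0.0
            for cls, prior in priors.items():
                p = 1.0
                for node in paths[cls]:
                    p *= node_accuracy[node]  # independence assumption
                total += prior * p
            return total

        priors = {"water": 0.2, "forest": 0.5, "crop": 0.3}
        paths = {"water": ["root"],
                 "forest": ["root", "veg"],
                 "crop": ["root", "veg"]}
        acc = {"root": 0.98, "veg": 0.92}
        print(global_p_correct(priors, paths, acc))  # ~0.917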

  6. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book describes how to build an automation plan and design automation facilities. It covers the automation of chip-producing processes, including the basics of cutting, NC machining, and chip handling; automation units such as drilling, tapping, boring, milling, and slide units; applications of hydraulics, including their characteristics and basic hydraulic circuits; applications of pneumatics; and the kinds and applications of automation in processes such as assembly, transportation, automatic machines, and factory automation.

  7. A Toolchain to Produce Correct-by-Construction OCaml Programs

    OpenAIRE

    Filliâtre , Jean-Christophe; Gondelman , Léon; Paskevich , Andrei; Pereira , Mário; Melo De Sousa , Simão

    2018-01-01

    This paper presents a methodology to get correct-by-construction OCaml programs using the Why3 tool. First, a formal behavioral specification is given in the form of an OCaml module signature extended with type invariants and function contracts, in the spirit of JML. Second, an implementation is written in the programming language of Why3 and then verified with respect to the specification. Finally, an OCaml program is obtained by an automated translation. Our methodology is illustrated with ...

  8. A fully automated system for ultrasonic power measurement and simulation accordingly to IEC 61161:2006

    International Nuclear Information System (INIS)

    Costa-Felix, Rodrigo P B; Alvarenga, Andre V; Hekkenberg, Rob

    2011-01-01

    The worldwide accepted standard for ultrasonic power measurement is IEC 61161, presently in its 2nd edition (2006) but under review. To fulfil its requirements, considering that a radiation force balance is to be used as the ultrasonic power detector, a large amount of raw data (mass measurements) must be collected as a function of time to perform all necessary calculations and corrections. Uncertainty determination demands further calculation effort on raw and processed data. Although this can be done in the old-fashioned way, using spreadsheets and manual data collection, automation software is often used in metrology to provide a virtually error-free environment for data acquisition and for repetitive calculations and corrections. Considering that, a fully automated ultrasonic power measurement system was developed and comprehensively tested. A precision balance with 0.1 mg resolution, model CP224S (Sartorius, Germany), was used as the measuring device, and a calibrated continuous-wave ultrasound check source (Precision Acoustics, UK) was the device under test. A 150 ml container filled with degassed water and containing an absorbing target at the bottom was placed on the balance pan. Besides the automation features, a power measurement simulation routine was implemented, conceived as a teaching tool showing how ultrasonic power emission behaves on a radiation force balance equipped with an absorbing target. The automation software proved to be an effective tool for speeding up ultrasonic power measurement while allowing accurate calculation and attractive graphical presentation of partial and final results.
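
    For orientation, the core relation behind a radiation force balance with a totally absorbing target at normal incidence is P = F·c, where F is the radiation force (the apparent mass change times g) and c is the speed of sound in water. A minimal sketch with illustrative numbers, omitting the additional corrections IEC 61161 prescribes (target imperfection, thermal drift, buoyancy changes, etc.):

        G = 9.81          # m/s^2, standard gravity
        C_WATER = 1482.0  # m/s, speed of sound in degassed water near 20 degC

        def ultrasonic_power_watts(delta_mass_mg):
            """Power from the balance reading change (mg) with the source on."""
            force = delta_mass_mg * 1e-6 * G  # mg -> kg -> N
            return force * C_WATER

        print(ultrasonic_power_watts(6.9))  # ~0.1 W for a 6.9 mg apparent change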

  9. Developing Formal Correctness Properties from Natural Language Requirements

    Science.gov (United States)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of a program to transform natural language specifications into formal notation; specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, which results in a high learning curve for specification languages and associated tools, while increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) the formulation of correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technology transfer potential, and next steps.
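
    As a flavor of the target notation (our own illustrative example, not one from the presentation), the natural-language temporal requirement "every request shall eventually be acknowledged" corresponds to the LTL property

        G (request -> F acknowledged)

    where G is the "globally/always" operator and F is "eventually".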

  10. Automated correction on X-rays calibration using transmission chamber and LabVIEW™

    International Nuclear Information System (INIS)

    Betti, Flavio; Potiens, Maria da Penha Albuquerque

    2009-01-01

    Prolonged exposure times during X-ray calibration procedures at the instrument calibration facilities at IPEN may suffer from efficiency (and therefore intensity) variations of the industrial X-ray generator used. Using a transmission chamber as an online reference chamber during the whole irradiation process is proposed in order to compensate for this error source. Temperature (and pressure) fluctuations may also arise from the performance-limited air conditioning system of the calibration room. As an open ionization chamber, the monitor chamber requires the calculation of a correction factor for the effect of temperature and pressure on air density. Sending and processing data from all related instruments (electrometer, thermometer, and barometer) can be achieved more easily by interfacing them to a host computer running a specially developed algorithm in the LabVIEW™ environment, which not only applies the proper correction factors at runtime but also determines the exact irradiation length needed to reach a desired condition, which can be a time period, a collected charge, or an air kerma, based on a previous calibration of the whole system against a reference chamber traceable to primary standard dosimetry laboratories. When performing such a calibration, two temperature sensors (secondary standard thermistors) are used simultaneously, one for the transmission chamber and one for the reference chamber. As the substitution method is used during actual customer calibrations, the readings from the second thermistor can also be used when desired for further corrections. Use of the LabVIEW™ programming language allowed a shorter development time, and it is also extremely convenient when improvements and modifications are called for. (author)
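
    A minimal sketch of the air-density correction factor routinely applied to open (vented) ionization chambers such as the transmission monitor described above, assuming reference conditions of 20 °C and 101.325 kPa (the reference values used in a given laboratory may differ):

        def k_tp(temp_c, pressure_kpa, t0_c=20.0, p0_kpa=101.325):
            """Temperature-pressure correction for an open ionization chamber:
            the chamber reading is multiplied by this factor."""
            return ((273.15 + temp_c) / (273.15 + t0_c)) * (p0_kpa / pressure_kpa)

        print(k_tp(22.5, 100.8))  # ~1.014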

  11. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  12. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
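
    A toy sketch of the "upsampling" idea, assuming a simple per-direction linear (plane) overlay model fitted by least squares to sparse measurements and then evaluated on a dense layout; the real system relies on the exposure tool's computational modelling platform, and all positions and values below are invented:

        import numpy as np

        def fit_plane(xy, d):
            """Fit d ~ a0 + a1*x + a2*y to sparse overlay measurements."""
            A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
            coef, *_ = np.linalg.lstsq(A, d, rcond=None)
            return coef

        def predict(coef, xy):
            A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
            return A @ coef

        xy_sparse = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5.0]])
        dx_sparse = np.array([1.0, 1.4, 0.8, 1.2, 1.1])  # nm, invented
        coef = fit_plane(xy_sparse, dx_sparse)
        xy_dense = np.array([[x, y] for x in range(0, 11, 2)
                             for y in range(0, 11, 2)], dtype=float)
        dx_dense = predict(coef, xy_dense)  # "upsampled" data for the APC system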

  13. Automated image-matching technique for comparative diagnosis of the liver on CT examination

    International Nuclear Information System (INIS)

    Okumura, Eiichiro; Sanada, Shigeru; Suzuki, Masayuki; Tsushima, Yoshito; Matsui, Osamu

    2005-01-01

    When interpreting contrast-enhanced computed tomography (CT) images of the upper abdomen, radiologists visually select sets of images at the same anatomical positions from two or more CT image series (i.e., non-enhanced and contrast-enhanced CT images at arterial and delayed phases) to depict and characterize any abnormalities. The same process is also necessary to create subtraction images by computer. We have developed an automated image selection system using a template-matching technique that allows the recognition of image sets at the same anatomical position from two CT image series. Using the template-matching technique, we compared several anatomical structures in each CT image at the same anatomical position. As the position of the liver may shift with respiratory movement, not only the shape of the liver but also the gallbladder and other prominent structures included in the CT images were compared to allow appropriate selection of a set of CT images. This novel technique was applied to 11 upper abdominal CT examinations. In CT images with a slice thickness of 7.0 or 7.5 mm, the percentage of image sets selected correctly by the automated procedure was 86.6±15.3% per case. In CT images with a slice thickness of 1.25 mm, the percentages of correct selection of image sets by the automated procedure were 79.4±12.4% (non-enhanced and arterial-phase CT images) and 86.4±10.1% (arterial- and delayed-phase CT images). This automated method is useful for assisting in interpreting CT images and in creating digital subtraction images. (author)
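
    A minimal sketch of the underlying idea, matching each slice of one series to the most similar slice of another series by normalized cross-correlation; it assumes equally sized 2D arrays and omits the multi-structure comparison described above:

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two equally sized 2D arrays."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def match_slices(series1, series2):
            """For each slice of series1, index of the best match in series2."""
            return [int(np.argmax([ncc(s1, s2) for s2 in series2]))
                    for s1 in series1]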

  14. Transmission line transformer for reliable and low-jitter triggering of a railgap switch.

    Science.gov (United States)

    Verma, Rishi; Mishra, Ekansh; Sagar, Karuna; Meena, Manraj; Shyam, Anurag

    2014-09-01

    The performance of a railgap switch relies critically upon multichannel breakdown between the extended electrodes (rails) in order to ensure distributed current transfer along the electrode length and to minimize the switch inductance. The initiation of several simultaneous arc channels along the switch length depends on the gap triggering technique and on the rate at which the electric field changes within the gap. This paper presents the design, construction, and output characteristics of a coaxial-cable-based three-stage transmission line transformer (TLT) that is capable of initiating multichannel breakdown in a high voltage, low inductance railgap switch. In each stage, three identical lengths of URM67 coaxial cable are used in parallel, wound in separate cassettes to enhance the isolation of the transformer output from the input. The cascaded output impedance of the TLT is ~50 Ω. Along with multichannel formation over the complete length of the electrode rails, a significant reduction in jitter (≤2 ns) and conduction delay (≤60 ns) has been observed with the large-amplitude (~80 kV), high dV/dt (~6 kV/ns) pulse produced by the indigenously developed TLT-based trigger generator. The superior performance of the TLT over a conventional pulse transformer for railgap triggering has been compared and demonstrated experimentally.
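
    A rough consistency check of the quoted output impedance (our own back-of-the-envelope arithmetic, assuming URM67 is nominally 50 Ω cable): in a TLT with n stages of m parallel cables of characteristic impedance Z0, the input impedance is Z0/m and the cascaded output impedance is n·Z0/m. With n = m = 3 and Z0 = 50 Ω this gives 3 × 50/3 Ω = 50 Ω, matching the ~50 Ω stated above.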

  15. High-speed atmospheric correction for spectral image processing

    Science.gov (United States)

    Perkins, Timothy; Adler-Golden, Steven; Cappelaere, Patrice; Mandl, Daniel

    2012-06-01

    Land and ocean data product generation from visible-through-shortwave-infrared multispectral and hyperspectral imagery requires atmospheric correction or compensation, that is, the removal of atmospheric absorption and scattering effects that contaminate the measured spectra. We have recently developed a prototype software system for automated, low-latency, high-accuracy atmospheric correction based on a C++-language version of the Spectral Sciences, Inc. FLAASH™ code. In this system, pre-calculated look-up tables replace on-the-fly MODTRAN® radiative transfer calculations, while the portable C++ code enables parallel processing on multicore/multiprocessor computer systems. The initial software has been installed on the Sensor Web at NASA Goddard Space Flight Center, where it is currently atmospherically correcting new data from the EO-1 Hyperion and ALI sensors. Computation time is around 10 s per data cube per processor. Further development will be conducted to implement the new atmospheric correction software on board the upcoming HyspIRI mission's Intelligent Payload Module, where it would generate data products in near-real time for Direct Broadcast to the ground. The rapid turn-around of data products made possible by this software would benefit a broad range of applications in areas of emergency response, environmental monitoring, and national defense.
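
    A toy sketch of the look-up-table idea, in which pre-computed atmospheric quantities indexed by geometry and atmospheric state replace on-the-fly radiative transfer; the two-parameter gain/offset correction and all table values below are illustrative stand-ins, not the FLAASH formulation:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        wv = np.array([0.5, 1.5, 3.0])      # column water vapor, g/cm^2
        sza = np.array([10.0, 30.0, 50.0])  # solar zenith angle, degrees
        gain = np.array([[0.010, 0.011, 0.013],   # invented LUT entries
                         [0.012, 0.013, 0.016],
                         [0.015, 0.017, 0.020]])
        offset = np.array([[1.0, 1.2, 1.5],
                           [1.3, 1.6, 2.0],
                           [1.8, 2.2, 2.8]])
        gain_i = RegularGridInterpolator((wv, sza), gain)
        offset_i = RegularGridInterpolator((wv, sza), offset)

        def reflectance(radiance, wv_val, sza_val):
            """Per-pixel correction; vectorizes over a whole data cube."""
            pt = [[wv_val, sza_val]]
            return gain_i(pt)[0] * (radiance - offset_i(pt)[0])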

  16. Repeatability of an automated Landolt C test, compared with the early treatment of diabetic retinopathy study (ETDRS) chart testing.

    Science.gov (United States)

    Ruamviboonsuk, Paisan; Tiensuwan, Montip; Kunawut, Catleya; Masayaanon, Patcharapim

    2003-10-01

    To evaluate the repeatability of visual acuity scores from an automated test and compare them with the Early Treatment of Diabetic Retinopathy Study (ETDRS) chart. Design: instrument validation study based on a model of repeatability testing in two observations. Methods: a prospective, clinic-based, comparative study. A total of 206 participants without ocular diseases and refractive errors in their right eyes were randomly enrolled into the automated group, in which 107 participants performed the automated test, and the ETDRS group, in which 99 participants read the ETDRS chart. All participants were tested with only their right eyes without correction at 4 meters and returned for the same tests 1 week later. The automated test used Landolt rings as optotypes and was conducted on a low-end personal computer with a 15-inch monitor and a wireless keyboard. The "letter" score was calculated by counting every correct response to an optotype, and the "threshold curve" score was interpreted from the optotype size at the midpoint of a visual acuity threshold curve. The 95% confidence intervals of test-retest visual acuity scores from the automated test are comparable to the ETDRS chart (0.143 compared with 0.125 for letter scores, 0.145 compared with 0.122 for threshold curve scores). The score repeatabilities, calculated from the standard deviations of test-retest, from the automated test are also comparable to the ETDRS chart (0.201 compared with 0.177 for letter scores, 0.206 compared with 0.172 for threshold curve scores). All comparisons demonstrated no statistical difference (P > .05). The automated testing system in this study enables practical measurement of visual acuity with Landolt rings. The system's repeatability, which is comparable to the ETDRS chart, supports its role as an alternative tool for measuring outcomes in new clinical research. Its ability to generate visual acuity threshold curves may also be useful in future clinical research studies.

  17. webPOISONCONTROL: can poison control be automated?

    Science.gov (United States)

    Litovitz, Toby; Benson, Blaine E; Smolinske, Susan

    2016-08-01

    A free webPOISONCONTROL app allows the public to determine the appropriate triage of poison ingestions without calling poison control. If accepted and safe, this alternative expands access to reliable poison control services to those who prefer the Internet over the telephone. This study assesses feasibility, safety, and user-acceptance of automated online triage of asymptomatic, nonsuicidal poison ingestion cases. The user provides substance name, amount, age, and weight in an automated online tool or downloadable app, and is given a specific triage recommendation to stay home, go to the emergency department, or call poison control for further guidance. Safety was determined by assessing outcomes of consecutive home-triaged cases with follow-up and by confirming the correct application of algorithms. Case completion times and user perceptions of speed and ease of use were measures of user-acceptance. Of 9256 cases, 73.3% were triaged to home, 2.1% to an emergency department, and 24.5% directed to call poison control. Children younger than 6 years were involved in 75.2% of cases. Automated follow-up was done in 31.2% of home-triaged cases; 82.3% of these had no effect. No major or fatal outcomes were reported. More than 91% of survey respondents found the tool quick and easy to use. Median case completion time was 4.1 minutes. webPOISONCONTROL augments traditional poison control services by providing automated, accurate online access to case-specific triage and first aid guidance for poison ingestions. It is safe, quick, and easy to use. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Early Validation of Automation Plant Control Software using Simulation Based on Assumption Modeling and Validation Use Cases

    Directory of Open Access Journals (Sweden)

    Veronika Brandstetter

    2015-10-01

    Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality, and with minimal strain on resources. A key driver in conducting these processes is the automation plant's control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes and from the physical plant design. Currently, validation of the control software often starts late in the engineering process, in many cases once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on an actual plant project from the automation industry and present its technical implementation.
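
    A minimal illustration of the principle (ours, not the authors' tool): close the loop between candidate control logic and a coarse simulation that encodes assumptions about the physical plant, and check a process requirement on every step. The on/off controller, tank dynamics, and required level band below are all invented:

        def controller(level):
            """Toy on/off inflow controller for a tank."""
            return 1.0 if level < 0.4 else 0.0

        def simulate(steps=1000, dt=0.1):
            level, outflow_coeff = 0.5, 0.8
            for _ in range(steps):
                inflow = controller(level) * 1.0
                level += dt * (inflow - outflow_coeff * level)  # assumed plant model
                assert 0.1 <= level <= 0.9, "process requirement violated"
            return level

        simulate()  # passes: the control logic keeps the level in its band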

  19. Semi-automated Robust Quantification of Lesions (SRQL) Toolbox

    Directory of Open Access Journals (Sweden)

    Kaori L Ito

    2017-05-01

    Full Text Available Quantifying lesions in a reliable manner is fundamental for studying the effects of neuroanatomical changes related to recovery in the post-stroke brain. However, the wide variability in lesion characteristics across individuals makes manual lesion segmentation a challenging and often subjective process. This often makes it difficult to combine stroke lesion data across multiple research sites, due to subjective differences in how lesions may be defined. Thus, we developed the Semi-automated Robust Quantification of Lesions (SRQL; https://github.com/npnl/SRQL; DOI: 10.5281/zenodo.557114) Toolbox that performs several analysis steps: (1) a white matter intensity correction that removes healthy white matter voxels from the lesion mask, thereby making lesions slightly more robust to subjective errors; (2) an automated report of descriptive statistics on lesions for simplified comparison between or across groups; and (3) an option to perform analyses in both native and standard space to facilitate analyses in either space. Here, we describe the methods implemented in the toolbox.

  20. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time software- and hardware-oriented house automation research project, designed and implemented to automate the house's electricity and to provide a security system that detects the presence of unexpected behavior.

  1. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method of the automation rate that takes the advantages of automation as the estimation measures. • We conduct experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decrease in working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Such expressions can state the inclusion proportion of automation, but not the degree of enhancement of human performance. Indeed, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures in the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another a reduction in human cognitive task loads. The system automation rate and the cognitive automation rate were proposed as quantitative measures based on these benefits. To quantify the required human cognitive task loads, and thus the cognitive automation rate, Conant's information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting experiments.

  2. A comparison of semi-automated volumetric vs linear measurement of small vestibular schwannomas.

    Science.gov (United States)

    MacKeith, Samuel; Das, Tilak; Graves, Martin; Patterson, Andrew; Donnelly, Neil; Mannion, Richard; Axon, Patrick; Tysome, James

    2018-04-01

    Accurate and precise measurement of vestibular schwannoma (VS) size is key to clinical management decisions. Linear measurements are used in routine clinical practice but are prone to measurement error. This study aims to compare a semi-automated volume segmentation tool against the standard linear method for measuring small VS. This study also examines whether oblique tumour orientation can contribute to linear measurement error. Experimental comparison of observer agreement using two measurement techniques, conducted in a tertiary skull base unit. Twenty-four patients with unilateral sporadic small VS were included. Measurements comprised semi-automated volumetric segmentation and maximum linear dimension, the latter following reformatting to correct for the oblique orientation of the VS. Intra-observer ICC was higher for semi-automated volumetric than for linear measurements, 0.998 (95% CI 0.994-0.999) vs 0.936 (95% CI 0.856-0.972). Inter-observer ICC was likewise higher for volumetric than for linear measurements, 0.989 (95% CI 0.975-0.995) vs 0.946 (95% CI 0.880-0.976), p = 0.0045. The intra-observer %SDD was similar for volumetric and linear measurements, 9.9% vs 11.8%. However, the inter-observer %SDD was greater for volumetric than linear measurements, 20.1% vs 10.6%. Following oblique reformatting to correct tumour angulation, the mean increase in size was 1.14 mm (p = 0.04). Semi-automated volumetric measurements are more repeatable than linear measurements when measuring small VS and should be considered for use in clinical practice. Oblique orientation of VS may contribute to linear measurement error.

  3. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects interacting with an actor portraying a doctor. The actor interviewed the subjects following one of two scripted scenarios: in one scenario the actor showed minimal engagement with the subject; the second included active listening by the doctor and attentiveness to the subject. We analyze the cross-correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
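
    A minimal sketch of the kinetic-energy analysis described above: per-person motion energy estimated by frame differencing within each person's image region, then cross-correlated across a range of lags. The region masks, sampling rate, and normalization are simplifying assumptions:

        import numpy as np

        def motion_energy(frames, region):
            """frames: (t, h, w) grayscale video; region: boolean mask (h, w)."""
            diffs = np.diff(frames.astype(float), axis=0)
            return (diffs[:, region] ** 2).sum(axis=1)  # kinetic-energy proxy

        def cross_corr(e1, e2, max_lag=25):
            """Normalized cross-correlation of two energy series over lags."""
            e1 = (e1 - e1.mean()) / (e1.std() + 1e-12)
            e2 = (e2 - e2.mean()) / (e2.std() + 1e-12)
            return {lag: float(np.mean(
                        e1[max(0, lag):len(e1) + min(0, lag)] *
                        e2[max(0, -lag):len(e2) + min(0, -lag)]))
                    for lag in range(-max_lag, max_lag + 1)}

    The lag at which the correlation peaks indicates which person's motion tends to lead, i.e., the asymmetry of followership discussed above.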

  4. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  5. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

    Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard, and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to performing automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must perform. Therefore, the evaluation of the correct navigation of a web application amounts to an assessment of the specified functional requirements. The proposed method performs the automation on four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
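
    A minimal sketch of one navigation-driven test in this spirit, using Selenium WebDriver for Python; the URL, element identifiers, and expected confirmation text are hypothetical, not taken from the Automatic Testing Platform:

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()
        try:
            driver.get("http://example.com/invoices")  # hypothetical system
            driver.find_element(By.ID, "new-invoice").click()  # navigation step
            driver.find_element(By.NAME, "amount").send_keys("100.00")
            driver.find_element(By.ID, "save").click()
            # correct navigation doubles as the functional-requirement check
            assert "Invoice saved" in driver.page_source
        finally:
            driver.quit()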

  6. A precise technique for manufacturing correction coil

    International Nuclear Information System (INIS)

    Schieber, L.

    1992-01-01

    An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed, while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC

  7. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    Full Text Available We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
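
    A minimal sketch of the alignment idea, assuming the affine time parameters are found by a grid search against a reference signal (the actual CISA estimation procedure is more involved); signals are modeled as x(t) ≈ s(a·t + b):

        import numpy as np

        def align_affine(x, ref, scales, shifts):
            """Resample x under t -> a*t + b; return the best match to ref."""
            n = len(ref)
            t = np.arange(n)
            best, best_err = x, np.inf
            for a in scales:
                for b in shifts:
                    warped = np.interp(a * t + b, t, x, left=x[0], right=x[-1])
                    err = float(np.sum((warped - ref) ** 2))
                    if err < best_err:
                        best, best_err = warped, err
            return best

    A mean shape can then be taken as the pointwise average of the aligned signals, which minimizes the sum of squared shape distances to the set, and the resulting distances can feed a k-means-style clustering as in the P-wave application.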

  8. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    Science.gov (United States)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level threshold cannot separate pleural effusion from the thoracic wall or mediastinum correctly, because the density of pleural effusion in CT images is similar to that of the thoracic wall or mediastinum. We have therefore developed an automated extraction method for pleural effusion based on extracting the lung area together with the pleural effusion. Our method uses a lung template obtained from a normal lung for the segmentation of lungs with pleural effusions. The registration process consists of two steps. The first step is a global matching between normal and abnormal lungs of organs such as bronchi, bones (ribs, sternum, and vertebrae), and the upper surface of the liver, which are extracted using a region-growing algorithm. The second step is a local matching between the normal and abnormal lungs deformed by the parameters obtained from the global matching. Finally, we segment a lung with pleural effusion using the template deformed by the two parameter sets obtained from the global and local matching. We compared our method with a conventional gray-level threshold extraction method and two published methods. The extraction rates of pleural effusions obtained with our method were much higher than those obtained with the other methods. Automated extraction of pleural effusion based on extracting the lung area with pleural effusion is promising for the diagnosis of pulmonary diseases by providing a quantitative volume of accumulating pleural effusion.

  9. A facile and rapid automated synthesis of 3'-deoxy-3'-[18F]fluorothymidine

    International Nuclear Information System (INIS)

    Tang Ganghua; Tang Xiaolan; Wen Fuhua; Wang Mingfang; Li Baoyuan

    2010-01-01

    Aim: To develop a simplified and fully automated synthesis procedure for 3'-deoxy-3'-[ 18 F]fluorothymidine ([ 18 F]FLT) using the PET-MF-2V-IT-I synthesis module. Methods: Synthesis of [ 18 F]FLT was performed on the PET-MF-2V-IT-I synthesis module by a one-pot, two-step reaction procedure: nucleophilic fluorination of 3-N-t-butoxycarbonyl-1-[5'-O-(4,4'-dimethoxytriphenylmethyl)-2'-deoxy-3'-O-(4-nitrobenzenesulfonyl)-β-D-threopentofuranosyl]thymine (15 mg) as the precursor with [ 18 F]fluoride, followed by hydrolysis of the protecting group with 1.0 M HCl in the same reaction vessel and purification with SEP PAK cartridges instead of an HPLC system. Results: The automated synthesis of [ 18 F]FLT with SEP PAK purification gave a corrected radiochemical yield of 23.2±2.6% (n=6; uncorrected yield: 16-22%) and a radiochemical purity of >97% within a total synthesis time of 35 min. Conclusion: The fully automated one-pot synthesis procedure with SEP PAK purification can be applied to the fully automated synthesis of [ 18 F]FLT using a commercial [ 18 F]FDG synthesis module.

  10. Formal verification of automated teller machine systems using SPIN

    Science.gov (United States)

    Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam

    2017-08-01

    Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of an Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in the Process Meta Language (PROMELA), with specifications given as LTL formulas.
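
    As a flavor of such a property (an illustrative example of ours, not necessarily one verified in the paper), the requirement "whenever a card is inserted, cash is eventually dispensed or the card is returned" could be written for SPIN as

        [] (card_inserted -> <> (cash_dispensed || card_returned))

    using SPIN's [] ("always") and <> ("eventually") LTL operators.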

  11. Automated one-loop calculations with GOSAM

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata

    2011-11-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  12. Automated one-loop calculations with GOSAM

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]; Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany)]; Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany)]; Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology]; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica]; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center]; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]

    2011-11-15

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  13. Automated MRI segmentation for individualized modeling of current flow in the human head.

    Science.gov (United States)

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible.

  14. A state variable approach to the BESSY II local beam-position-feedback system

    International Nuclear Information System (INIS)

    Gilpatrick, J.D.; Khan, S.; Kraemer, D.

    1996-01-01

    At the BESSY II facility, stability of the electron beam position and angle near insertion devices (IDs) is of utmost importance. Disturbances due to ground motion could result in unwanted broad-bandwidth beam jitter, which decreases the electron (and resultant photon) beam's effective brightness. Therefore, feedback techniques must be used. Operating over a frequency range of up to 100 Hz, a local feedback system will correct these beam-trajectory errors using the four bumps around the IDs. This paper reviews how the state-variable feedback approach can be applied to real-time correction of these beam position and angle errors. A frequency-domain solution showing beam jitter reduction is presented. Finally, this paper reports results of a beam-feedback test at BESSY I
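
    A minimal sketch of discrete-time state-variable feedback on a two-component beam state x = [position, angle]; the transfer matrix, corrector response, and gains below are illustrative stand-ins, not BESSY II values:

        import numpy as np

        A = np.array([[1.0, 1.0],   # assumed pulse-to-pulse state transfer
                      [0.0, 1.0]])
        B = np.array([[0.5],        # assumed response to one corrector kick
                      [1.0]])
        K = np.array([[0.3, 0.8]])  # feedback gains from some design method

        x = np.array([[1.0], [0.2]])  # initial position/angle jitter
        for _ in range(20):
            u = -K @ x                # state feedback law
            x = A @ x + B @ u         # closed-loop update
        print(x.ravel())              # residual shrinks since |eig(A - B@K)| < 1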

  15. Automated SmartPrep tracker positioning in liver MRI scans

    International Nuclear Information System (INIS)

    Goto, Takao; Kabasawa, Hiroyuki

    2013-01-01

    This paper presents a new method for automated SmartPrep tracker positioning in liver MRI scans. SmartPrep is used to monitor the contrast bolus signal in order to detect the arrival time of the bolus. Accurately placing the tracker in the aorta while viewing three planar scout images is a difficult task for the operator and is an important problem from the workflow standpoint. An automated SmartPrep tracker would therefore help to improve workflow in liver MRI scans. In our proposed method, the aorta is detected using AdaBoost (a machine learning technique) by searching around the cerebrospinal fluid (CSF) in the spinal cord. Analysis of scout scan images showed that our detection method functioned properly for a variety of axial MR images without intensity correction. A total of 234 images reconstructed from the datasets of 64 volunteers were analyzed, and the results showed that the detection error for the aorta was approximately 3 mm. (author)

  16. Automated Execution and Tracking of the LHC Commissioning Tests

    CERN Document Server

    Fuchsberger, K; Galetzka, M; Gorbonosov, R; Pojer, M; Solfaroli Camillocci, M; Zerlauth, M

    2012-01-01

    To ensure correct operation and prevent system failures, which in the worst case can lead to equipment damage, all critical systems in the Large Hadron Collider (LHC), among them the superconducting circuits, have to be tested thoroughly during dedicated commissioning phases after each intervention. In view of the around 7,000 individual tests to be performed each year after a Christmas stop, a lot of effort was already put into the automation of these tests at the beginning of LHC hardware commissioning in 2005, to assure the dependable execution and analysis of these tests. To further increase productivity during the commissioning campaigns and to enforce a more consistent workflow, the development of a dedicated testing framework was launched. This new framework is designed to schedule and track the automated tests for all systems of the LHC and will also be extendable, e.g., to beam commissioning tests. This is achieved by re-using different, already existing execution frameworks. In this paper, we ...

  17. Space station automation and robotics study. Operator-systems interface

    Science.gov (United States)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  18. An automated instrument for controlled-potential coulometry: System documentation

    Energy Technology Data Exchange (ETDEWEB)

    Holland, M K; Cordaro, J V

    1988-06-01

    An automated controlled-potential coulometer has been developed at the Savannah River Plant for the determination of plutonium. Two such coulometers have been assembled, evaluated, and applied. The software is based upon the methodology used at the Savannah River Plant; however, the system is applicable, with minimal software modifications, to any of the methodologies used throughout the nuclear industry. These state-of-the-art coulometers feature electrical calibration of the integration system, background current corrections, and control-potential adjustment capabilities. Measurement precision within 0.1% has been demonstrated. The systems have also been successfully applied to the determination of pure neptunium solutions. The design and documentation of the automated instrument are described herein. Each individual module's operation, wiring layout, and alignment are described, as are the interconnection of the modules and system calibration. A complete set of system prints and a list of associated parts are included. 9 refs., 10 figs., 6 tabs.
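
    For orientation, the chemistry-independent core of controlled-potential coulometry is Faraday's law applied to the background-corrected integrated current. A minimal sketch with invented readings; n = 1 corresponds to a one-electron couple such as Pu(III)/Pu(IV):

        import numpy as np

        F = 96485.332  # C/mol, Faraday constant
        M_PU = 239.05  # g/mol, assumed molar mass (Pu-239)

        def analyte_mass(current_a, dt_s, background_a, n=1, molar_mass=M_PU):
            """Mass (g) from sampled cell current (A) at interval dt_s (s),
            after subtracting a constant background current."""
            charge = float(np.sum(np.asarray(current_a) - background_a)) * dt_s
            return charge * molar_mass / (n * F)

        t = np.arange(0, 300, 0.1)            # 300 s electrolysis at 10 Hz
        i = 0.010 * np.exp(-t / 60.0) + 5e-6  # decaying current + background
        print(analyte_mass(i, 0.1, 5e-6))     # ~1.5e-3 g of analyte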

  19. Automation bias and verification complexity: a systematic review.

    Science.gov (United States)

    Lyell, David; Coiera, Enrico

    2017-03-01

    While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears to not be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. Development of the RSAC Automation System for Reload Core of WH NPP

    International Nuclear Information System (INIS)

    Choi, Yu Sun; Bae, Sung Man; Koh, Byung Marn; Hong, Sun Kwan

    2006-01-01

    The nuclear design for a reload core of a Westinghouse nuclear power plant consists of 'Reload Core Model Search', 'Safety Analysis (RSAC)', and 'NDR (Nuclear Design Report) and OCAP (Operational Core Analysis Package) Generation' phases. Since scores of calculations for various accidents are required to confirm that the safety analysis assumptions are valid, the Safety Analysis (RSAC) is the most important and the most time- and effort-consuming phase of the reload core design sequence. The Safety Analysis Automation System supports the core designer by automating the safety analysis calculations of the 'Safety Analysis' phase (about 20 calculations). More than 10 kinds of codes, such as APA (ALPHA/PHOENIX/ANC), APOLLO, VENUS, PHIRE, XEFIT, and INCORE, are used for the safety analysis calculations. The Westinghouse code system needs numerous inputs and outputs, so the possibility of human error during the safety analysis calculations cannot be ignored. To remove these inefficiencies, all input files for the safety analysis calculations are automatically generated and executed by this Safety Analysis Automation System. All calculation notes are generated, and the calculation results are summarized in the RSAC (Reload Safety Analysis Checklist) by this system. Therefore, the Safety Analysis Automation System helps the reload core designer to perform the safety analysis of a reload core model promptly and correctly.

  1. Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.

    Science.gov (United States)

    Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella

    2011-08-01

    We present a method for calibrating low-cost digital dermoscopes that corrects for color and inconsistent lighting and also corrects for chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration can be reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can be quickly substantially mitigated with a suitable calibration procedure. © 2011 John Wiley & Sons A/S.
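
    A minimal sketch of the radial-model correction, warping the red and blue channels toward the green channel as reference (our assumption) with a second-order radial scale factor; the coefficients, image center, and nearest-neighbor resampling are simplifying stand-ins, and real coefficients would come from the calibration step:

        import numpy as np

        def radial_warp(channel, k1, k2):
            """Resample one channel under r' = r * (1 + k1*r + k2*r^2)."""
            h, w = channel.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            y, x = np.mgrid[0:h, 0:w]
            r = np.hypot(y - cy, x - cx)
            scale = 1.0 + k1 * r + k2 * r ** 2
            ys = np.clip(cy + (y - cy) * scale, 0, h - 1).astype(int)
            xs = np.clip(cx + (x - cx) * scale, 0, w - 1).astype(int)
            return channel[ys, xs]  # nearest-neighbor lookup

        def correct_chromatic_aberration(rgb, kr=(-1e-5, 0.0), kb=(1e-5, 0.0)):
            out = rgb.copy()
            out[..., 0] = radial_warp(rgb[..., 0], *kr)  # red toward green
            out[..., 2] = radial_warp(rgb[..., 2], *kb)  # blue toward green
            return out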

  2. A New Tool for Automated Data Collection and Complete On-site Flux Data Processing for Eddy Covariance Measurements

    Science.gov (United States)

    Begashaw, I. G.; Kathilankal, J. C.; Li, J.; Beaty, K.; Ediger, K.; Forgione, A.; Fratini, G.; Johnson, D.; Velgersdyk, M.; Hupp, J. R.; Xu, L.; Burba, G. G.

    2014-12-01

    The eddy covariance method is widely used for direct measurements of the turbulent exchange of gases and energy between the surface and the atmosphere. In the past, raw data were collected in the field and then processed back in the laboratory to achieve fully corrected, publication-ready flux results. This post-processing consumed a significant amount of time and resources and precluded researchers from accessing near-real-time final flux results. A new automated measurement system with novel hardware and software designs was developed, tested, and deployed starting in late 2013. The major advancements of this automated flux system include: (1) logging high-frequency three-dimensional wind speeds and multiple gas densities (CO2, H2O and CH4), low-frequency meteorological data, and site metadata simultaneously through a specially designed file format; (2) conducting fully corrected, real-time, on-site flux computations using conventional as well as user-specified methods, by implementing EddyPro Software on a small low-power microprocessor; and (3) providing precision clock control and coordinate information for data synchronization and inter-site data comparison by incorporating GPS and the Precision Time Protocol. Along with these innovations, a data management server application was also developed to chart fully corrected real-time fluxes for remote system monitoring, to send e-mail alerts, and to automate data QA/QC, transfer, and archiving at individual stations or on a network level. The combination of these functions is designed to save a substantial amount of time and cost in managing a research site by eliminating post-field data processing, reducing user errors, and providing real-time access to fully corrected flux results. The design, functionality, and test results of this new eddy covariance measurement tool will be presented.

  3. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van ' t; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high-resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an accurate, automated 3D registration method and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, the choice of the fixed image, different types of the mutual information image similarity metric, and transformation models, including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and
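
    A minimal sketch of the kind of mutual-information-driven rigid registration evaluated in this study is shown below, using the open-source SimpleITK library. The metric, optimizer and transform settings here are illustrative defaults, not the configuration the authors ultimately selected.

    ```python
    import SimpleITK as sitk

    def register_sequences(fixed_path, moving_path):
        """Rigidly align one vessel wall sequence to another by maximizing
        Mattes mutual information; parameter values are illustrative."""
        fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
        moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))

        transform = reg.Execute(fixed, moving)
        resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
        return transform, resampled
    ```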

  4. Automated registration of multispectral MR vessel wall images of the carotid artery

    International Nuclear Information System (INIS)

    Klooster, R. van 't; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der; Klein, S.; Kwee, R. M.; Kooi, M. E.

    2013-01-01

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high-resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an accurate, automated 3D registration method and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, the choice of the fixed image, different types of the mutual information image similarity metric, and transformation models, including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  5. Correction factor for individuals with accommodative capacity based on automated refractor

    Directory of Open Access Journals (Sweden)

    Rodrigo Ueno Takahagi

    2009-12-01

    PURPOSE: To determine a correction factor for the evaluation of refractive errors without the use of cycloplegia. METHODS: 623 patients (1,246 eyes) of both sexes, aged between 3 and 40 years, were studied. Static and dynamic refractometry were obtained using the automated refractor Shin-Nippon Accuref-K 9001. Cycloplegia was induced by instilling one drop of 1% cyclopentolate eye drops, with static refractometry performed 30 minutes later. The data were statistically analyzed using linear regression and multiple regression models of the dioptric value with and without cycloplegia as a function of age. RESULTS: The correlation between dioptric values without and with cycloplegia for the astigmatic error ranged from 81.52% to 92.27%. For the spherical dioptric value the correlation was lower (53.57% to 87.78%), and the same was observed for the astigmatism axis (28.86% to 58.80%). The multiple regression model as a function of age showed a higher multiple coefficient of determination for myopia (86.38%) and astigmatism (79.79%); the lowest coefficient was observed for the astigmatism axis (17.70%). CONCLUSION: Evaluating refractive errors with and without cycloplegia, a high correlation was observed for cylindrical ametropias. Mathematical equations were developed as correction factors for the refractometry of patients with cylindrical and spherical ametropias measured without cycloplegia.
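
    The correction factor idea reduces to regressing the cycloplegic (true) refraction on the non-cycloplegic reading, with age as a covariate. A minimal NumPy sketch with invented example values follows; the published equations and coefficients are not reproduced here.

    ```python
    import numpy as np

    def fit_correction_model(noncyclo, cyclo, age):
        """Fit cyclo ~ b0 + b1*noncyclo + b2*age by ordinary least squares
        and return the coefficients (b0, b1, b2)."""
        X = np.column_stack([np.ones_like(noncyclo), noncyclo, age])
        coef, *_ = np.linalg.lstsq(X, cyclo, rcond=None)
        return coef

    def apply_correction(coef, noncyclo, age):
        """Predict the cycloplegic refraction from a non-cycloplegic reading."""
        return coef[0] + coef[1] * noncyclo + coef[2] * age

    # invented spherical values (diopters) and ages, for illustration only
    noncyclo = np.array([-1.25, 0.50, 2.00, -3.00])
    cyclo    = np.array([-1.00, 1.00, 2.75, -2.75])
    age      = np.array([8.0, 12.0, 25.0, 35.0])
    coef = fit_correction_model(noncyclo, cyclo, age)
    print(apply_correction(coef, np.array([0.75]), np.array([10.0])))
    ```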

  6. Note: Automated optical focusing on encapsulated devices for scanning light stimulation systems

    International Nuclear Information System (INIS)

    Bitzer, L. A.; Benson, N.; Schmechel, R.

    2014-01-01

    Recently, a scanning light stimulation system with automated, adaptive focus correction during the measurement was introduced. Here, its application to encapsulated devices is discussed. This includes the changes an encapsulating optical medium introduces to the focusing process as well as to the subsequent light stimulation measurement. Further, the focusing method is modified to compensate for the influence of refraction and to maintain a minimum beam diameter on the sample surface.

  7. Semi-automated Robust Quantification of Lesions (SRQL) Toolbox

    Directory of Open Access Journals (Sweden)

    Kaori Ito

    2017-02-01

    Quantifying lesions in a robust manner is fundamental for studying the effects of neuroanatomical changes in the post-stroke brain on recovery. However, the wide variability in lesion characteristics across individuals makes manual lesion segmentation a challenging and often subjective process. This makes it difficult to combine stroke lesion data across multiple research sites, owing to subjective differences in how lesions may be defined. We developed the Semi-automated Robust Quantification of Lesions (SRQL; https://github.com/npnl/SRQL; DOI: 10.5281/zenodo.267213) Toolbox, which performs several analysis steps: (1) a white matter intensity correction that removes healthy white matter voxels from the lesion mask, thereby making lesion masks more robust to subjective errors; (2) an automated report of descriptive statistics on lesions for simplified comparison between or across groups; and (3) an option to perform analyses in both native and standard space, to facilitate analyses in either space or comparisons between spaces. Here, we describe the methods implemented in the toolbox and demonstrate the outputs of the SRQL toolbox.
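
    The white matter intensity correction in step (1) can be pictured as dropping lesion-mask voxels whose intensity looks like healthy white matter. A hypothetical sketch follows; the toolbox's actual thresholding rule may differ.

    ```python
    import numpy as np

    def wm_intensity_correction(t1, lesion_mask, wm_mask, z_thresh=-1.0):
        """Remove voxels from the lesion mask whose T1 intensity falls in
        the healthy white matter range. t1: 3-D intensity array;
        lesion_mask/wm_mask: boolean arrays; z_thresh is an illustrative
        cutoff in healthy-WM standard deviations."""
        wm_vals = t1[wm_mask & ~lesion_mask]       # healthy WM intensities
        mu, sigma = wm_vals.mean(), wm_vals.std()
        z = (t1 - mu) / sigma
        # keep only lesion voxels clearly darker than healthy white matter
        return lesion_mask & (z < z_thresh)
    ```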

  8. Automized squark-neutralino production to next-to-leading order

    International Nuclear Information System (INIS)

    Binoth, Thomas; Wigmore, Ioan; Netto, Dorival Goncalves; Lopez-Val, David; Plehn, Tilman; Mawatari, Kentarou

    2011-01-01

    The production of one hard jet in association with missing transverse energy is a major LHC search channel motivated by many scenarios for physics beyond the standard model. In scenarios with a weakly interacting dark matter candidate, like supersymmetry, it arises from the associated production of a quark partner with the dark matter agent. We present the next-to-leading-order cross section calculation as the first application of the fully automized MadGolem package. We find moderate corrections to the production rate with a strongly reduced theory uncertainty.

  9. Lean automation development : applying lean principles to the automation development process

    OpenAIRE

    Granlund, Anna; Wiktorsson, Magnus; Grahn, Sten; Friedler, Niklas

    2014-01-01

    A broad empirical study indicates that automation development shows potential for improvement. In the paper, 13 lean product development principles are contrasted with the automation development process, and it is suggested why and how these principles can facilitate, support and improve the automation development process. The paper summarises a description of what characterises a lean automation development process and what consequences it entails. Main differences compared to current pr...

  10. Automated borehole gravity meter system

    International Nuclear Information System (INIS)

    Lautzenhiser, Th.V.; Wirtz, J.D.

    1984-01-01

    An automated borehole gravity meter system for measuring gravity within a wellbore. The gravity meter includes leveling devices for leveling the borehole gravity meter and displacement devices for applying forces to a gravity sensing device within the gravity meter to bring the gravity sensing device to a predetermined or null position. Electronic sensing and control devices are provided for (i) activating the displacement devices, (ii) sensing the forces applied to the gravity sensing device, (iii) electronically converting the values of the forces into a representation of the gravity at the location in the wellbore, and (iv) outputting such representation. The system further includes electronic control devices capable of correcting the representation of gravity for tidal effects, as well as calculating and outputting the formation bulk density and/or porosity.

  11. A conceptual model of the automated credibility assessment of the volunteered geographic information

    International Nuclear Information System (INIS)

    Idris, N H; Jackson, M J; Ishak, M H I

    2014-01-01

    The use of Volunteered Geographic Information (VGI) for collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to support the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches for automatically assessing the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. Two main components are proposed to be assessed in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting sites (websites) and the sources of the data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. The paper suggests approaches to assess both components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using current emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.
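
    As a toy illustration of the proposed metadata assessment, supervised text categorization can be set up in a few lines with scikit-learn; the training texts and labels below are invented placeholders, not data from the study.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # invented metadata snippets labelled credible (1) / not credible (0)
    texts = ["official municipal survey portal", "anonymous blog mashup",
             "university mapping project", "unverified forum post"]
    labels = [1, 0, 1, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["community mapping portal run by a university"]))
    ```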

  12. Hitchhiker's Guide to Voxel Segmentation for Partial Volume Correction of in Vivo Magnetic Resonance Spectroscopy

    Directory of Open Access Journals (Sweden)

    Scott Quadrelli

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to the different tissue contents. This article details how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages.
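
    The arithmetic at the heart of CSF partial volume correction is simple: scale the metabolite estimate by the non-CSF fraction of the voxel. A minimal sketch, assuming tissue probability maps have already been produced by a segmentation tool:

    ```python
    import numpy as np

    def csf_corrected_concentration(raw_conc, gm, wm, csf, voxel_mask):
        """Scale a metabolite concentration by the tissue fraction of the
        MRS voxel. gm/wm/csf are tissue probability volumes from
        segmentation; voxel_mask is a boolean mask of the spectroscopic
        voxel in the same space."""
        f_gm = gm[voxel_mask].mean()
        f_wm = wm[voxel_mask].mean()
        f_csf = csf[voxel_mask].mean()
        f_csf /= (f_gm + f_wm + f_csf)        # normalize fractions to sum to 1
        return raw_conc / (1.0 - f_csf)       # metabolites assumed absent in CSF
    ```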

  13. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  14. Automated Groundwater Screening

    International Nuclear Information System (INIS)

    Taylor, Glenn A.; Collard, Leonard B.

    2005-01-01

    The Automated Intruder Analysis has been extended to include an Automated Groundwater Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides that have significant daughters. The extension has the promise of reducing the number of radionuclides that must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening, a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing a rigorous calculation of the screening factor, thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application.

  15. SAMPO 90 - High resolution interactive gamma spectrum analysis including automation with macros

    International Nuclear Information System (INIS)

    Aarnio, P.A.; Nikkinen, M.T.; Routti, J.T.

    1991-01-01

    SAMPO 90 is a high-performance gamma spectrum analysis program for personal computers. It uses high-resolution color graphics to display calibrations, spectra, fitting results as multiplet components, and analysis results. All the analysis phases can be done either under full interactive user control or by using macros for automated measurement and analysis sequences, including the control of MCAs and sample changers. Semi-automated calibrations for peak shapes (Gaussian with exponential tails), detector efficiency, and energy are available, with a possibility for user intervention through interactive graphics. Accurate peak area determination of even the most complex multiplets, of up to 32 components, is accomplished using linear, non-linear and mixed-mode fitting, where the component energies and areas can be either frozen or allowed to float in arbitrary combinations. Nuclide identification is done using associated-lines techniques, which allow interference correction for fully overlapping peaks. Peaked background subtraction can be performed and minimum detectable activities calculated. Attenuation corrections can be taken into account in the detector efficiency calculation. The most common PC-based MCA spectrum formats (Canberra S100, Ortec ACE, Nucleus PCA, ND AccuSpec) are supported, as well as ASCII spectrum files. A gamma-line library is included, together with an editor for user-configurable libraries. The analysis reports and program parameters are fully customizable. Function-key macros can be used to automate the most common analysis procedures. Small batch-type modules are additionally available for routine work. SAMPO 90 is the result of over twenty man-years of programming and contains 25,000 lines of Fortran, 10,000 lines of C, and 12,000 lines of assembler.
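
    The Gaussian-with-exponential-tails peak shape mentioned above can be written as a piecewise function: a Gaussian core joined smoothly to an exponential on the low-energy side. One common parameterization is sketched below; SAMPO 90's exact shape function and fitting strategy may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tailed_gaussian(x, area, mu, sigma, t_low):
        """Gaussian core with an exponential low-energy tail joined with
        matching value and slope at mu - t_low*sigma; a common germanium
        detector peak model. Normalization is approximate."""
        z = (x - mu) / sigma
        core = np.exp(-0.5 * z**2)
        tail = np.exp(t_low * (z + 0.5 * t_low))   # continuous value and slope
        shape = np.where(z < -t_low, tail, core)
        return area * shape / (sigma * np.sqrt(2.0 * np.pi))

    def fit_peak(channels, counts):
        """Fit a single peak region; initial guesses are illustrative."""
        p0 = [counts.sum(), channels[np.argmax(counts)], 2.0, 1.5]
        popt, _ = curve_fit(tailed_gaussian, channels, counts, p0=p0)
        return popt
    ```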

  16. Analysis methods and performance of an automated system for measuring both concentration and enrichment of uranium in solutions

    International Nuclear Information System (INIS)

    Kelley, T.A.; Parker, J.L.; Sampson, T.E.

    1993-01-01

    At the 1992 INMM meeting, the authors reported on the general characteristics of an automated system, then under development, for measuring both the concentration and enrichment of uranium in solutions. That paper emphasized the automated control capability, the measurement sequences, and the safety features of the system. In this paper, the authors report in detail on the measurement methods, the analysis algorithms, and the performance of the delivered system. The uranium concentration is measured by a transmission-corrected X-ray fluorescence method. Cobalt-57 is the fluorescing source, and a combined 153Gd and 57Co source is used for the transmission measurements. Corrections are made for both the absorption of the exciting 57Co gamma rays and the excited uranium X-rays. The 235U concentration is measured by a transmission-corrected method, which employs the 185.7-keV gamma ray of 235U and a 75Se transmission source to correct for the self-absorption of the 235U gamma rays in the solution samples. Both measurements employ high-resolution gamma-ray spectrometry and use the same 50 ml sample contained in a custom-molded, flat-bottomed polypropylene bottle. Both measurements are intended for uranium solutions with concentrations ≥0.1 g U/l, although at higher enrichments the passive measurement will be even more sensitive.
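
    The transmission-corrected approach rests on a standard self-attenuation correction: for a uniform slab sample, the correction factor follows directly from the measured transmission T at the gamma-ray energy of interest. A minimal sketch of that arithmetic, assuming far-field slab geometry and 0 < T < 1:

    ```python
    import numpy as np

    def self_attenuation_cf(transmission):
        """Correction factor for gamma self-absorption in a uniform slab,
        CF = -ln(T) / (1 - T); CF -> 1 as T -> 1 (thin sample). The measured
        peak rate is multiplied by CF to recover the unattenuated rate.
        Valid for 0 < T < 1."""
        t = np.asarray(transmission, dtype=float)
        return -np.log(t) / (1.0 - t)

    print(self_attenuation_cf([0.9, 0.5, 0.1]))   # -> [1.054, 1.386, 2.558]
    ```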

  17. Performance of optimized McRAPD in identification of 9 yeast species frequently isolated from patient samples: potential for automation.

    Science.gov (United States)

    Trtkova, Jitka; Pavlicek, Petr; Ruskova, Lenka; Hamal, Petr; Koukalova, Dagmar; Raclavsky, Vladislav

    2009-11-10

    Rapid, easy, economical and accurate species identification of yeasts isolated from clinical samples remains an important challenge for routine microbiological laboratories, because susceptibility to antifungal agents, the probability of developing resistance, and the ability to cause disease vary between species. To overcome the drawbacks of currently available techniques, we have recently proposed an innovative approach to yeast species identification based on RAPD genotyping, termed McRAPD (Melting curve of RAPD). Here we have evaluated its performance on a broader spectrum of clinically relevant yeast species and also examined the potential of automated and semi-automated interpretation of McRAPD data for yeast species identification. A simple, fully automated algorithm based on normalized melting data identified 80% of the isolates correctly. When this algorithm was supplemented by semi-automated matching of decisive peaks in first-derivative plots, 87% of the isolates were identified correctly. However, computer-aided visual matching of derivative plots showed the best performance, with an average of 98.3% of isolates identified accurately, almost matching the 99.4% performance of traditional RAPD fingerprinting. Since the McRAPD technique omits gel electrophoresis and can be performed in a rapid, economical and convenient way, we believe that it can find its place in the routine identification of medically important yeasts in advanced diagnostic laboratories that are able to adopt this technique. It can also serve as a broad-range high-throughput technique for epidemiological surveillance.

  18. Automatic EEG-assisted retrospective motion correction for fMRI (aE-REMCOR).

    Science.gov (United States)

    Wong, Chung-Ki; Zotev, Vadim; Misaki, Masaya; Phillips, Raquel; Luo, Qingfei; Bodurka, Jerzy

    2016-04-01

    Head motions during functional magnetic resonance imaging (fMRI) impair fMRI data quality and introduce systematic artifacts that can affect interpretation of fMRI results. Electroencephalography (EEG) recordings performed simultaneously with fMRI provide high-temporal-resolution information about ongoing brain activity as well as head movements. Recently, an EEG-assisted retrospective motion correction (E-REMCOR) method was introduced. E-REMCOR utilizes EEG motion artifacts to correct the effects of head movements in simultaneously acquired fMRI data on a slice-by-slice basis. While E-REMCOR is an efficient motion correction approach, it involves an independent component analysis (ICA) of the EEG data and identification of motion-related ICs. Here we report an automated implementation of E-REMCOR, referred to as aE-REMCOR, which we developed to facilitate the application of E-REMCOR in large-scale EEG-fMRI studies. The aE-REMCOR algorithm, implemented in MATLAB, enables an automated preprocessing of the EEG data, an ICA decomposition, and, importantly, an automatic identification of motion-related ICs. aE-REMCOR has been used to perform retrospective motion correction for 305 fMRI datasets from 16 subjects, who participated in EEG-fMRI experiments conducted on a 3T MRI scanner. Performance of aE-REMCOR has been evaluated based on improvement in temporal signal-to-noise ratio (TSNR) of the fMRI data, as well as correction efficiency defined in terms of spike reduction in fMRI motion parameters. The results show that aE-REMCOR is capable of substantially reducing head motion artifacts in fMRI data. In particular, when there are significant rapid head movements during the scan, a large TSNR improvement and high correction efficiency can be achieved. Depending on a subject's motion, an average TSNR improvement over the brain upon the application of aE-REMCOR can be as high as 27%, with top ten percent of the TSNR improvement values exceeding 55%. The average

  19. Automated method of phasing difficult nuclear magnetic resonance spectra with application to the unsaturated carbon analysis of oils

    Energy Technology Data Exchange (ETDEWEB)

    Sterna, L.L.; Tong, V.P. (Shell Development Company, Houston, TX (USA). Westhollow Research Center)

    1991-08-01

    A new method for the automated phasing of n.m.r. spectra is described. The basis of the automation is that the software performs the phasing in the same fashion as a trained n.m.r. operator rather than using mathematical relationships between absorptive and dispersive spectra. The method is illustrated with processing of the {sup 13}C n.m.r. spectrum of a catalytic cracking feedstock. The software readily phased the spectrum even though the spectrum had very broad features and a significant baseline correction. The software performed well even when the time-domain data was left-shifted to introduce a large first-order phase error. The method was applied to measure the percentage of unsaturated carbon in hydrocarbons. Extensive tests were performed to compare automated processing with manual processing for this application; the automated method was found to give both better precision and accuracy. The method can be easily tailored to many other types of analyses. 9 refs., 4 figs., 3 tabs.
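
    The paper's method mimics a trained operator rather than optimizing a mathematical criterion; for contrast, the sketch below shows a simpler, commonly used automated criterion, choosing zero- and first-order phases that minimize negative intensity in the real part. It is illustrative only and is not the algorithm of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def apply_phase(spectrum, ph0, ph1):
        """Apply zero-order (ph0) and first-order (ph1) phase, in radians,
        to a complex frequency-domain spectrum."""
        x = np.arange(spectrum.size) / spectrum.size
        return spectrum * np.exp(1j * (ph0 + ph1 * x))

    def autophase(spectrum):
        """Search for the phase pair minimizing negative lobes in the real
        part; one common automated criterion, not the paper's method."""
        def cost(p):
            real = apply_phase(spectrum, p[0], p[1]).real
            return np.abs(real[real < 0.0]).sum()
        res = minimize(cost, x0=[0.0, 0.0], method='Nelder-Mead')
        return apply_phase(spectrum, *res.x), res.x
    ```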

  20. Automating Hyperspectral Data for Rapid Response in Volcanic Emergencies

    Science.gov (United States)

    Davies, Ashley G.; Doubleday, Joshua R.; Chien, Steve A.

    2013-01-01

    In a volcanic emergency, time is of the essence. It is vital to quantify eruption parameters (thermal emission, effusion rate, location of activity) and distribute this information as quickly as possible to decision-makers in order to enable effective evaluation of eruption-related risk and hazard. The goal of this work was to automate and streamline the processing of spacecraft hyperspectral data, automate product generation, and automate the distribution of products. [Figure: visible and short-wave infrared images of the volcanic eruption in Iceland in May 2010.] The software rapidly processes hyperspectral data, correcting for incident sunlight where necessary and for atmospheric transmission; detects thermally anomalous pixels; fits the data with model black-body thermal emission spectra to determine radiant flux; calculates the thermal losses removed by atmospheric convection; and then calculates total heat loss. From these results, an estimate of the effusion rate is made. Maps are generated of thermal emission and its location (see figure). Products are posted online, and relevant parties are notified. Effusion rate data are added to the historical record and plotted to identify spikes in activity for persistently active eruptions. The entire process from start to end is autonomous. Future spacecraft, especially those in deep space, can react to the detection of transient processes without the need to communicate with Earth, thus increasing science return. Terrestrially, this removes the need for human intervention.
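
    The black-body fitting step can be illustrated with a two-parameter graybody fit of temperature and emitting-area fraction to a thermally anomalous pixel spectrum. A sketch under simplified assumptions (single thermal component, radiance already corrected for sunlight and atmosphere):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

    def planck_radiance(wavelength_m, temp_k):
        """Black-body spectral radiance in W m^-2 sr^-1 m^-1."""
        return (2 * H * C**2 / wavelength_m**5 /
                (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    def fit_thermal_pixel(wavelengths_m, radiances):
        """Fit (temperature, pixel-area fraction) to one pixel's spectrum;
        initial guesses and bounds are illustrative."""
        def model(wl, temp, frac):
            return frac * planck_radiance(wl, temp)
        popt, _ = curve_fit(model, wavelengths_m, radiances,
                            p0=[1000.0, 0.01], bounds=([400, 0], [1600, 1]))
        return popt   # (temp_k, area_fraction)
    ```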

  1. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides a historical overview of the field of automated deduction and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  2. Health care professionals’ perspectives on automated multi-dose drug dispensing

    Directory of Open Access Journals (Sweden)

    Bardage C

    2014-12-01

    Background: During the 1980s, manual repackaging of multi-dose medications by pharmacies in Sweden was successively replaced by automated multi-dose drug dispensing (MDD). There are few studies evaluating the consequences of automated MDD with regard to patient safety, and those that investigate this issue are not very extensive. Objectives: To investigate Swedish health care professionals' perceived experience of automated MDD and its effects on patient adherence and patient safety. Methods: Three questionnaire forms, one each for physicians, nurses, and assistant nurses/nursing assistants, were developed based on reviews of the literature and pilot testing of the questions in the intended target groups. The target groups were health professionals prescribing or administering MDD to patients. A sample (every sixth municipality) was drawn from the sampling frame of Swedish municipalities, resulting in 40 municipalities, about 14% of all municipalities in Sweden. Email addresses of general practitioners were obtained from county councils, while the municipalities assisted in providing contact details for nurses, assistant nurses and nursing assistants. A total of 915 questionnaires were distributed electronically to physicians, 515 to nurses, and 4,118 to assistant nurses/nursing assistants. The data were collected in September and October 2012. Results: The response rates among physicians, nurses and assistant nurses/nursing assistants were 31%, 43% and 23%, respectively. The professionals reported that automated MDD reduces duplication of medication, contributes to correct dosages, helps patients take their medication at the right time, and reduces confusion among patients. Fifteen per cent of the physicians and about one-third of the nurses and assistant nurses/nursing assistants reported that generic substitution makes it more difficult for the patient to identify the various medicines available in the sachets. The physicians did, however

  3. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable to the analysis of cylindrical and partially cylindrical objects inspected using computed tomography. The method involves unwrapping and re-slicing the data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing solutions through its complete automation and its advanced processing and analysis capabilities.
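
    The unwrap-and-re-slice idea amounts to a polar-to-Cartesian resampling of each transverse slice, so the cylinder wall becomes a flat sheet. A minimal sketch, assuming the center and wall radii have already been found by the edge detection step:

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def unwrap_cylinder_slice(ct_slice, center, r_inner, r_outer,
                              n_theta=1024, n_r=128):
        """Resample one transverse CT slice from (x, y) into (theta, r)
        coordinates. center, r_inner and r_outer would come from the
        interior/exterior surface edge detection described above."""
        cy, cx = center
        thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        radii = np.linspace(r_inner, r_outer, n_r)
        t, r = np.meshgrid(thetas, radii, indexing='ij')
        ys = cy + r * np.sin(t)
        xs = cx + r * np.cos(t)
        return map_coordinates(ct_slice, [ys, xs], order=1, mode='nearest')
    ```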

  4. Elektronische monitoring van luchtwassers op veehouderijbedrijven = Automated process monitoring and data logging of air scrubbers at animal houses

    NARCIS (Netherlands)

    Melse, R.W.; Franssen, J.C.T.J.

    2010-01-01

    Air scrubbers equipped with an automated process monitoring and data logging system were tested at six animal houses. The measured values were successfully stored, but some of them, especially the pH and EC of the recirculation water, appeared not to be correct at all times.

  5. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline corrections was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, a significant difference (p < 0.001, two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377
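
    The thin-plate spline idea can be approximated in a few lines: fit a smooth spline to a segmented reference surface and treat the fitted surface as the per-A-scan axial shift to remove. A rough single-stage sketch using SciPy, with illustrative control-point spacing and smoothing (the paper uses a two-stage procedure):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def axial_artifact_shift(surface_z, n_ctrl=15):
        """Fit a thin-plate spline to an (nx, ny) array of segmented
        surface depths and return the per-A-scan axial shift that
        flattens the fitted surface. Assumes nx, ny >= n_ctrl."""
        nx, ny = surface_z.shape
        gi = np.linspace(0, nx - 1, n_ctrl).astype(int)
        gj = np.linspace(0, ny - 1, n_ctrl).astype(int)
        ii, jj = np.meshgrid(gi, gj, indexing='ij')
        ctrl = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)
        vals = surface_z[ii.ravel(), jj.ravel()]
        spline = RBFInterpolator(ctrl, vals, kernel='thin_plate_spline',
                                 smoothing=1.0)
        ai, aj = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
        fitted = spline(np.column_stack([ai.ravel(), aj.ravel()])
                        .astype(float)).reshape(nx, ny)
        return fitted - fitted.mean()   # shift each A-scan by this along z
    ```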

  6. Automated Studies of Continuing Current in Lightning Flashes

    Science.gov (United States)

    Martinez-Claros, Jose

    Continuing current (CC) is a continuous luminosity in the lightning channel that lasts longer than 10 ms following a lightning return stroke to ground. Lightning flashes with CC are associated with direct damage to power lines and are thought to be responsible for causing lightning-induced forest fires. The development of an algorithm that automates continuing current detection by combining NLDN (National Lightning Detection Network) and LEFA (Langmuir Electric Field Array) datasets for CG flashes will be discussed. The algorithm was applied to thousands of cloud-to-ground (CG) flashes within 40 km of Langmuir Lab, New Mexico, measured during the 2013 monsoon season. It counts the number of flashes in a single minute of data and the number of return strokes of an individual lightning flash; records the time and location of each return stroke; performs peak analysis on the E-field data; and uses the slope of linear fits to the interstroke-interval (ISI) E-field data to recognize whether CC exists within the interval. Following CC detection, its duration and magnitude are measured. The longest observed CC in 5,588 flashes was 631 ms. The performance of the algorithm (vs. human judgment) was checked on 100 flashes. At best, the reported algorithm is "correct" 80% of the time, where correct means that multiple stations agree with each other and with a human on both the presence and the duration of CC. Of the 100 flashes validated against human judgment, 62% were hybrid. The automated analysis detects the first but misses the second return stroke in many cases where the second return stroke is followed by long CC. This problem is also present in human interpretation of field-change records.
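
    The slope-based test can be caricatured as follows: fit a line to the E-field over the interstroke interval and call it continuing current when a sustained ramp lasts at least 10 ms. The threshold and the duration heuristic below are placeholders, not LEFA calibration values or the study's actual rules.

    ```python
    import numpy as np

    def detect_cc(t_s, efield, slope_thresh, min_dur_s=0.010):
        """Flag continuing current in one interstroke interval from a
        sustained E-field ramp; returns (cc_present, duration_s)."""
        slope, intercept = np.polyfit(t_s, efield, 1)
        if abs(slope) < slope_thresh:
            return False, 0.0
        # crude duration estimate: span over which the ramp tracks the data
        resid = efield - (slope * t_s + intercept)
        good = np.abs(resid) < 3.0 * resid.std()
        duration = t_s[good][-1] - t_s[good][0] if good.any() else 0.0
        return duration >= min_dur_s, duration
    ```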

  7. Simple automated preparation of O-[{sup 11}C]methyl-L-tyrosine for routine clinical use

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Yoichi [CYRIC Tohoku University, Aramaki, Aoba-ku, Sendai 980-8578 (Japan); Iwata, Ren [CYRIC Tohoku University, Aramaki, Aoba-ku, Sendai 980-8578 (Japan)]. E-mail: rencyric@cyric.tohoku.ac.jp; Furumoto, Shozo [TUBERO, Tohoku University, Sendai 980-8575 (Japan); Pascali, Claudio [National Cancer Institute, 20133 Milan (Italy); Bogni, Anna [National Cancer Institute, 20133 Milan (Italy); Kubota, Kazuo [International Medical Center, Tokyo 162-8655 (Japan); Ishiwata, Kiichi [Tokyo Metropolitan Institute of Gerontology, Tokyo 173-0022 (Japan)

    2005-07-01

    The previously reported preparation of O-[{sup 11}C]methyl-L-tyrosine ([{sup 11}C]MT), a promising tumor imaging agent, has now been considerably simplified and automated. The main changes were the use of [{sup 11}C]methyl iodide ([{sup 11}C]MeI) in the reaction with L-tyrosine disodium and the use of solid-phase extraction on commercially available cartridges instead of HPLC for the final purification. An injectable saline solution of [{sup 11}C]MT was obtained within 30 min after EOB with a radiochemical yield of ca. 60% (decay-corrected, based on [{sup 11}C]MeI). The radiochemical purity was over 97%. The automated preparation was carried out using a miniature module employing manifold valves.

  8. Automated one-loop calculations with GoSam

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata; Mastrolia, Pierpaolo; Ossola, Giovanni; Tramontano, Francesco

    2012-01-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  9. Automated One-Loop Calculations with GoSam

    CERN Document Server

    Cullen, Gavin; Heinrich, Gudrun; Luisoni, Gionata; Mastrolia, Pierpaolo; Ossola, Giovanni; Reiter, Thomas; Tramontano, Francesco

    2012-01-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop.

  10. Automated MR morphometry to predict Alzheimer's disease in mild cognitive impairment

    International Nuclear Information System (INIS)

    Fritzsche, Klaus H.; Schlindwein, Sarah; Bruggen, Thomas van; Meinzer, Hans-Peter; Stieltjes, Bram; Essig, Marco

    2010-01-01

    Prediction of progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD) is challenging but essential for early treatment. This study aims to investigate the use of hippocampal atrophy markers for the automatic detection of MCI converters and to compare their predictive value to manually obtained hippocampal volume and temporal horn width. A study was performed with 15 AD patients and 18 MCI patients (ten converted, eight remained stable over a 3-year follow-up) as well as 15 healthy subjects. MRI scans were obtained at baseline and evaluated with an automated system for scoring of hippocampal atrophy. The predictive value of the automated system was compared with manual measurements of hippocampal volume and temporal horn width in the same subjects. The conversion to AD was correctly predicted in 77.8% of the cases (sensitivity 70%, specificity 87.5%) in the MCI group using automated morphometry and a plain linear classifier trained on the AD and healthy groups. Classification was improved by limiting the analysis to the left cerebral hemisphere (accuracy 83.3%, sensitivity 70%, specificity 100%). The manual linear and volumetric approaches reached rates of 66.7% (40/100%) and 72.2% (60/87.5%), respectively. The automatic approach fulfills many important preconditions for clinical application. Contrary to the manual approaches, it is not observer-dependent and reduces human resource requirements. Automated assessment may be useful for individual patient assessment and for predicting progression to dementia. (orig.)
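
    The "plain linear classifier" strategy (train on the AD and healthy groups, then apply to the MCI group) is easy to reproduce with scikit-learn. The features and numbers below are random placeholders, not the study's atrophy scores:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # placeholder atrophy features: rows = subjects, columns = e.g.
    # (left hippocampal score, right hippocampal score)
    X_ad      = np.random.default_rng(0).normal(-1.0, 0.5, (15, 2))
    X_healthy = np.random.default_rng(1).normal(+1.0, 0.5, (15, 2))
    X_mci     = np.random.default_rng(2).normal(0.0, 0.7, (18, 2))

    X_train = np.vstack([X_ad, X_healthy])
    y_train = np.array([1] * 15 + [0] * 15)   # 1 = AD-like, 0 = healthy-like

    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
    # MCI subjects predicted AD-like would be the expected converters
    print(clf.predict(X_mci))
    ```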

  11. Design of Service Net based Correctness Verification Approach for Multimedia Conferencing Service Orchestration

    Directory of Open Access Journals (Sweden)

    Cheng Bo

    2012-02-01

    Multimedia conferencing is increasingly becoming a very important and popular application over the Internet. The complexity of asynchronous communication and the large, dynamically concurrent processes involved in multimedia conferencing make it hard to achieve sufficient correctness guarantees, and effective verification methods for multimedia conferencing service orchestration remain an extremely difficult and challenging problem. In this paper, we first present Business Process Execution Language (BPEL) based conferencing service orchestration, and then focus on a service-net-based correctness verification approach for multimedia conferencing service orchestration, which automatically translates the BPEL-based service orchestration into a corresponding Petri net model expressed in the Petri Net Markup Language (PNML). We also present BPEL service net reduction rules and correctness verification algorithms for multimedia conferencing service orchestration. We perform the correctness analysis and verification using service net properties such as safeness, reachability and deadlock-freedom, and provide an automated support tool for the formal analysis and soundness verification of multimedia conferencing service orchestration scenarios. Finally, we give comparisons and evaluations.

  12. 78 FR 53466 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-08-29

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... National Customs Automation Program (NCAP) tests concerning document imaging, known as the Document Image... the National Customs Automation Program (NCAP) tests concerning document imaging, known as the...

  13. An automated image processing method for classification of diabetic retinopathy stages from conjunctival microvasculature images

    Science.gov (United States)

    Khansari, Maziyar M.; O'Neill, William; Penn, Richard; Blair, Norman P.; Chau, Felix; Shahidi, Mahnaz

    2017-03-01

    The conjunctiva is a densely vascularized tissue of the eye that provides an opportunity for imaging human microcirculation. In the current study, automated fine-structure analysis of conjunctival microvasculature images was performed to discriminate stages of diabetic retinopathy (DR). The study population consisted of one group of nondiabetic control subjects (NC) and three groups of diabetic subjects with no clinical DR (NDR), non-proliferative DR (NPDR), or proliferative DR (PDR). Ordinary least squares regression and Fisher linear discriminant analyses were performed to automatically discriminate images between group pairs of subjects. Human observers who were masked to the grouping of subjects performed image discrimination between group pairs. Over 80% and 70% of images of subjects with clinical and non-clinical DR, respectively, were correctly discriminated by the automated method. The discrimination rates of the automated method were higher than those of the human observers. The fine-structure analysis of conjunctival microvasculature images provided discrimination of DR stages and can potentially be useful for DR screening and monitoring.

  14. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors, enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, referred to as Out-of-the-Loop (OOTL) effects, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method for optimizing automation is proposed in this paper. To derive appropriate automation levels that enable the best human performance, the automation rate and the ostracism rate, estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration derives the shortest working time by considering the concept of Situation Awareness Recovery (SAR), under which the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency-operation scenario-based case study. In this case study, four types of procedures are assumed by redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  15. Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke.

    Science.gov (United States)

    Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T

    2010-05-01

    We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow, cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms, including a piecewise continuous regression method to determine the bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method, and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. The anatomical locations of the automatically determined AIFs were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and the inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area on motion-corrected MTT maps and compared them with time-to-peak (TTP) maps generated using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. The total processing time was 10.9 ± 2.5 s (mean ± s.d.) without motion correction and 267 ± 80 s (mean ± s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by the random noise of the PWI than the TTP map. Results of image
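
    One building block named above, gamma-variate fitting of candidate AIF curves, is sketched below; the fit error could feed an artery-likelihood score, though the paper's actual metric is not reproduced. Initial guesses and bounds are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, t0, k, alpha, beta):
        """Gamma-variate bolus model: zero before arrival time t0, then a
        skewed peak; standard in DSC-MRI curve fitting."""
        dt = np.clip(t - t0, 0.0, None)
        return k * dt**alpha * np.exp(-dt / beta)

    def fit_aif(times_s, conc):
        """Fit a gamma-variate to a candidate AIF and return the
        parameters and the RMS fit error."""
        p0 = [max(times_s[np.argmax(conc)] - 5.0, 0.0), 1.0, 3.0, 1.5]
        popt, _ = curve_fit(gamma_variate, times_s, conc, p0=p0,
                            bounds=([0, 0, 0.1, 0.1],
                                    [times_s[-1], np.inf, 10.0, 10.0]),
                            maxfev=10000)
        resid = conc - gamma_variate(times_s, *popt)
        return popt, float(np.sqrt(np.mean(resid**2)))
    ```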

  16. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include distribution management systems; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of the system.

  17. WIDAFELS flexible automation systems

    International Nuclear Information System (INIS)

    Shende, P.S.; Chander, K.P.; Ramadas, P.

    1990-01-01

    Various aspects of automation are discussed, and some typical examples of different levels of automation are given. One of the examples is an automated production line for ceramic fuel pellets. (M.G.B.)

  18. Low jitter RF distribution system

    Science.gov (United States)

    Wilcox, Russell; Doolittle, Lawrence; Huang, Gang

    2012-09-18

    A timing signal distribution system includes an optical-frequency-stabilized laser signal amplitude modulated at an rf frequency. A transmitter box transmits a first portion of the laser signal and receives a modified optical signal, and outputs a second portion of the laser signal and a portion of the modified optical signal. A first optical fiber carries the first laser signal portion and the modified optical signal, and a second optical fiber carries the second portion of the laser signal and the returned modified optical signal. A receiver box receives the first laser signal portion, shifts the frequency of the first laser signal portion, outputs the modified optical signal, and outputs an electrical signal on the basis of the laser signal. A detector at the end of the second optical fiber outputs a signal based on the modified optical signal. An optical delay sensing circuit outputs a data signal based on the detected modified optical signal. An rf phase detect-and-correct circuit outputs a signal corresponding to a phase-stabilized rf signal, based on the data signal and the frequency received from the receiver box.

  19. GoSam. A program for automated one-loop calculations

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, G. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, N.; Heinrich, G.; Reiter, T. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, G. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, P. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, G. [City Univ. of New York, NY (United States). New York City College of Technology; Tramontano, F. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2011-11-15

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples. (orig.)

  20. GoSam. A program for automated one-loop calculations

    International Nuclear Information System (INIS)

    Cullen, G.; Greiner, N.; Heinrich, G.; Reiter, T.; Luisoni, G.

    2011-11-01

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples. (orig.)

  1. GoSam: A program for automated one-loop calculations

    International Nuclear Information System (INIS)

    Cullen, G; Greiner, N; Heinrich, G; Mastrolia, P; Reiter, T; Luisoni, G; Ossola, G; Tramontano, F

    2012-01-01

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples.

  2. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    Science.gov (United States)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential-drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which, for cracks that propagate out of plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and the stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate that the correction process works well at high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN approximately 10^-10 m/cycle).

  3. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the outcome of radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, so rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.

  4. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  5. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-controlled sample workflow (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed for this purpose in cooperation with Roche Diagnostics (Barcelona, Spain). This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: a workload increase (64%) with a reduction in the cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7% to 0.4% of the tubes) and better total response times for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and with a significant headcount reduction (15% in our lab).

  6. Automation of Test Cases for Web Applications : Automation of CRM Test Cases

    OpenAIRE

    Seyoum, Alazar

    2012-01-01

    The main theme of this project was to design a test automation framework for automating web-related test cases. Automating test cases designed for testing a web interface provides a means of improving the software development process by shortening the testing phase of the software development life cycle. In this project the existing AutoTester framework and iMacros test automation tools were used. A CRM Test Agent was developed to integrate AutoTester with iMacros and to enable the AutoTester,...

  7. Development of electronic document management system for scientific and technical design administration automation (evidence from European Organization for Nuclear Research)

    International Nuclear Information System (INIS)

    Titov, R.N.

    2011-01-01

    New principles and methods for constructing electronic document management systems are developed. A software package for electronic document handling was created; it automates workflow management and allows the flow of documents to be traced and corrected online. Formal models of electronic documents, describing complex hierarchical data structures with the use of XML trees, are considered. On the basis of the investigations conducted, the CERN electronic document management system has been upgraded, shortening the time needed to automate new business processes by more than half.

  8. Automated Performance Characterization of DSN System Frequency Stability Using Spacecraft Tracking Data

    Science.gov (United States)

    Pham, Timothy T.; Machuzak, Richard J.; Bedrossian, Alina; Kelly, Richard M.; Liao, Jason C.

    2012-01-01

    This software provides an automated capability to measure and qualify the frequency stability performance of the Deep Space Network (DSN) ground system, using daily spacecraft tracking data. The results help to verify whether DSN performance is meeting its specification, thereby ensuring commitments to flight missions, in particular the radio science investigations. The rich set of data also helps the DSN Operations and Maintenance team to identify trends and patterns, allowing them to identify the antennas of lower performance and implement corrective action in a timely manner. Unlike the traditional approach, where performance can only be obtained from special calibration sessions that are both time-consuming and require manual setup, the new method taps into the daily spacecraft tracking data. This new approach significantly increases the amount of data available for analysis, roughly by two orders of magnitude, making it possible to conduct trend analysis with good confidence. The software is built with automation in mind for end-to-end processing. From input gathering to computational analysis and later data visualization of the results, all steps are done automatically, making data production nearly free of cost. This allows the limited engineering resources to focus on high-level assessment and to follow up on exceptions and deviations. To make it possible to process the continual stream of daily incoming data without much effort, and to understand the results quickly, the processing needs to be automated and the data summarized at a high level. Special attention needs to be given to data gathering, input validation, handling anomalous conditions, computation, and presenting the results in a visual form that makes it easy to spot items of exception/deviation so that further analysis can be directed and corrective actions followed.
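
    The abstract does not name its stability metric, but frequency stability of ground systems is conventionally summarized with the Allan deviation, so a minimal sketch of the overlapping estimator is given below as an illustration; the function name, variable layout, and white-noise test signal are our assumptions, not details of the DSN tool.

```python
import numpy as np

def overlapping_allan_deviation(y, tau0, m):
    """Overlapping Allan deviation of fractional-frequency data y.

    y    : fractional frequency samples, each averaged over tau0 seconds
    tau0 : basic sampling interval in seconds
    m    : averaging factor (the deviation is evaluated at tau = m * tau0)
    """
    # integrate frequency to phase (in seconds); x[0] = 0
    x = np.concatenate(([0.0], np.cumsum(y) * tau0))
    n = len(x)
    tau = m * tau0
    # overlapping second differences of the phase record
    d2 = x[2*m:] - 2.0 * x[m:n-m] + x[:n-2*m]
    avar = np.sum(d2**2) / (2.0 * tau**2 * (n - 2*m))
    return np.sqrt(avar)

# Example: white frequency noise should fall off roughly as tau**(-1/2)
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)
for m in (1, 10, 100):
    print(m, overlapping_allan_deviation(y, tau0=1.0, m=m))
```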

  9. Comparison between manual and automated techniques for assessment of data from dynamic antral scintigraphy

    International Nuclear Information System (INIS)

    Misiara, Gustavo P.; Troncon, Luiz E.A.; Secaf, Marie; Moraes, Eder R.

    2008-01-01

    This work aimed at determining whether data from dynamic antral scintigraphy (DAS) yielded by a simple, manual technique are as accurate as those generated by a conventional automated technique (fast Fourier transform) for assessing gastric contractility. Seventy-one stretches (4 min) of 'activity versus time' curves obtained by DAS from 10 healthy volunteers and 11 functional dyspepsia patients, after ingesting a liquid meal (320 ml, 437 kcal) labeled with technetium-99m (99mTc)-phytate, were independently analyzed by manual and automated techniques. Data obtained by both techniques for the frequency of antral contractions were similar. Contraction amplitude determined by the manual technique was significantly higher than that estimated by the automated method, in both patients and controls. The contraction frequency 30 min post-meal was significantly lower in patients than in controls, which was correctly shown by both techniques. A manual technique using ordinary resources of the gamma camera workstation, despite yielding higher figures for the amplitude of gastric contractions, is as accurate as the conventional automated technique of DAS analysis. These findings may favor a more intensive use of DAS coupled to gastric emptying studies, which would provide a more comprehensive assessment of gastric motor function in disease. (author)
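
    The "conventional automated technique" here is a fast Fourier transform of the antral activity-time curve. A minimal sketch of that idea follows; the function name, 1 Hz framing, and the 2-4 contractions-per-minute search band are our illustrative assumptions (about 3 per minute is the typical human gastric rhythm), not details from the paper.

```python
import numpy as np

def dominant_contraction(activity, fs):
    """Estimate antral contraction frequency and amplitude by FFT.

    activity : antral region-of-interest counts sampled at fs Hz
    Returns (frequency in contractions/min, relative amplitude in %).
    """
    a = np.asarray(activity, dtype=float)
    a = a - a.mean()                      # remove DC so the baseline doesn't dominate
    spec = np.abs(np.fft.rfft(a))
    freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)
    # restrict the peak search to a physiological band of 2-4 cycles/min
    band = (freqs >= 2/60) & (freqs <= 4/60)
    k = np.flatnonzero(band)[np.argmax(spec[band])]
    amplitude_pct = 100.0 * 2.0 * spec[k] / len(a) / np.mean(activity)
    return freqs[k] * 60.0, amplitude_pct

# Example: synthetic 3/min contractions riding on a mean count level,
# over a 4-min stretch at 1 frame/s as in the study
t = np.arange(0, 240, 1.0)
counts = 1000 + 80 * np.sin(2 * np.pi * (3/60) * t)
print(dominant_contraction(counts, fs=1.0))   # ~ (3.0, 8.0)
```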

  10. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  11. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    Science.gov (United States)

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high recommended safe weight limits for loads, whereas error-corrected models gave more conservative, protective estimates. Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks.
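
    As a hedged sketch of the error-correction step: a gradient-boosted regressor with a Huber loss learns to map Kinect-derived estimates onto the motion-capture reference, staying robust to the occasional wild skeleton frame. Only the model family and loss come from the abstract; the feature layout, hyperparameters and synthetic data below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical arrays: Kinect-derived lifting-equation inputs per frame,
# and the Qualysis reference values for the same frames.
rng = np.random.default_rng(42)
kinect = rng.uniform(0.2, 1.2, size=(5000, 3))   # e.g. horizontal/vertical location, angle
truth = kinect[:, 0] + 0.1 * kinect[:, 1] ** 2 + 0.02 * rng.standard_normal(5000)

# Huber loss blends squared and absolute error, limiting outlier influence.
model = GradientBoostingRegressor(loss="huber", n_estimators=300,
                                  learning_rate=0.05, max_depth=3)

# The study used leave-one-subject-out splits; a simple holdout stands in here.
model.fit(kinect[:4000], truth[:4000])
pred = model.predict(kinect[4000:])
print("held-out MAE:", mean_absolute_error(truth[4000:], pred))
```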

  12. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with a shorter turnaround time for an ever-increasing workload. This article discusses the various issues involved in the process.

  13. Investigating Semi-Automated Cadastral Boundaries Extraction from Airborne Laser Scanned Data

    Directory of Open Access Journals (Sweden)

    Xianghuan Luo

    2017-09-01

    Many developing countries have witnessed the urgent need to accelerate cadastral surveying processes. Previous studies found that large portions of cadastral boundaries coincide with visible physical objects, namely roads, fences, and building walls. This research explores the application of airborne laser scanning (ALS) techniques to cadastral surveys. A semi-automated workflow is developed to extract cadastral boundaries from an ALS point cloud. Firstly, a two-phase workflow was developed that focused on extracting digital representations of physical objects. In the automated extraction phase, after classifying points into semantic components, the outlines of planar objects such as building roofs and road surfaces were generated by an α-shape algorithm, while a centerline delineation approach was applied to linear objects such as fences. Afterwards, the extracted vector lines were edited and refined during the post-refinement phase. Secondly, we quantitatively evaluated the workflow performance by comparing results against an existing cadastral map as reference. It was found that the workflow achieved promising results: around 80% completeness and 60% correctness on average, although the spatial accuracy is still modest. It is argued that the semi-automated extraction workflow could effectively speed up cadastral surveying, with both human resource and equipment costs being reduced.
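
    For readers unfamiliar with the α-shape step: a compact way to outline a planar object from its 2D-projected points is to keep the Delaunay triangles whose circumradius is below α and return the edges used by exactly one kept triangle. This is a generic sketch of that textbook construction, not the authors' code; the threshold and test data are invented.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the 2D alpha-shape of a point set (ndarray, shape (n, 2))."""
    tri = Delaunay(points)
    edges = {}
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pc)
        b = np.linalg.norm(pa - pc)
        c = np.linalg.norm(pa - pb)
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5
        if a * b * c / (4.0 * area) < alpha:          # circumradius test
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                e = tuple(sorted(e))
                edges[e] = edges.get(e, 0) + 1
    # edges on exactly one kept triangle form the outline
    return [e for e, n in edges.items() if n == 1]

# Example: outline of a noisy square footprint
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(500, 2))
print(len(alpha_shape_edges(pts, alpha=1.5)), "boundary edges")
```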

  14. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of the environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern to act as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally overcome the barriers to the use of calibration in practice.
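
    The on/off difference trick is simple to reproduce: subtract the LED-off frame from the LED-on frame, threshold the result, and take blob centroids as the corner points. The sketch below is a generic illustration of that idea (the threshold, blob count and synthetic frames are assumptions), not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def led_corner_locations(img_on, img_off, threshold=50, n_leds=4):
    """Locate active LED fiducials from an on/off image pair.

    Subtracting the LED-off frame suppresses the static checkerboard and
    leaves only the bright fiducials, whose centroids replace the manual
    corner clicks.
    """
    diff = img_on.astype(np.int32) - img_off.astype(np.int32)
    mask = diff > threshold                       # bright spots only
    labels, n = ndimage.label(mask)
    if n < n_leds:
        raise ValueError("fewer blobs than expected LEDs found")
    # keep the n_leds largest blobs; use intensity-weighted centroids
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-n_leds:] + 1
    return ndimage.center_of_mass(diff, labels, list(keep))

# Example with synthetic 8-bit frames and four 5x5 "LED" spots
off = np.full((480, 640), 30, dtype=np.uint8)
on = off.copy()
for y, x in [(50, 60), (50, 580), (430, 60), (430, 580)]:
    on[y-2:y+3, x-2:x+3] = 255
print(led_corner_locations(on, off))
```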

  15. Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Samaan, Nader A.; Makarov, Yuri V.; Diao, Ruisheng; Huang, Qiuhua; Ke, Xinda

    2017-10-19

    Power systems are vulnerable to extreme contingencies (like an outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages are seen within minutes following a major contingency and may not be captured using dynamic simulation of the power system alone. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes mimics operator corrective actions to mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF) based formulation for selecting the corrective actions that utility operators can take during a major contingency, and thus automates the hybrid dynamic-steady-state cascading outage process. The improved DCAT framework with OPF-based corrective actions is demonstrated on the IEEE 300-bus test system.
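
    To make the "OPF-based corrective actions" idea concrete, here is a deliberately tiny linear program in the same spirit: after a contingency one line is overloaded, and the solver picks the cheapest mix of ramping one unit down, another up, and shedding load. Every number (the PTDF-style sensitivities, limits and costs) is invented for illustration; DCAT's actual formulation is far richer.

```python
import numpy as np
from scipy.optimize import linprog

# Toy corrective-redispatch LP: a line carries 120 MW against a 100 MW
# limit. Variables x = [down_g1, up_g2, shed], all >= 0, in MW.
cost = [5.0, 5.0, 1000.0]                 # $/MW; load shedding is a last resort

# net injection stays balanced: -down_g1 + up_g2 + shed = 0
A_eq = [[-1.0, 1.0, 1.0]]
b_eq = [0.0]

# line flow: 120 - 0.8*down_g1 + 0.3*up_g2 - 0.5*shed <= 100
A_ub = [[-0.8, 0.3, -0.5]]
b_ub = [100.0 - 120.0]

bounds = [(0, 40), (0, 40), (0, None)]    # ramp limits on the two units

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # -> ramp unit 1 down 40 MW, unit 2 up 40 MW, shed nothing
```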

  16. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  17. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy.

  18. Fully automated synthesis system of 3'-deoxy-3'-[18F]fluorothymidine

    International Nuclear Information System (INIS)

    Oh, Seung Jun; Mosdzianowski, Christoph; Chi, Dae Yoon; Kim, Jung Young; Kang, Se Hun; Ryu, Jin Sook; Yeo, Jeong Seok; Moon, Dae Hyuk

    2004-01-01

    We developed a new fully automated method for the synthesis of 3'-deoxy-3'-[18F]fluorothymidine ([18F]FLT) by modifying a commercial FDG synthesizer and its disposable fluid pathway. The optimal labeling condition was that 40 mg of precursor in acetonitrile (2 mL) was heated at 150 °C for 100 s, followed by heating at 85 °C for 450 s and hydrolysis with 1 N HCl at 105 °C for 300 s. Using 3.7 GBq of [18F]F- as starting activity, [18F]FLT was obtained with a yield of 50.5±5.2% (n=28, decay corrected) within 60.0±5.4 min including HPLC purification. With 37.0 GBq, we obtained 48.7±5.6% (n=10). The [18F]FLT showed good stability for 6 h. This new automated synthesis procedure combines high and reproducible yields with the benefits of a disposable cassette system.
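
    The "decay corrected" yield is ordinary radioactive-decay arithmetic: divide the measured activity ratio by the fraction of fluorine-18 (half-life 109.77 min) remaining after the synthesis time. A small sketch follows; the product activity in the example is made up to land near the ~50% figure quoted above.

```python
import math

T_HALF_F18_MIN = 109.77   # fluorine-18 half-life in minutes

def decay_corrected_yield(activity_product, activity_start, elapsed_min):
    """Radiochemical yield corrected back to the start of synthesis.

    Divides out the fraction of F-18 lost to decay during the preparation,
    so a 60-min synthesis with ~34% non-corrected yield comes out near 50%
    decay-corrected.
    """
    decay_factor = math.exp(-math.log(2) * elapsed_min / T_HALF_F18_MIN)
    return activity_product / (activity_start * decay_factor)

# Example: 3.7 GBq start, a hypothetical 1.27 GBq of [18F]FLT after 60 min
print(f"{decay_corrected_yield(1.27, 3.7, 60.0):.1%}")   # ~50%
```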

  19. Activities of daily living measured by the Harvard Automated Phone Task track with cognitive decline over time in non-demented elderly

    Science.gov (United States)

    Marshall, Gad A.; Aghjayan, Sarah L.; Dekhtyar, Maria; Locascio, Joseph J.; Jethwani, Kamal; Amariglio, Rebecca E.; Johnson, Keith A.; Sperling, Reisa A.; Rentz, Dorene M.

    2017-01-01

    Background Impairment in activities of daily living is a major burden to both patients and caregivers. Mild impairment in instrumental activities of daily living is often seen at the stage of mild cognitive impairment. The field of Alzheimer’s disease is moving toward earlier diagnosis and intervention, and more sensitive and ecologically valid assessments of instrumental or complex activities of daily living are needed. The Harvard Automated Phone Task, a novel performance-based activities of daily living instrument, has the potential to fill this gap. Objective To further validate the Harvard Automated Phone Task by assessing its longitudinal relationship to global cognition and specific cognitive domains in clinically normal elderly and individuals with mild cognitive impairment. Design In a longitudinal study, the Harvard Automated Phone Task was associated with cognitive measures using mixed effects models. The Harvard Automated Phone Task’s ability to discriminate across diagnostic groups at baseline was also assessed. Setting Academic clinical research center. Participants Two hundred and seven participants (45 young normal, 141 clinically normal elderly, and 21 mild cognitive impairment) were recruited from the community and the memory disorders clinics at Brigham and Women’s Hospital and Massachusetts General Hospital. Measurements Participants performed the three tasks of the Harvard Automated Phone Task, which consist of navigating an interactive voice response system to refill a prescription (APT-Script), select a new primary care physician (APT-PCP), and make a bank account transfer and payment (APT-Bank). The 3 tasks were scored based on time, errors, repetitions, and correct completion of the task. The primary outcome measure used for each of the tasks was total time adjusted for correct completion. Results The Harvard Automated Phone Task discriminated well between young normal, clinically normal elderly, and mild cognitive impairment.

  20. 78 FR 66039 - Modification of National Customs Automation Program Test Concerning Automated Commercial...

    Science.gov (United States)

    2013-11-04

    ... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly...) plan to both rename and modify the National Customs Automation Program (NCAP) test concerning the... data elements required to obtain release for cargo transported by air. The test will now be known as...

  1. Automation-aided Task Loads Index based on the Automation Rate Reflecting the Effects on Human Operators in NPPs

    International Nuclear Information System (INIS)

    Lee, Seungmin; Seong, Poonghyun; Kim, Jonghyun

    2013-01-01

    Many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, a new estimation method for the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs) was suggested. The suggested measures express how much the automation supports human operators, but they cannot express the change in the operators' workload, i.e., whether it increases or decreases. Before considering automation rates, whether the adopted automation is good or bad might be estimated in advance. In this study, to estimate the appropriateness of automation according to the change in the human operators' task loads, an automation-aided task load index is suggested based on the concept of the suggested automation rate. To ensure plant safety and efficiency on behalf of human operators, various automation systems have been installed in NPPs, and many tasks which were previously conducted by human operators can now be supported by computer-based operator aids. According to the characteristics of the automation types, estimation methods for the system automation rate and the cognitive automation rate were suggested. The proposed estimation method concentrates on the effects of introducing automation, so it directly expresses how much the automated system supports human operators. Based on the suggested automation rates, a way to estimate how much the automated system can affect the human operators' cognitive task load is suggested in this study. When there is no automation, the calculated index is 1, meaning there is no change in the human operators' task load.

  2. Exploring Deep Computing in CMS for Automated Data Validation in DQM

    CERN Document Server

    Fernandez Madrazo, Celia

    2017-01-01

    This project explored the inclusion of a variational autoencoder in automated data validation in DQM. The analysis was carried out with muon features only. The main goal is to reconstruct the given lumisections and check whether good and bad lumisections can be separated by means of the latent space representation given by the developed autoencoder. In the end, many features of good lumisections seem to be correctly reconstructed, but the latent space representation does not give a proper distinction between the two types of samples.
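
    For orientation, a variational autoencoder of this kind compresses each lumisection's feature vector to a low-dimensional latent code and is trained to reconstruct the input; separation is then attempted in the latent space. Below is a generic minimal sketch in PyTorch; the feature count, layer widths and two-dimensional latent space are our assumptions, not the project's actual architecture.

```python
import torch
from torch import nn

class MuonVAE(nn.Module):
    """Minimal VAE over per-lumisection muon features (sizes illustrative)."""
    def __init__(self, n_features=20, latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction error plus KL divergence to the unit Gaussian prior
    mse = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + kld

# One training step on a hypothetical batch of lumisection features
model = MuonVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(128, 20)
recon, mu, logvar = model(batch)
loss = vae_loss(recon, batch, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```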

  3. Samen werken aan Automatische VoertuigGeleiding: aanzet tot een businessplan [Working together on Automated Vehicle Guidance; Preliminary business plan

    NARCIS (Netherlands)

    Coemet, M.J.; Vos, A.P. de; Arem, B. van; Brookhuis, K.A.; Heijer, T.; Marchau, V.A.W.J.

    1998-01-01

    Automated Vehicle Guidance (AVG) systems are expected to have a major impact on traffic and transport. In order to reap the benefits and offset or avoid the disadvantages of AVG, correct and timely choices will have to be made. The Ministry of Transport, Public Works and Water Management, the

  4. Automated system for review of radiotherapy treatment sheets; Sistema automatizado para la revision de hojas de tratamiento en radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Collado Chamorro, P.; Sanz Freire, C. J.; Vazquez Galinanes, A.; Diaz Pascual, V.; Gomez amez, J.; Martinez Sanchez, S.; Ossola Lentati, G. A.

    2011-07-01

    Many modern radiotherapy departments are beginning to implement treatment sheets in electronic form. Our department has developed an automated review system that checks the following parameters: correct completion of the treatment, number of sessions, and cumulative dose administered. Treatments are likewise verified as delivered on the allocated unit, and the overwriting of table parameters is checked.

  5. Asleep at the automated wheel-Sleepiness and fatigue during highly automated driving.

    Science.gov (United States)

    Vogelpohl, Tobias; Kühn, Matthias; Hummel, Thomas; Vollrath, Mark

    2018-03-20

    Due to the lack of active involvement in the driving situation and due to monotonous driving environments, drivers with automation may be prone to become fatigued faster than manual drivers (e.g. Schömig et al., 2015). However, little is known about the progression of fatigue during automated driving and its effects on the ability to take back manual control after a take-over request. In this driving simulator study with N = 60 drivers we used a three-factorial 2 × 2 × 12 mixed design to analyze the progression (12 × 5 min; within subjects) of driver fatigue in drivers with automation compared to manual drivers (between subjects). Driver fatigue was induced as either mainly sleep related or mainly task related fatigue (between subjects). Additionally, we investigated the drivers' reactions to a take-over request in a critical driving scenario to gain insights into the ability of fatigued drivers to regain manual control and situation awareness after automated driving. Drivers in the automated driving condition exhibited facial indicators of fatigue after 15 to 35 min of driving. Manual drivers only showed similar indicators of fatigue if they suffered from a lack of sleep and then only after a longer period of driving (approx. 40 min). Several drivers in the automated condition closed their eyes for extended periods of time. In the driving with automation condition, mean automation deactivation times after a take-over request were slower for a certain percentage (about 30%) of the drivers with a lack of sleep (M = 3.2; SD = 2.1 s) compared to the reaction times after a long drive (M = 2.4; SD = 0.9 s). Drivers with automation also took longer than manual drivers to first glance at the speed display after a take-over request and were more likely to stay behind a braking lead vehicle instead of overtaking it. Drivers are unable to stay alert during extended periods of automated driving without non-driving related tasks. Fatigued drivers could

  6. Automated dual-wavelength spectrophotometer optimized for phytochrome assay

    International Nuclear Information System (INIS)

    Pratt, L.H.; Wampler, J.E.; Rich, E.S. Jr.

    1985-01-01

    A microcomputer-controlled dual-wavelength spectrophotometer suitable for automated phytochrome assay is described. The optomechanical unit provides for sequential irradiation of the sample by the two measuring wavelengths with intervening dark intervals and for actinic irradiation to interconvert phytochrome between its two forms. Photomultiplier current is amplified, converted to a digital value and transferred into the computer using a custom-designed IEEE-488 bus interface. The microcomputer calculates mathematically both absorbance and absorbance difference values with dynamic correction for photomultiplier dark current. In addition, the computer controls the operating parameters of the spectrophotometer via a separate interface. These parameters include control of the durations of measuring and actinic irradiation intervals and their sequence. 14 references, 4 figures
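
    The core arithmetic, computing absorbance and the dual-wavelength absorbance difference with the dark current subtracted from every photomultiplier reading, is easy to sketch. The function names and example readings below are illustrative assumptions; only the dark-current-corrected difference idea comes from the abstract.

```python
import math

def absorbance(sample_counts, reference_counts, dark_counts):
    """Absorbance from photomultiplier readings with dark-current correction."""
    return -math.log10((sample_counts - dark_counts) /
                       (reference_counts - dark_counts))

def absorbance_difference(i_lambda1, i_lambda2, i_ref, i_dark):
    """Dual-wavelength difference A(lambda1) - A(lambda2).

    Both measuring beams pass through the same sample, so scattering and
    slow drift largely cancel in the difference; the dark current is
    removed from every reading before taking logs.
    """
    return (absorbance(i_lambda1, i_ref, i_dark) -
            absorbance(i_lambda2, i_ref, i_dark))

# Example with hypothetical readings (arbitrary units)
print(absorbance_difference(8200.0, 9100.0, 10000.0, 150.0))   # ~0.046
```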

  7. Procedure automation: the effect of automated procedure execution on situation awareness and human performance

    International Nuclear Information System (INIS)

    Andresen, Gisle; Svengren, Haakan; Heimdal, Jan O.; Nilsen, Svein; Hulsund, John-Einar; Bisio, Rossella; Debroise, Xavier

    2004-04-01

    As advised by the procedure workshop convened in Halden in 2000, the Halden Project conducted an experiment on the effect of automation of Computerised Procedure Systems (CPS) on situation awareness and human performance. The expected outcome of the study was to provide input for guidance on CPS design, and to support the Halden Project's ongoing research on human reliability analysis. The experiment was performed in HAMMLAB using the HAMBO BWR simulator and the COPMA-III CPS. Eight crews of operators from Forsmark 3 and Oskarshamn 3 participated. Three research questions were investigated: 1) Does procedure automation create Out-Of-The-Loop (OOTL) performance problems? 2) Does procedure automation affect situation awareness? 3) Does procedure automation affect crew performance? The independent variable, 'procedure configuration', had four levels: paper procedures, manual CPS, automation with breaks, and full automation. The results showed that the operators experienced OOTL problems in full automation, but that situation awareness and crew performance (response time) were not affected. One possible explanation for this is that the operators monitored the automated procedure execution conscientiously, which may have prevented the OOTL problems from having negative effects on situation awareness and crew performance. In a debriefing session, the operators clearly expressed their dislike for the full automation condition, but said that automation with breaks could be suitable for some tasks. The main reason why the operators did not like the full automation was that they did not feel in control. A qualitative analysis addressing factors contributing to response time delays revealed that OOTL problems did not seem to cause delays, but that some delays could be explained by the operators having problems with the freeze function of the CPS. Other factors such as teamwork and operator tendencies were also of importance. Several design implications were drawn.

  8. Advances toward fully automated in vivo assessment of oral epithelial dysplasia by nuclear endomicroscopy-A pilot study.

    Science.gov (United States)

    Liese, Jan; Winter, Karsten; Glass, Änne; Bertolini, Julia; Kämmerer, Peer Wolfgang; Frerich, Bernhard; Schiefke, Ingolf; Remmerbach, Torsten W

    2017-11-01

    Uncertainties in the detection of oral epithelial dysplasia (OED) frequently result from sampling error, especially in inflammatory oral lesions. Endomicroscopy allows non-invasive, "en face" imaging of the upper oral epithelium, but parameters of OED are unknown. Mucosal nuclei were imaged in 34 toluidine blue-stained oral lesions with a commercial endomicroscopy system. Histopathological diagnosis placed four biopsies in the "dys-/neoplastic," 23 in the "inflammatory," and seven in the "others" disease groups. The strength of different assessment strategies of nuclear scoring, nuclear count, and automated nuclear analysis was measured by the area under the ROC curve (AUC) for identifying the histopathological "dys-/neoplastic" group. Nuclear objects from automated image analysis were visually corrected. The best-performing parameters of nuclear-to-image ratios were the count of large nuclei (AUC=0.986) and the 6-nearest-neighborhood relation (AUC=0.896), and the best parameters of nuclear polymorphism were the count of atypical nuclei (AUC=0.996) and the compactness of nuclei (AUC=0.922). Excluding low-grade OED, nuclear scoring and count reached 100% sensitivity and 98% specificity for detection of dys-/neoplastic lesions. In automated analysis, combining parameters enhanced diagnostic strength. Sensitivity of 100% and specificity of 87% were seen for distances of 6-nearest neighbors and aspect ratios, even in uncorrected objects. Correction improved measures of nuclear polymorphism only. The hue of the background color was stronger than nuclear density (AUC=0.779 vs 0.687) at detecting the dys-/neoplastic group, indicating that the macroscopic aspect is biased. Nuclear-to-image ratios are applicable for automated optical in vivo diagnostics for oral potentially malignant disorders. Nuclear endomicroscopy may promote non-invasive, early detection of dys-/neoplastic lesions by reducing sampling error. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
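
    Each candidate parameter above is scored by the area under its ROC curve against the histopathological label. The snippet below shows that scoring pattern with scikit-learn; the 4/30 split and the feature distributions are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-lesion feature (count of atypical nuclei) against the
# "dys-/neoplastic" label, scored the way the abstract ranks parameters.
rng = np.random.default_rng(3)
labels = np.array([1] * 4 + [0] * 30)            # 4 dysplastic, 30 other lesions
atypical_count = np.where(labels == 1,
                          rng.normal(25, 5, labels.size),
                          rng.normal(5, 3, labels.size))
print("AUC:", roc_auc_score(labels, atypical_count))
```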

  9. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  10. Automated radiosynthesis of [11C]morphine for clinical investigation

    Energy Technology Data Exchange (ETDEWEB)

    Fan Jinda [Department of Radiology, Washington University School of Medicine, 510 South Kingshighway Blvd. St. Louis, MO 63110 (United States); Meissner, Konrad [Department of Anesthesiology, Washington University School of Medicine, 510 South Kingshighway Blvd. St. Louis, MO 63110 (United States); Gaehle, Gregory G.; Li Shihong [Department of Radiology, Washington University School of Medicine, 510 South Kingshighway Blvd. St. Louis, MO 63110 (United States); Kharasch, Evan D. [Department of Anesthesiology, Washington University School of Medicine, 510 South Kingshighway Blvd. St. Louis, MO 63110 (United States); Mach, Robert H. [Department of Radiology, Washington University School of Medicine, 510 South Kingshighway Blvd. St. Louis, MO 63110 (United States); Tu Zhude, E-mail: tuz@mir.wustl.ed [Department of Radiology, Washington University School of Medicine, 510 South Kingshighway Blvd. St. Louis, MO 63110 (United States)

    2011-02-15

    To meet a multiple-dose clinical evaluation of the P-gp modulation of [11C]morphine delivery into the human brain, radiosynthesis of [11C]morphine was accomplished on an automated system by N-methylation of normorphine with [11C]CH3I. A methodology employing optimized solid phase extraction of the HPLC eluent was developed. Radiosynthesis took 45 min with a radiochemical yield ranging from 45% to 50% and specific activity ranging from 20 to 26 Ci/µmol (decay corrected to end-of-bombardment); radiochemical and chemical purities were >95% (n=28).

  11. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    Science.gov (United States)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction in the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) solvation energies of ligands were five-fold worse than those of any other method used in SAMPL4, including methods that were similarly fast; (2) HIV integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) molecular docking grid sizes can be very important; serious errors were discovered with default settings that have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  12. Automation-aided Task Loads Index based on the Automation Rate Reflecting the Effects on Human Operators in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seungmin; Seong, Poonghyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Jonghyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-05-15

    Many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, a new estimation method for the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs) was suggested. The suggested measures express how much the automation supports human operators, but they cannot express the change in the operators' workload, i.e., whether it increases or decreases. Before considering automation rates, whether the adopted automation is good or bad might be estimated in advance. In this study, to estimate the appropriateness of automation according to the change in the human operators' task loads, an automation-aided task load index is suggested based on the concept of the suggested automation rate. To ensure plant safety and efficiency on behalf of human operators, various automation systems have been installed in NPPs, and many tasks which were previously conducted by human operators can now be supported by computer-based operator aids. According to the characteristics of the automation types, estimation methods for the system automation rate and the cognitive automation rate were suggested. The proposed estimation method concentrates on the effects of introducing automation, so it directly expresses how much the automated system supports human operators. Based on the suggested automation rates, a way to estimate how much the automated system can affect the human operators' cognitive task load is suggested in this study. When there is no automation, the calculated index is 1, meaning there is no change in the human operators' task load.

  13. Automated system of monitoring and positioning of functional units of mining technological machines for coal-mining enterprises

    Directory of Open Access Journals (Sweden)

    Meshcheryakov Yaroslav

    2018-01-01

    This article describes the development of an automated monitoring and positioning system for the functional units of mining technological machines. It describes the structure, element base, and algorithms for identifying the operating states of a walking excavator; the various types of errors in the functioning of microelectromechanical gyroscopes and accelerometers; and methods for their correction based on the Madgwick fusion filter. The results of industrial tests of the automated monitoring and positioning system at one of the opencast coal mines of Kuzbass are presented. This work is addressed to specialists working in the development of embedded systems and control systems, radio electronics, mechatronics, and robotics.
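
    The Madgwick fusion filter mentioned above corrects gyroscope drift by nudging the integrated orientation quaternion along the gradient that aligns predicted and measured gravity. A minimal single-step gyro-plus-accelerometer variant is sketched below; the gain, rates and test values are illustrative, and the magnetometer and bias-drift terms of the full published algorithm are omitted.

```python
import numpy as np

def madgwick_imu_update(q, gyro, accel, beta=0.1, dt=0.01):
    """One Madgwick fusion step for a gyro + accelerometer pair.

    q     : current orientation quaternion (w, x, y, z)
    gyro  : angular rate in rad/s, body frame
    accel : accelerometer reading (only its direction is used)
    beta  : gain trading gyro smoothness against accelerometer correction
    """
    qw, qx, qy, qz = q
    ax, ay, az = accel / np.linalg.norm(accel)

    # Gradient of the objective aligning predicted and measured gravity
    f = np.array([2*(qx*qz - qw*qy) - ax,
                  2*(qw*qx + qy*qz) - ay,
                  2*(0.5 - qx*qx - qy*qy) - az])
    J = np.array([[-2*qy,  2*qz, -2*qw, 2*qx],
                  [ 2*qx,  2*qw,  2*qz, 2*qy],
                  [    0, -4*qx, -4*qy,    0]])
    step = J.T @ f
    step /= np.linalg.norm(step)

    # Quaternion rate from the gyro, corrected along the gradient
    gx, gy, gz = gyro
    q_dot = 0.5 * np.array([-qx*gx - qy*gy - qz*gz,
                             qw*gx + qy*gz - qz*gy,
                             qw*gy - qx*gz + qz*gx,
                             qw*gz + qx*gy - qy*gx]) - beta * step
    q = q + q_dot * dt
    return q / np.linalg.norm(q)

# Example: fuse a steady 0.1 rad/s roll rate with a gravity reading for 1 s
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = madgwick_imu_update(q, np.array([0.1, 0.0, 0.0]),
                            np.array([0.0, 0.0, 9.81]), dt=0.01)
print(q)   # unit-norm quaternion after the fused updates
```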

  14. Automation of radioimmunoassay

    International Nuclear Information System (INIS)

    Yamaguchi, Chisato; Yamada, Hideo; Iio, Masahiro

    1974-01-01

    Automation systems under development for measuring Australian antigen by radioimmunoassay were discussed. Samples were processed as follows: blood serum was dispensed by an automated sampler into test tubes and then incubated under controlled time and temperature; the first counting was omitted; labelled antibody was dispensed into the serum after washing; samples were incubated and then centrifuged; radioactivities in the precipitate were counted by an auto-well counter; and measurements were tabulated by an automated typewriter. Not only a well-type counter but also a position counter was studied. (Kanao, N.)

  15. Fully automated one-pot radiosynthesis of O-(2-[18F]fluoroethyl)-L-tyrosine on the TracerLab FXFN module

    Energy Technology Data Exchange (ETDEWEB)

    Bourdier, Thomas, E-mail: bts@ansto.gov.au [LifeSciences, Australian Nuclear Science and Technology Organisation, Locked Bag 2001, Kirrawee DC NSW 2232, Sydney (Australia); Greguric, Ivan [LifeSciences, Australian Nuclear Science and Technology Organisation, Locked Bag 2001, Kirrawee DC NSW 2232, Sydney (Australia); Roselt, Peter [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, 12 St Andrew's Place, East Melbourne, VIC, 3002 (Australia); Jackson, Tim; Faragalla, Jane; Katsifis, Andrew [LifeSciences, Australian Nuclear Science and Technology Organisation, Locked Bag 2001, Kirrawee DC NSW 2232, Sydney (Australia)

    2011-07-15

    Introduction: An efficient fully automated method for the radiosynthesis of enantiomerically pure O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) using the GE TracerLab FXFN synthesis module via the O-(2-tosyloxyethyl)-N-trityl-L-tyrosine tert-butyl ester precursor has been developed. Methods: The radiolabelling of [18F]FET involved a classical [18F]fluoride nucleophilic substitution performed in acetonitrile using potassium carbonate and Kryptofix 222, followed by acid hydrolysis using 2 N hydrochloric acid. Results: [18F]FET was produced in 35±5% (n=22) non-decay-corrected yield (55±5% decay-corrected), with radiochemical and enantiomeric purities of >99% and a specific activity of >90 GBq/µmol, after 63 min of radiosynthesis including HPLC purification and formulation. Conclusion: The automated radiosynthesis provides high and reproducible yields suitable for routine clinical use.

  16. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-08-14

    ... National Customs Automation Program (NCAP) test concerning the simplified entry functionality in the... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of...

  17. galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.

    Science.gov (United States)

    Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M

    2004-06-12

    The prevalent use of similarity searches like BLAST to identify sequences and species implicitly assumes that the reference database contains an extensive sampling of sequences. This is often not the case, limiting the correctness of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies and consequently sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se

  18. Supervised learning for the automated transcription of spacer classification from spoligotype films

    Directory of Open Access Journals (Sweden)

    Abernethy Neil

    2009-08-01

    Background Molecular genotyping of bacteria has revolutionized the study of tuberculosis epidemiology, yet these established laboratory techniques typically require subjective and laborious interpretation by trained professionals. In the context of a Tuberculosis Case Contact study in The Gambia we used a reverse hybridization laboratory assay called spoligotype analysis. To facilitate processing of spoligotype images we have developed tools and algorithms to automate the classification and transcription of these data directly to a database while allowing for manual editing. Results Features extracted from each of the 1849 spots on a spoligo film were classified using two supervised learning algorithms. A graphical user interface allows manual editing of the classification before export to a database. The application was tested on ten films of differing quality, and the results of the best classifier were compared to expert manual classification, giving a median correct classification rate of 98.1% (interquartile range: 97.1% to 99.2%), with an automated processing time of less than 1 minute per film. Conclusion The software implementation offers considerable time savings over manual processing whilst allowing expert editing of the automated classification. The automatic upload of the classification to a database reduces the chances of transcription errors.

  19. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images were examined, containing 255 diatom particles. Of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software.) Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting that the software has an approximately five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  20. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although total laboratory automation has not yet been implemented, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA)n repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.
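
    The deconvolution idea can be illustrated generically: treat the observed band intensities as the true allele weights convolved with a one-sided stutter response, then invert with non-negative least squares. The kernel shape and intensities below are invented; the paper's actual deconvolution method is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_stutter(observed, stutter_kernel):
    """Recover allele weights from a stutter-contaminated band pattern.

    Column j of the design matrix is the stutter response of an allele at
    bin j (stutter bands run toward shorter fragments); non-negative least
    squares then inverts the convolution.
    """
    n, k = len(observed), len(stutter_kernel)
    A = np.zeros((n, n))
    for j in range(n):
        for i in range(k):
            if j - i >= 0:
                A[j - i, j] = stutter_kernel[i]
    weights, _ = nnls(A, np.asarray(observed, dtype=float))
    return weights

# Two alleles one repeat apart, each trailing ~30% and ~10% stutter bands;
# their overlapping patterns sum to the observed lane below.
kernel = np.array([1.0, 0.3, 0.1])
observed = np.zeros(10)
observed[[4, 5, 6, 7]] = [0.10, 0.38, 1.24, 0.80]
print(np.round(deconvolve_stutter(observed, kernel), 3))   # peaks at bins 6 and 7
```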

  2. Managing laboratory automation.

    Science.gov (United States)

    Saboe, T J

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed.

  3. Program Correctness, Verification and Testing for Exascale (Corvette)

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Koushik [Univ. of California, Berkeley, CA (United States); Iancu, Costin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Demmel, James W [UC Berkeley

    2018-01-26

    The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.

  4. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. We should consider the positive and negative effects of automation at the same time to determine the appropriate level of its introduction. Thus, in this paper, we suggest an estimation method that considers the positive and negative effects of automation simultaneously to determine the appropriate introduction of automation. This concept is limited in that it does not consider the effects of automation on human operators. Thus, a new estimation method for the automation rate was suggested to overcome this problem.

  5. 78 FR 44142 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-07-23

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... (CBP's) plan to modify the National Customs Automation Program (NCAP) tests concerning document imaging... entry process by reducing the number of data elements required to obtain release for cargo transported...

  6. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.
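
    The second unsupervised alternative mentioned above (DBSCAN with a DTW distance) can be approximated with off-the-shelf tools by handing a precomputed DTW distance matrix to DBSCAN. The sketch below uses toy one-dimensional sensor traces and illustrative eps/min_samples values; it is not the dissertation's implementation.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def dtw(a, b):
            # Classic O(len(a) * len(b)) dynamic-time-warping distance.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Toy daily traces (e.g., hourly device-usage levels) in two patterns.
        seqs = [np.sin(np.linspace(0, 6, 24)) + 0.05 * i for i in range(5)]
        seqs += [np.cos(np.linspace(0, 6, 24)) for _ in range(5)]
        dist = np.array([[dtw(a, b) for b in seqs] for a in seqs])

        labels = DBSCAN(eps=2.0, min_samples=2, metric="precomputed").fit_predict(dist)
        print(labels)  # -1 marks noise; other labels are discovered activity clusters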

  7. Adaptive Automation Design and Implementation

    Science.gov (United States)

    2015-09-17

    with an automated system to a real-world adaptive automation system implementation. There have been plenty of adaptive automation ... of systems without increasing manpower requirements by allocating routine tasks to automated aids, improving safety through the use of automated ... between intermediate levels of automation, explicitly defining which human task a given level automates. Each model aids the creation and classification

  8. Layered distributed architecture for plant automation

    International Nuclear Information System (INIS)

    Aravamuthan, G.; Verma, Yachika; Ranjan, Jyoti; Chachondia, Alka S.; Ganesh, G.

    2005-01-01

    The development of plant automation systems and associated software remains one of the greatest challenges to the widespread implementation of highly adaptive, re-configurable automation technology. This paper presents a layered distributed architecture for a plant automation system designed to support rapid reconfiguration and redeployment of automation components. The paper first presents the evolution of automation architectures and their associated environments over the past few decades, and then presents the concept of a layered system architecture and the use of automation components to support the construction of a wide variety of automation systems. It also highlights the role of standards and technology in the development of automation components. We have attempted to adhere to open standards and technology for the development of automation components at the various layers. It also highlights the application of this concept in the development of an Operator Information System (OIS) for the Advanced Heavy Water Reactor (AHWR). (author)

  9. NMRNet: A deep learning approach to automated peak picking of protein NMR spectra.

    Science.gov (United States)

    Klukowski, Piotr; Augoff, Michal; Zieba, Maciej; Drwal, Maciej; Gonczarek, Adam; Walczak, Michal J

    2018-03-14

    Automated selection of signals in protein NMR spectra, known as peak picking, has been studied for over 20 years; nevertheless, existing peak-picking methods are still largely deficient. Accurate and precise automated peak picking would accelerate structure calculation and the analysis of dynamics and interactions of macromolecules. Recent advances in handling big data, together with an outburst of machine-learning techniques, offer an opportunity to tackle the peak-picking problem substantially faster than manual picking and on par with human accuracy. In particular, deep learning has proven to systematically achieve human-level performance in various recognition tasks, and thus emerges as an ideal tool to address automated identification of NMR signals. We have applied a convolutional neural network for visual analysis of multidimensional NMR spectra. A comprehensive test on 31 manually-annotated spectra has demonstrated top-tier average precision (AP) of 0.9596, 0.9058 and 0.8271 for backbone, side-chain and NOESY spectra, respectively. Furthermore, a combination of the extracted peak lists with the automated assignment routine FLYA outperformed other methods, including the manual one, and led to correct resonance assignment at the levels of 90.40%, 89.90% and 90.20% for three benchmark proteins. The proposed model is part of the Dumpling software (a platform for protein NMR data analysis) and is available at https://dumpling.bio/. Contact: michaljerzywalczak@gmail.com; piotr.klukowski@pwr.edu.pl. Supplementary data are available at Bioinformatics online.
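
    As a rough illustration of the approach, a convolutional classifier can score fixed-size patches of a spectrum for the presence of a peak. The PyTorch sketch below is a minimal stand-in assuming 32x32 normalized patches; it is not the published NMRNet architecture.

        import torch
        import torch.nn as nn

        class PeakPatchNet(nn.Module):
            # Binary classifier: does a 32x32 spectral patch contain a peak?
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 2),
                )

            def forward(self, x):
                return self.head(self.features(x))

        model = PeakPatchNet()
        patches = torch.randn(4, 1, 32, 32)   # stand-ins for normalized spectrum patches
        print(model(patches).softmax(dim=1))  # per-patch peak / no-peak probabilities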

  10. Automated MR morphometry to predict Alzheimer's disease in mild cognitive impairment

    Energy Technology Data Exchange (ETDEWEB)

    Fritzsche, Klaus H.; Schlindwein, Sarah; Bruggen, Thomas van; Meinzer, Hans-Peter [German Cancer Research Center, Division of Medical and Biological Informatics, Heidelberg (Germany); Stieltjes, Bram; Essig, Marco [German Cancer Research Center, Division of Radiology, Heidelberg (Germany)

    2010-12-15

    Prediction of progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD) is challenging but essential for early treatment. This study aims to investigate the use of hippocampal atrophy markers for the automatic detection of MCI converters and to compare the predictive value to manually obtained hippocampal volume and temporal horn width. A study was performed with 15 patients with Alzheimer's disease and 18 patients with MCI (ten converted, eight remained stable in a 3-year follow-up) as well as 15 healthy subjects. MRI scans were obtained at baseline and evaluated with an automated system for scoring of hippocampal atrophy. The predictive value of the automated system was compared with manual measurements of hippocampal volume and temporal horn width in the same subjects. The conversion to AD was correctly predicted in 77.8% of the cases (sensitivity 70%, specificity 87.5%) in the MCI group using automated morphometry and a plain linear classifier that was trained on the AD and healthy groups. Classification was improved by limiting the analysis to the left cerebral hemisphere (accuracy 83.3%, sensitivity 70%, specificity 100%). The manual linear and volumetric approaches reached rates of 66.7% (sensitivity 40%, specificity 100%) and 72.2% (sensitivity 60%, specificity 87.5%), respectively. The automatic approach fulfills many important preconditions for clinical application. Contrary to the manual approaches, it is not observer-dependent and reduces human resource requirements. Automated assessment may be useful for individual patient assessment and for predicting progression to dementia. (orig.)
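
    The classification strategy described (train a plain linear classifier on the AD and healthy groups, then apply it to baseline MCI scans) can be sketched as follows; the two atrophy features and group statistics are synthetic stand-ins, not the study's hippocampal scores.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        # Stand-in atrophy features: [hippocampal volume score, temporal horn score].
        X_ad = rng.normal([-1.0, 1.0], 0.4, size=(15, 2))       # Alzheimer group
        X_healthy = rng.normal([1.0, -1.0], 0.4, size=(15, 2))  # healthy controls
        X_mci = rng.normal([0.0, 0.0], 0.6, size=(18, 2))       # MCI, outcome unknown

        # Train only on the AD and healthy groups, as in the study design.
        clf = LogisticRegression().fit(
            np.vstack([X_ad, X_healthy]),
            np.array([1] * 15 + [0] * 15),  # 1 = AD-like, 0 = healthy-like
        )
        # MCI scans falling on the AD side of the boundary are predicted converters.
        print(clf.predict(X_mci))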

  11. Semantics-based Automated Web Testing

    Directory of Open Access Journals (Sweden)

    Hai-Feng Guo

    2015-08-01

    We present TAO, a software testing tool performing automated test and oracle generation based on a semantic approach. TAO entangles grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be incorporated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies. A real-life parking website is adopted throughout the paper to demonstrate the effectiveness of our semantics-based web testing approach.

  12. Automation in organizations: Eternal conflict

    Science.gov (United States)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  13. Mobile home automation-merging mobile value added services and home automation technologies

    OpenAIRE

    Rosendahl, Andreas; Hampe, Felix J.; Botterweck, Goetz

    2007-01-01

    In this paper we study mobile home automation, a field that emerges from an integration of mobile application platforms and home automation technologies. In a conceptual introduction we first illustrate the need for such applications by introducing a two-dimensional conceptual model of mobility. Subsequently we suggest an architecture and discuss different options of how a user might access a mobile home automation service and the controlled devices. As another contrib...

  14. Future Trends in Process Automation

    OpenAIRE

    Jämsä-Jounela, Sirkka-Liisa

    2007-01-01

    The importance of automation in the process industries has increased dramatically in recent years. In the highly industrialized countries, process automation serves to enhance product quality, master the whole range of products, improve process safety and plant availability, efficiently utilize resources and lower emissions. In the rapidly developing countries, mass production is the main motivation for applying process automation. The greatest demand for process automation is in the chemical...

  15. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  16. 76 FR 34246 - Automated Commercial Environment (ACE); Announcement of National Customs Automation Program Test...

    Science.gov (United States)

    2011-06-13

    ... Environment (ACE); Announcement of National Customs Automation Program Test of Automated Procedures for In... Customs Automation Program (NCAP) test relating to highway movements of commercial goods that are transported in-bond through the United States from one point in Canada to another point in Canada. The NCAP...

  17. Automated cloning methods.; TOPICAL

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 °C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment-generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding-region (PCR) fragments.

  18. Automated correction of spin-history related motion artefacts in fMRI : Simulated and phantom data

    NARCIS (Netherlands)

    Muresan, L; Renken, R.; Roerdink, J.B.T.M.; Duifhuis, H.

    This paper concerns the problem of correcting spin-history artefacts in fMRI data. We focus on the influence of through-plane motion on the history of magnetization. A change in object position will disrupt the tissue’s steady-state magnetization. The disruption will propagate to the next few

  19. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade-induced automation in manufacturing firms, using unique data combining a retrospective survey that we have assembled with register data for 2005-2010. In particular, we establish a causal effect whereby firms that have specialized in product types for which Chinese exports to the world market have risen sharply invest more in automated capital than firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with large increases in the scale and scope of automation have faster productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and by increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation.

  20. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends toward the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  1. Heel effect adaptive flat field correction of digital x-ray detectors

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yongjian [X-ray Products, Varian Medical Systems Inc., Liverpool, New York 13088 (United States); Wang, Jue [Department of Mathematics, Union College, Schenectady, New York 12308 (United States)

    2013-08-15

    digital x-ray imaging in an SID-variant environment. The technique is relatively simple, and can be easily incorporated into multiple-point gain calibration/correction techniques. It offers a potentially valuable tool for preprocessing digital x-ray images to boost image quality of mammography, chest and cardiac radiography, as well as automated computer aided diagnostic radiology.

  2. Heel effect adaptive flat field correction of digital x-ray detectors

    International Nuclear Information System (INIS)

    Yu, Yongjian; Wang, Jue

    2013-01-01

    digital x-ray imaging in an SID-variant environment. The technique is relatively simple, and can be easily incorporated into multiple-point gain calibration/correction techniques. It offers a potentially valuable tool for preprocessing digital x-ray images to boost image quality of mammography, chest and cardiac radiography, as well as automated computer aided diagnostic radiology

  3. The Automation-by-Expertise-by-Training Interaction.

    Science.gov (United States)

    Strauch, Barry

    2017-03-01

    I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval, demonstrated identical automation-related operator errors suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human-computer interaction, expertise, and training, and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire the necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.

  4. Implementing BosonSampling with time-bin encoding: Analysis of loss, mode mismatch, and time jitter

    Science.gov (United States)

    Motes, Keith R.; Dowling, Jonathan P.; Gilchrist, Alexei; Rohde, Peter P.

    2015-11-01

    It was recently shown by Motes, Gilchrist, Dowling, and Rohde [Phys. Rev. Lett. 113, 120501 (2014), 10.1103/PhysRevLett.113.120501] that a time-bin encoded fiber-loop architecture can implement an arbitrary passive linear optics transformation. This was shown in the case of an ideal scheme whereby the architecture has no sources of error. In any realistic implementation, however, physical errors are present, which corrupt the output of the transformation. We investigate the dominant sources of error in this architecture—loss and mode mismatch—and consider how they affect the BosonSampling protocol, a key application for passive linear optics. For our loss analysis we consider two major components that contribute to loss—fiber and switches—and calculate how each affects the success probability and fidelity of the device. Interestingly, we find that errors due to loss are not uniform (unique to time-bin encoding), which asymmetrically biases the implemented unitary. Thus loss necessarily limits the class of unitaries that may be implemented, and therefore future implementations must prioritize minimizing loss rates if arbitrary unitaries are to be implemented. Our formalism for mode mismatch is generalized to account for the various phenomena that may cause mode mismatch, but we focus on two—errors in fiber-loop lengths and time jitter of the photon source. These results provide a guideline for how well future experimental implementations might perform in light of these error mechanisms.
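
    As a back-of-the-envelope illustration of why loss dominates: if each of n photons must survive m loop traversals, each traversal with fiber transmission η_f and switch transmission η_s, the postselected success probability decays exponentially in both n and m. The transmission values below are illustrative assumptions, not figures from the paper.

        def success_probability(n_photons, n_loops, eta_fiber=0.98, eta_switch=0.95):
            # Probability that every photon survives all of its loop traversals,
            # assuming independent, uniform per-component transmissions.
            per_photon = (eta_fiber * eta_switch) ** n_loops
            return per_photon ** n_photons

        for n in (3, 5, 10):
            print(n, "photons:", success_probability(n, n_loops=n))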

  5. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on a Distribution Automation (DA) system, which enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in operation and maintenance (O and M) requirements.

  6. Human-centred automation: an explorative study

    International Nuclear Information System (INIS)

    Hollnagel, Erik; Miberg, Ann Britt

    1999-05-01

    The purpose of the programme activity on human-centred automation at the HRP is to develop knowledge (in the form of models and theories) and tools (in the form of techniques and simulators) to support the design of automation that ensures effective human performance and comprehension. This report presents the work done on both the analytical and the experimental side of this project. The analytical work has surveyed common definitions of automation and traditional design principles. A general finding is that human-centred automation usually is defined in terms of what it is not, partly due to a lack of adequate models of human-automation interaction. Another result is a clarification of the consequences of automation, in particular with regard to situation awareness and workload. The experimental work has taken place as an explorative experiment in HAMMLAB in collaboration with IPSN (France). The purpose of this experiment was to increase the understanding of how automation influences operator performance in NPP control rooms. Two different types of automation (extensive and limited) were considered in scenarios having two different degrees of complexity (high and low), and involving diagnostic and procedural tasks. Six licensed NPP crews from the NPP at Loviisa, Finland, participated in the experiment. The dependent variables applied were plant performance, operator performance, self-rated crew performance, situation awareness, workload, and operator trust in the automation. The results from the diagnostic scenarios indicated that operators' judgement of crew efficiency was related to their level of trust in the automation, and further that operators trusted automation least and rated crew performance lowest in situations where crew performance was efficient, and vice versa. The results from procedural scenarios indicated that extensive automation efficiently supported operators' performance, and further that operators' judgement of crew performance efficiency

  7. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  8. Toward designing for trust in database automation

    Energy Technology Data Exchange (ETDEWEB)

    Duez, P. P.; Jamieson, G. A. [Cognitive Engineering Laboratory, Univ. of Toronto, 5 King' s College Rd., Toronto, Ont. M5S 3G8 (Canada)

    2006-07-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation; one of their models identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.

  9. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Duez, P. P.; Jamieson, G. A.

    2006-01-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation; one of their models identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.

  10. Automated electron microprobe

    International Nuclear Information System (INIS)

    Thompson, K.A.; Walker, L.R.

    1986-01-01

    The Plant Laboratory at the Oak Ridge Y-12 Plant has recently obtained a Cameca MBX electron microprobe with a Tracor Northern TN5500 automation system. This allows full stage and spectrometer automation and digital beam control. The capabilities of the system include qualitative and quantitative elemental microanalysis for all elements above and including boron in atomic number, high- and low-magnification imaging and processing, elemental mapping and enhancement, and particle size, shape, and composition analyses. Very low magnification, quantitative elemental mapping using stage control (which is of particular interest) has been accomplished along with automated size, shape, and composition analysis over a large relative area

  11. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed, and with a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors, comparing the downloaded supporting data to the genome reports to verify genome names, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and the genome reports, or if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.
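
    The flagging logic amounts to joining the locally downloaded metadata to the genome-report table on a shared key and flagging fields that disagree or are missing. A pandas sketch of one such flag is shown below; the column names are illustrative, not AutoCurE's actual schema (AutoCurE itself runs in Excel).

        import pandas as pd

        downloads = pd.DataFrame({
            "refseq_acc": ["NC_000913.3", "NC_002695.2"],
            "genome_name": ["Escherichia coli K-12", "E. coli O157H7"],
        })
        report = pd.DataFrame({
            "refseq_acc": ["NC_000913.3", "NC_002695.2"],
            "genome_name": ["Escherichia coli K-12", "Escherichia coli O157:H7"],
        })

        merged = downloads.merge(report, on="refseq_acc", suffixes=("_dl", "_rep"))
        # Flag records whose downloaded name disagrees with the genome report.
        merged["name_flag"] = merged["genome_name_dl"] != merged["genome_name_rep"]
        print(merged.loc[merged["name_flag"],
                         ["refseq_acc", "genome_name_dl", "genome_name_rep"]])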

  12. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Currently, software engineers are increasingly turning to the option of automating functional tests, but they do not always succeed in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  13. Driver Psychology during Automated Platooning

    NARCIS (Netherlands)

    Heikoop, D.D.

    2017-01-01

    With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their

  14. Ionized calcium analyzer with a built-in pH correction.

    Science.gov (United States)

    Fogh-Andersen, N

    1981-07-01

    We describe a new semi-automated apparatus for simultaneously measuring the concentration of free calcium ion and of hydrogen ion (pH) at 37 °C. The sample volume is 110 µL. In addition to the actual values for these concentrations in the sample, the apparatus calculates the concentration of free calcium ion at pH 7.40. Mean values for serum from 51 fasting bedridden patients without calcium metabolic disorders and 64 fasting hospital employees were 1.192 and 1.232 mmol/L, respectively, with SDs of 0.042 and 0.040 mmol/L, respectively. The within-series analytical SD was 12 µmol/L and the day-to-day SD of the pH-corrected concentration of free calcium ion was 21 µmol/L, as calculated from measurements made on a serum pool after equilibration with a CO2-air mixture. The mean dependency on pH, as determined in 120 consecutive patients' sera, equalled the built-in pH correction. The accuracy was evaluated by comparison with other calcium ion-selective electrodes.
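
    A built-in correction of this kind normalizes the measured free calcium to pH 7.40. The sketch below uses a log-linear approximation with a literature-style coefficient of 0.24 per pH unit, assumed purely for illustration; the instrument's built-in factor may differ.

        def ca_at_ph740(ca_measured_mmol_l, ph_measured, k=0.24):
            # Log-linear correction: free Ca2+ falls as pH rises (more protein
            # binding), so samples measured above pH 7.40 are corrected upward.
            return ca_measured_mmol_l * 10 ** (k * (ph_measured - 7.40))

        print(round(ca_at_ph740(1.10, 7.50), 3))  # 1.10 mmol/L at pH 7.50 -> ~1.163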

  15. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
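
    The mechanics of such a Bayesian Belief Network assessment can be shown with a toy two-parent fragment: marginalize the conditional probability table over the parent probabilities, then lower a parent probability to represent an inserted technology and recompute the top-event risk. All numbers below are invented; this is not the FLAP model itself.

        # Toy two-parent fragment of an automation-risk network (numbers invented).
        def p_automation_error(p_complacency, p_sys_failure):
            # P(error | complacency, system failure) for each parent combination.
            cpt = {(True, True): 0.90, (True, False): 0.40,
                   (False, True): 0.30, (False, False): 0.05}
            total = 0.0
            for c in (True, False):
                for f in (True, False):
                    p_parents = (p_complacency if c else 1 - p_complacency) * \
                                (p_sys_failure if f else 1 - p_sys_failure)
                    total += cpt[(c, f)] * p_parents
            return total

        print(p_automation_error(0.30, 0.10))  # baseline top-event risk
        print(p_automation_error(0.15, 0.10))  # after a complacency-reducing technology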

  16. Cerebral perfusion and automated individual analysis using SPECT among an obsessive-compulsive population

    Directory of Open Access Journals (Sweden)

    Euclides Timóteo da Rocha

    2011-01-01

    OBJECTIVE: To make individual assessments using automated quantification methodology in order to screen for perfusion abnormalities in cerebral SPECT examinations among a sample of subjects with OCD. METHODS: Statistical parametric mapping (SPM) was used to compare 26 brain SPECT images from patients with OCD individually with an image bank of 32 normal subjects, using the statistical threshold of p < 0.05 (corrected for multiple comparisons at the level of individual voxels or clusters). The maps were analyzed, and regions presenting voxels that remained above this threshold were sought. RESULTS: Six patients from the sample of 26 OCD images showed abnormalities at cluster or voxel level, considering the criteria described above, which represented 23.07%. However, seven images from the normal group of 32 were also indicated as cases of perfusional abnormality, representing 21.8% of the sample. CONCLUSION: The automated quantification method was not considered to be a useful tool for clinical practice, for analyses complementary to visual inspection.
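
    Statistically, the individual-versus-group comparison amounts to a voxelwise test of one image against the normal bank, thresholded with a multiple-comparison correction. A minimal numpy sketch on synthetic data is given below, using a Bonferroni threshold rather than SPM's random-field correction.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        normals = rng.normal(100, 10, size=(32, 64, 64))  # bank of 32 normal images
        patient = rng.normal(100, 10, size=(64, 64))
        patient[20:24, 30:34] -= 50                       # simulated hypoperfused cluster

        z = (patient - normals.mean(axis=0)) / normals.std(axis=0, ddof=1)
        alpha_voxel = 0.05 / patient.size                 # Bonferroni across voxels
        z_crit = norm.ppf(alpha_voxel)                    # one-sided: hypoperfusion
        print("abnormal voxels:", int((z < z_crit).sum()))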

  17. Automated solid-phase subcloning based on beads brought into proximity by magnetic force.

    Science.gov (United States)

    Hudson, Elton P; Nikoshkov, Andrej; Uhlen, Mathias; Rockberg, Johan

    2012-01-01

    In the fields of proteomics, metabolic engineering and synthetic biology there is a need for high-throughput and reliable cloning methods to facilitate construction of expression vectors and genetic pathways. Here, we describe a new approach for solid-phase cloning in which both the vector and the gene are immobilized on separate paramagnetic beads and brought into proximity by magnetic force. Ligation events were directly evaluated using fluorescence-based microscopy and flow cytometry. The highest ligation efficiencies were obtained when gene- and vector-coated beads were brought into close contact by application of a magnet during the ligation step. An automated procedure was developed using a laboratory workstation to transfer genes into various expression vectors, and more than 95% correct clones were obtained in a number of applications. The method presented here is suitable for efficient subcloning in an automated manner to rapidly generate a large number of gene constructs in various vectors intended for high-throughput applications.

  18. Automation of a Beckman liquid scintillation counter for data capture and data-base management

    International Nuclear Information System (INIS)

    Neil, W.; Irwin, T.J.; Yang, J.J.

    1988-01-01

    A software package for the automation of a Beckman LS9000 liquid scintillation counter is presented. The package provides effective on-line data capture (with a Perkin Elmer 3230 32-bit minicomputer), data-base management, audit trail and archiving facilities. Key features of the package are rapid and flexible data entry, background subtraction, half-life correction, ability to queue several sample sets pending scintillation counting, and formatted report generation. A brief discussion is given on the development of customized data processing programs. (author)
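
    Two of the corrections named above are simple to state: subtract the background count rate, convert to activity via the counting efficiency, and scale by the isotope's decay factor to refer the activity back to a common reference time. A generic sketch follows; the counts, efficiency, and elapsed time are invented.

        import math

        def corrected_dpm(gross_cpm, background_cpm, efficiency,
                          half_life_days, elapsed_days):
            # Background subtraction, efficiency correction, then decay
            # correction back to the reference time (A = A0 * exp(-lambda * t)).
            net_cpm = gross_cpm - background_cpm
            dpm_now = net_cpm / efficiency
            decay_factor = math.exp(math.log(2.0) / half_life_days * elapsed_days)
            return dpm_now * decay_factor

        # e.g. a 32P sample (half-life ~14.3 d) counted 7 days after reference time:
        print(round(corrected_dpm(5230.0, 30.0, 0.90, 14.3, 7.0), 1))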

  19. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems for radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second step of the four basic processes in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette, and a fully automated pipetting station, both providing fast and accurate dispensing of the reagent or diluting of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  20. An automated system for the preparation of Large Size Dried (LSD) Spikes

    International Nuclear Information System (INIS)

    Verbruggen, A.; Bauwens, J.; Jakobsson, U.; Eykens, R.; Wellum, R.; Aregbe, Y.; Van De Steene, N.

    2008-01-01

    Large size dried (LSD) spikes have been produced to fulfill the existing requirement for reliable and traceable isotopic reference materials for nuclear safeguards. A system to produce certified nuclear isotopic reference material as a U/Pu mixture in the form of large size dried spikes, comparable to those produced using traditional methods has been installed in collaboration with Nucomat, a company with a recognized reputation in design and development of integrated automated systems. The major components of the system are a robot, two balances, a dispenser and a drying unit fitted into a glove box. The robot is software driven and designed to control all movements inside the glove-box, to identify unambiguously the penicillin vials with a bar-code reader, to dispense the LSD batch solution into the vials and to weigh the amount dispensed. The system functionality has been evaluated and the performance validated by comparing the results from a series of samples dispensed and weighed by the automated system with the results by manual substitution weighing. After applying the proper correction factors to the data from the automated system balance no significant difference was observed between the two. However, an additional component of uncertainty of 3×10^-4 is introduced in the uncertainty budget for the certified weights provided by the automatic system. (authors)

  1. An automated system for the preparation of Large Size Dried (LSD) Spikes

    Energy Technology Data Exchange (ETDEWEB)

    Verbruggen, A.; Bauwens, J.; Jakobsson, U.; Eykens, R.; Wellum, R.; Aregbe, Y. [European Commission - Joint Research Centre, Institute for Reference Materials and Measurements (IRMM), Retieseweg 211, B2440 Geel (Belgium); Van De Steene, N. [Nucomat, Mercatorstraat 206, B9100 Sint Niklaas (Belgium)

    2008-07-01

    Large size dried (LSD) spikes have been produced to fulfill the existing requirement for reliable and traceable isotopic reference materials for nuclear safeguards. A system to produce certified nuclear isotopic reference material as a U/Pu mixture in the form of large size dried spikes, comparable to those produced using traditional methods has been installed in collaboration with Nucomat, a company with a recognized reputation in design and development of integrated automated systems. The major components of the system are a robot, two balances, a dispenser and a drying unit fitted into a glove box. The robot is software driven and designed to control all movements inside the glove-box, to identify unambiguously the penicillin vials with a bar-code reader, to dispense the LSD batch solution into the vials and to weigh the amount dispensed. The system functionality has been evaluated and the performance validated by comparing the results from a series of samples dispensed and weighed by the automated system with the results by manual substitution weighing. After applying the proper correction factors to the data from the automated system balance no significant difference was observed between the two. However, an additional component of uncertainty of 3×10^-4 is introduced in the uncertainty budget for the certified weights provided by the automatic system. (authors)

  2. RNL automated ultrasonic inspection of the PISC II PWR inlet nozzle (Plate 3)

    International Nuclear Information System (INIS)

    Rogerson, A.; Poulter, L.N.J.; Clough, P.; Cooper, A.G.

    1987-01-01

    In June 1984, Risley Nuclear Laboratories (RNL) performed an automated ultrasonic inspection of the Pressurized Water Reactor (PWR) inlet nozzle (plate 3) from the international Programme of Inspection of Steel Components (PISC II) round-robin inspection programme. High-sensitivity pulse-echo detection and predominantly time-of-flight diffraction sizing techniques were employed from the clad inner surface of the nozzle using digital data collection, analysis, and display facilities developed at RNL. RNL detected 30 out of 31 intended weld flaws, achieved one hundred per cent correct acceptance of all acceptable flaws and had a correct rejection frequency on all rejectable flaws of 0.86. The results confirm that well-conceived automated inspection procedures, similar to those used by RNL in this nozzle inspection, could form the basis of a PSI/ISI procedure for reactor pressure vessel nozzle regions. Analysis of the RNL results with regard to the influence of flaw characteristics on inspection performance lends strong support to the general conclusions drawn by the PISC Data Analysis Group. In particular, the most difficult flaws to accurately size were circular smooth and rough flaws. Examination of the RNL results on individual flaws reveals valuable information on the strengths and weaknesses of the adopted procedures and points towards procedural changes that would improve inspection performance. This report describes the procedures adopted by RNL, in the inspection, and reviews the results in the light of definitive flaw information. (author)

  3. Classification of Automated Search Traffic

    Science.gov (United States)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third-party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: either an interpretation of the physical model of human interactions, or behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
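
    The two feature classes described (physical-interaction features and behavioral-pattern features) feed standard classifiers. The sketch below trains a binary bot/human classifier on invented per-session features; the feature distributions and the gradient-boosting model are stand-ins, not the paper's.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)
        # Per-session features: [queries per minute, click rate, query entropy].
        human = np.column_stack([rng.gamma(2, 1, 500),    # low query rate
                                 rng.beta(5, 2, 500),     # humans click often
                                 rng.normal(3.0, 0.5, 500)])
        bot = np.column_stack([rng.gamma(20, 1, 500),     # very high query rate
                               rng.beta(1, 8, 500),       # bots rarely click
                               rng.normal(1.0, 0.5, 500)])
        X = np.vstack([human, bot])
        y = np.array([0] * 500 + [1] * 500)               # 1 = automated traffic

        clf = GradientBoostingClassifier().fit(X, y)
        print(clf.predict([[25.0, 0.02, 0.8]]))           # likely an automated session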

  4. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  5. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner.

  6. Top-quark physics as a prime application of automated higher-order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, Christian

    2017-07-15

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte-Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading-order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the standard model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider makes it an excellent object of study. Yet its lifetime is very short, and its decay products form high-multiplicity final states in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e+e- → μ+ ν_μ e- ν̄_e b b̄ and e+e- → μ+ ν_μ e- ν̄_e b b̄ H. These calculations have now been performed for the first time. Moreover, we study the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum for the process

  7. Top-quark physics as a prime application of automated higher-order corrections

    International Nuclear Information System (INIS)

    Weiss, Christian

    2017-07-01

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte-Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading-order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the standard model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider makes it an excellent object of study. Yet its lifetime is very short, and its decay products form high-multiplicity final states in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e+e- → μ+ ν_μ e- ν̄_e b b̄ and e+e- → μ+ ν_μ e- ν̄_e b b̄ H. These calculations have now been performed for the first time. Moreover, we study the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum for the process e+e- → b W+ b̄ W-. All results are obtained with

  8. Selecting automation for the clinical chemistry laboratory.

    Science.gov (United States)

    Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr

    2007-07-01

    Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. To outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.

  9. Improving medical stores management through automation and effective communication.

    Science.gov (United States)

    Kumar, Ashok; Cariappa, M P; Marwaha, Vishal; Sharma, Mukti; Arora, Manu

    2016-01-01

    Medical stores management in hospitals is a tedious and time-consuming chore, with limited resources tasked for the purpose and poor penetration of information technology. The process of automation is slow-paced due to various inherent factors and is being challenged by increasing inventory loads and escalating budgets for the procurement of drugs. We carried out an in-depth case study at the medical stores of a tertiary care health care facility. An iterative six-step Quality Improvement (QI) process was implemented based on the Plan-Do-Study-Act (PDSA) cycle. The QI process was modified as per requirement to fit the medical stores management model. The results were evaluated after six months. After the implementation of the QI process, 55 drugs of the medical store inventory which had expired since 2009 onwards were replaced with fresh stock by the suppliers as a result of effective communication through upgraded database management. Various pending audit objections were dropped due to the streamlined documentation and processes. Inventory management improved drastically due to automation, with disposal orders being initiated four months prior to the expiry of drugs and correct demands being generated two months prior to the depletion of stocks. The monthly expense summary of drugs was now being done within ten days of the closing month. Improving communication systems within the hospital, with vendor database management and reaching out to clinicians, is important. Automation of inventory management needs to be simple and user-friendly, utilizing existing hardware. Physical stores monitoring is indispensable, especially due to the scattered nature of stores. Staff training and standardized documentation protocols are the other keystones for optimal medical store management.

  10. Automated profiling of individual cell-cell interactions from high-throughput time-lapse imaging microscopy in nanowell grids (TIMING).

    Science.gov (United States)

    Merouane, Amine; Rey-Villamizar, Nicolas; Lu, Yanbin; Liadi, Ivan; Romain, Gabrielle; Lu, Jennifer; Singh, Harjeet; Cooper, Laurence J N; Varadarajan, Navin; Roysam, Badrinath

    2015-10-01

    There is a need for effective automated methods for profiling dynamic cell-cell interactions with single-cell resolution from high-throughput time-lapse imaging data, especially the interactions between immune effector cells and tumor cells in adoptive immunotherapy. Fluorescently labeled human T cells, natural killer cells (NK), and various target cells (NALM6, K562, EL4) were co-incubated on polydimethylsiloxane arrays of sub-nanoliter wells (nanowells), and imaged using multi-channel time-lapse microscopy. The proposed cell segmentation and tracking algorithms account for cell variability and exploit the nanowell confinement property to increase the yield of correctly analyzed nanowells from 45% (existing algorithms) to 98% for wells containing one effector and a single target, enabling automated quantification of cell locations, morphologies, movements, interactions, and deaths without the need for manual proofreading. Automated analysis of recordings from 12 different experiments demonstrated automated nanowell delineation accuracy >99%, automated cell segmentation accuracy >95%, and automated cell tracking accuracy of 90%, with default parameters, despite variations in illumination, staining, imaging noise, cell morphology, and cell clustering. An example analysis revealed that NK cells efficiently discriminate between live and dead targets by altering the duration of conjugation. The data also demonstrated that cytotoxic cells display higher motility than non-killers, both before and during contact. Contact: broysam@central.uh.edu or nvaradar@central.uh.edu. Supplementary data are available at Bioinformatics online.

  11. Advances in the simulation and automated measurement of well-sorted granular material: 2. Direct measures of particle properties

    Science.gov (United States)

    Buscombe, D.; Rubin, D. M.

    2012-06-01

    In this, the second of a pair of papers on the structure of well-sorted natural granular material (sediment), new methods are described for automated measurements, from images of sediment, of: (1) particle-size standard deviation (arithmetic sorting), with and without apparent void fraction; and (2) mean particle size in material with void fraction. A variety of simulations of granular material are used for testing purposes, in addition to images of natural sediment. Simulations are also used to establish that the effects on automated particle sizing of grains visible through the interstices of the grains at the very surface of a granular material continue to a depth of approximately 4 grain diameters, and that this is independent of mean particle size. The ensemble root-mean-squared error between observed and estimated arithmetic sorting coefficients for 262 images of natural silts, sands and gravels (drawn from 8 populations) is 31%, which reduces to 27% if adjusted for bias (a slope correction between observed and estimated values). These methods allow non-intrusive and fully automated measurements of surfaces of unconsolidated granular material. With no tunable parameters or empirically derived coefficients, they should be broadly universal in appropriate applications; however, empirical corrections may need to be applied for the most accurate results. Finally, analytical formulas are derived for the one-step pore-particle transition probability matrix, estimated from the image's autocorrelogram, from which the void fraction of a section of granular material can be estimated directly. This model gives excellent predictions of bulk void fraction, yet imperfect predictions of pore-particle transitions.
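
    The paper derives the transition matrix analytically from the autocorrelogram; the following is only a simplified illustration of the underlying idea, estimating the one-step pore-particle transition probabilities by counting horizontal neighbour transitions in a synthetic binarized section and recovering the void fraction as the stationary pore probability of the resulting two-state chain:

```python
# Illustrative sketch (not the paper's analytical formulas): estimate the
# one-step pore/particle transition matrix from neighbour transitions in a
# binarized section, then derive void fraction from the stationary state.
import numpy as np

def transition_matrix(binary_img: np.ndarray) -> np.ndarray:
    """binary_img: 2-D array, 0 = pore, 1 = particle."""
    a, b = binary_img[:, :-1].ravel(), binary_img[:, 1:].ravel()
    counts = np.zeros((2, 2))
    for i in (0, 1):
        for j in (0, 1):
            counts[i, j] = np.sum((a == i) & (b == j))
    return counts / counts.sum(axis=1, keepdims=True)  # row-stochastic

def void_fraction(P: np.ndarray) -> float:
    """Stationary pore probability of the 2-state chain:
    pi_0 = P[1,0] / (P[0,1] + P[1,0])."""
    return P[1, 0] / (P[0, 1] + P[1, 0])

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) > 0.4).astype(int)  # synthetic binarized section
P = transition_matrix(img)
print(P, void_fraction(P))
```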

  12. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicles Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  13. Work Planning Automation at Mechanical Subdivision

    OpenAIRE

    Dzindzelėta, Vytautas

    2005-01-01

    Work planning automation: installation possibilities and future outlook at a mechanical subdivision. The aims are to study how work planning changed before and after the automation process and to analyse the automation process methodology.

  14. Isochronous wireless network for real-time communication in industrial automation

    CERN Document Server

    Trsek, Henning

    2016-01-01

    This dissertation proposes and investigates an isochronous wireless network for industrial control applications with guaranteed latencies and jitter. Based on a requirements analysis of real industrial applications and the characterisation of the wireless channel, the solution approach is developed. It consists of a TDMA-based medium access control, a dynamic resource allocation and the provision of a global time base for the wired and the wireless network. Due to the global time base, the solution approach allows a seamless and synchronous integration into existing wired Real-time Ethernet systems.

  15. Automated Controlled-Potential Coulometer for the IAEA

    International Nuclear Information System (INIS)

    Cordaro, J.V.; Holland, M.K.; Fields, T.

    1998-01-01

    An automated controlled-potential coulometer has been developed at the Savannah River Site (SRS) for the determination of plutonium, for use at the International Atomic Energy Agency's (IAEA) Safeguards Analytical Laboratory in Siebersdorf, Austria. The system is functionally the same as earlier systems built for use at the Savannah River Site's Analytical Laboratory. All electronic circuits and printed circuit boards have been upgraded with state-of-the-art components. A higher-amperage potentiostat with improved control stability has been developed. The system achieves electronic calibration accuracy and linearity of better than 0.01 percent, and a precision and accuracy of better than 0.1 percent have been demonstrated. This coulometer features electrical calibration of the integration system, electrolysis-current background corrections, and control-potential adjustment capabilities. These capabilities allow application of the system to plutonium measurements without chemical standards, achieving traceability to the international measurement system through electrical standards and Faraday's constant. The chemist is provided with the capability to perform measurements without depending upon chemical standards, which is a significant advantage for applications such as the characterization of primary and secondary standards. Additional benefits include reduced operating costs to procure, prepare and measure calibration standards, and a corresponding decrease in radioactive waste generation. The design and documentation of the automated instrument are provided herein. Each individual module's operation, wiring, layout, and alignment are described. Interconnection of the modules and system calibration are discussed. A complete set of prints and a list of associated parts are included.

  16. Physiological Self-Regulation and Adaptive Automation

    Science.gov (United States)

    Prinzell, Lawrence J.; Pope, Alan T.; Freeman, Frederick G.

    2007-01-01

    Adaptive automation has been proposed as a solution to current problems of human-automation interaction. Past research has shown the potential of this advanced form of automation to enhance pilot engagement and lower cognitive workload. However, concerns have been voiced regarding issues, such as automation surprises, associated with the use of adaptive automation. This study examined the use of psychophysiological self-regulation training with adaptive automation that may help pilots deal with these problems through the enhancement of cognitive resource management skills. Eighteen participants were assigned to 3 groups (self-regulation training, false feedback, and control) and performed resource management, monitoring, and tracking tasks from the Multiple Attribute Task Battery. The tracking task was cycled between 3 levels of task difficulty (automatic, adaptive aiding, manual) on the basis of the electroencephalogram-derived engagement index. The other two tasks remained in automatic mode, which included a single automation failure. Those participants who had received self-regulation training performed significantly better and reported lower National Aeronautics and Space Administration Task Load Index scores than participants in the false feedback and control groups. The theoretical and practical implications of these results for adaptive automation are discussed.

  17. Semi-automated volumetric analysis of lymph node metastases in patients with malignant melanoma stage III/IV-A feasibility study

    International Nuclear Information System (INIS)

    Fabel, M.; Tengg-Kobligk, H. von; Giesel, F.L.; Delorme, S.; Kauczor, H.-U.; Bornemann, L.; Dicken, V.; Kopp-Schneider, A.; Moser, C.

    2008-01-01

    Therapy monitoring in oncological patient care requires accurate and reliable imaging and post-processing methods. RECIST criteria are the current standard, with inherent disadvantages. The aim of this study was to investigate the feasibility of semi-automated volumetric analysis of lymph node metastases in patients with malignant melanoma compared with manual volumetric analysis and RECIST. Multislice CT was performed in 47 patients, covering the chest, abdomen and pelvis. In total, 227 suspicious, enlarged lymph nodes were evaluated retrospectively by two radiologists regarding diameters (RECIST), manually measured volume by placement of ROIs, and semi-automated volumetric analysis. Volume (ml), quality of segmentation (++/-) and time effort (s) were evaluated in the study. The semi-automated volumetric analysis software tool was rated acceptable to excellent in 81% of all cases (reader 1) and 79% (reader 2). Median time for the entire segmentation process and necessary corrections was shorter with the semi-automated software than with manual segmentation. Bland-Altman plots showed a significantly lower interobserver variability for semi-automated volumetric than for RECIST measurements. The study demonstrated the feasibility of volumetric analysis of lymph node metastases. The software allows a fast and robust segmentation in up to 80% of all cases. Ease of use and the time needed are acceptable for application in the clinical routine. Variability and interuser bias were reduced to about one third of the values found for RECIST measurements. (orig.)
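
    For illustration, a minimal Bland-Altman computation of the kind used in the comparison above (bias and 95% limits of agreement between two readers); the numbers are synthetic, not the study data:

```python
# Minimal Bland-Altman sketch: bias and 95% limits of agreement between
# two readers' volume measurements (synthetic numbers, not the study data).
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

reader1 = [2.1, 3.4, 1.8, 5.0, 2.7]   # volumes in ml (made up)
reader2 = [2.0, 3.6, 1.7, 5.3, 2.5]
print(bland_altman(reader1, reader2))
```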

  18. Buying Program of the Standard Automated Materiel Management System. Automated Small Purchase System: Defense Supply Center Philadelphia

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard Automated Materiel Management System Automated Small Purchase System is a fully automated micro-purchases system used by the General and Industrial Directorate at the Defense Supply Center Philadelphia...

  19. Comparison of Size Modulation Standard Automated Perimetry and Conventional Standard Automated Perimetry with a 10-2 Test Program in Glaucoma Patients.

    Science.gov (United States)

    Hirasawa, Kazunori; Takahashi, Natsumi; Satou, Tsukasa; Kasahara, Masayuki; Matsumura, Kazuhiro; Shoji, Nobuyuki

    2017-08-01

    This prospective observational study compared the performance of size modulation standard automated perimetry with the Octopus 600 10-2 test program, with stimulus size modulation during testing based on stimulus intensity, with that of conventional standard automated perimetry with the Humphrey 10-2 test program in glaucoma patients. Eighty-seven eyes of 87 glaucoma patients underwent size modulation standard automated perimetry with the Dynamic strategy and conventional standard automated perimetry using the SITA standard strategy. The main outcome measures were global indices, point-wise threshold, visual defect size and depth, reliability indices, and test duration; these were compared between size modulation standard automated perimetry and conventional standard automated perimetry. Global indices and point-wise threshold values between size modulation standard automated perimetry and conventional standard automated perimetry were moderately to strongly correlated. The visual-field defect was deeper with size modulation standard automated perimetry than with conventional standard automated perimetry, but the visual-field defect size was smaller with size modulation standard automated perimetry than with conventional standard automated perimetry. The reliability indices, particularly the false-negative response, of size modulation standard automated perimetry were worse than those of conventional standard automated perimetry, and the test duration differed between size modulation standard automated perimetry and conventional standard automated perimetry (p = 0.02). Global indices and the point-wise threshold values of the two testing modalities correlated well. However, a large stimulus presented at an area with decreased sensitivity with size modulation standard automated perimetry could underestimate the actual threshold in the 10-2 test protocol, as compared with conventional standard automated perimetry.

  20. Investigation and survey of occasions when humans saved and improved a situation where the automation was insufficient or failed

    International Nuclear Information System (INIS)

    Lackman, Tomas

    2011-01-01

    operators, case studies of nuclear-specific cases of automation are suggested. Since accidents have their origins in latent as well as in direct causes, another suggestion for future studies is to look at the tasks that operators carry out to correct automatic functions in their everyday work. Such operator interventions at an early stage may also be of great importance for the defense in depth if studied more closely.

  1. Automated Stellar Classification for Large Surveys with EKF and RBF Neural Networks

    Institute of Scientific and Technical Information of China (English)

    Ling Bai; Ping Guo; Zhan-Yi Hu

    2005-01-01

    An automated classification technique for large stellar surveys is proposed. It uses the extended Kalman filter as a feature selector and pre-classifier of the data, and radial basis function (RBF) neural networks for the classification. Experiments with real data have shown that the correct classification rate can reach as high as 93%, which is quite satisfactory. When different system models are selected for the extended Kalman filter, the classification results are relatively stable. It is shown that for this particular case the result using the extended Kalman filter is better than that using principal component analysis.
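
    The RBF-network stage can be sketched as follows (the EKF-based feature selection and pre-classification are omitted, and the data set is synthetic); centres are taken from k-means and the readout is a ridge classifier, one common way to train an RBF network:

```python
# Sketch of an RBF-network classifier: Gaussian features around k-means
# centres, linear readout via ridge classification (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def rbf_features(X, centres, width):
    # Squared distances to every centre -> Gaussian activations.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(Xtr)
# A simple width heuristic: median distance of points to their centre.
width = np.median(np.linalg.norm(Xtr - km.cluster_centers_[km.labels_], axis=1))
clf = RidgeClassifier().fit(rbf_features(Xtr, km.cluster_centers_, width), ytr)
pred = clf.predict(rbf_features(Xte, km.cluster_centers_, width))
print("accuracy:", accuracy_score(yte, pred))
```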

  2. Laboratory automation and LIMS in forensics

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Anders Johannes; Morling, Niels

    2013-01-01

    Implementation of laboratory automation and LIMS in a forensic laboratory enables the laboratory to standardize sample processing. Automated liquid handlers can increase throughput and eliminate manual repetitive pipetting operations, known to result in occupational injuries to the technical staff. Furthermore, implementation of automated liquid handlers reduces the risk of sample misplacement. A LIMS can efficiently control the sample flow through the laboratory and manage the results of the conducted tests for each sample. Integration of automated liquid handlers with a LIMS provides the laboratory with the tools required for setting up automated production lines of complex laboratory processes and monitoring the whole process and the results. Combined, this enables processing of a large number of samples. Selection of the best automated solution for an individual laboratory should be based on user...

  3. Automation and robotics

    Science.gov (United States)

    Montemerlo, Melvin

    1988-01-01

    The Autonomous Systems program focuses on the automation of control systems for the Space Station and mission operations. Telerobotics focuses on automation for in-space servicing, assembly, and repair. Autonomous Systems and Telerobotics each have a planned sequence of integrated demonstrations showing the evolutionary advance of the state of the art. Progress is briefly described for each area of concern.

  4. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1986-01-01

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content: approximately 8 x 10^9 bits per 14 x 17 inch film, the equivalent of 2200 computer floppy discs. Parts handling systems and robotics, already applied in manufacturing and in some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to fully automate the exposure step. Finally, computer-aided interpretation appears on the horizon. A unit which laser-scans a 14 x 17 inch film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer-aided interpretation). The system, called FDRS (Film Digital Radiography System), is moving toward 50 micron (approximately 16 lines/mm) resolution. This is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system. (author)

  5. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  6. Automated PCB Inspection System

    Directory of Open Access Journals (Sweden)

    Syed Usama BUKHARI

    2017-05-01

    Development of an automated PCB inspection system as per the needs of industry is a challenging task. In this paper a case study is presented to exhibit a proposed system for the migration from a manual PCB inspection system to an automated PCB inspection system, with minimal intervention in the existing production flow, for a leading automotive manufacturing company. A detailed design of the system, based on computer vision, followed by testing and analysis, is proposed in order to aid the manufacturer in the process of automation.

  7. Automation of Tabular Application Formation

    Directory of Open Access Journals (Sweden)

    S. V. Zykin

    2013-01-01

    The paper considers automation problems in the formation of the interface between a table and a relational database. The task description is formalized, and existing approaches to the formation of data representations are reviewed using widespread CASE tools as an example. An intermediate data representation, the "join table", is defined; it is used to ensure the correctness of the formation of data representations and is also necessary for direct and inverse data transformations. On the basis of the lossless-join property and realized dependencies, the concept and a way of forming the context of the application and its restrictions are introduced. This material is then used to construct an inverse data transformation from the tabular representation into a relational one. On the basis of the properties of relationships on a database scheme, a partial order on the relations is established, and the restriction to acyclic database schemes is introduced. The results are then used in the analysis of the principles of forming the inverse data transformation, and the basic details of such a transformation algorithm are considered.

  8. $ANBA; a rapid, combined data acquisition and correction program for the SEMQ electron microprobe

    Science.gov (United States)

    McGee, James J.

    1983-01-01

    $ANBA is a program developed for rapid data acquisition and correction on an automated SEMQ electron microprobe. The program provides increased analytical speed and reduced disk read/write operations compared with the manufacturer's software, resulting in a doubling of analytical throughput. In addition, the program provides enhanced analytical features such as averaging, rapid and compact data storage, and on-line plotting. The program is described with design philosophy, flow charts, variable names, a complete program listing, and system requirements. A complete operating example and notes to assist in running the program are included.

  9. An automated high throughput screening-compatible assay to identify regulators of stem cell neural differentiation.

    Science.gov (United States)

    Casalino, Laura; Magnani, Dario; De Falco, Sandro; Filosa, Stefania; Minchiotti, Gabriella; Patriarca, Eduardo J; De Cesare, Dario

    2012-03-01

    The use of Embryonic Stem Cells (ESCs) holds considerable promise both for drug discovery programs and for the treatment of degenerative disorders in regenerative medicine approaches. Nevertheless, the successful use of ESCs is still limited by the lack of efficient control of ESC self-renewal and differentiation capabilities. In this context, the possibility to modulate ESC biological properties and to obtain homogeneous populations of correctly specified cells will help in developing physiologically relevant screens designed for the identification of stem cell modulators. Here, we developed a high-throughput-screening-compatible ESC neural differentiation assay by exploiting the Cell(maker) robotic platform and demonstrated that neural progenies can be generated from ESCs in complete automation, with high standards of accuracy and reliability. Moreover, we performed a pilot screening providing proof of concept that this assay allows the identification of regulators of ESC neural differentiation in full automation.

  10. Automated lung nodule classification following automated nodule detection on CT: A serial approach

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Altman, Michael B.; Wilkie, Joel; Sone, Shusuke; Li, Feng; Doi, Kunio; Roy, Arunabha S.

    2003-01-01

    We have evaluated the performance of an automated classifier applied to the task of differentiating malignant and benign lung nodules in low-dose helical computed tomography (CT) scans acquired as part of a lung cancer screening program. The nodules classified in this manner were initially identified by our automated lung nodule detection method, so that the output of automated lung nodule detection was used as input to automated lung nodule classification. This study begins to narrow the distinction between the 'detection task' and the 'classification task'. Automated lung nodule detection is based on two- and three-dimensional analyses of the CT image data. Gray-level-thresholding techniques are used to identify initial lung nodule candidates, for which morphological and gray-level features are computed. A rule-based approach is applied to reduce the number of nodule candidates that correspond to non-nodules, and the features of remaining candidates are merged through linear discriminant analysis to obtain final detection results. Automated lung nodule classification merges the features of the lung nodule candidates identified by the detection algorithm that correspond to actual nodules through another linear discriminant classifier to distinguish between malignant and benign nodules. The automated classification method was applied to the computerized detection results obtained from a database of 393 low-dose thoracic CT scans containing 470 confirmed lung nodules (69 malignant and 401 benign nodules). Receiver operating characteristic (ROC) analysis was used to evaluate the ability of the classifier to differentiate between nodule candidates that correspond to malignant nodules and nodule candidates that correspond to benign lesions. The area under the ROC curve for this classification task attained a value of 0.79 during a leave-one-out evaluation
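
    As an illustration of the final classification step described above (feature merging via linear discriminant analysis, evaluated by leave-one-out ROC analysis), a sketch with synthetic stand-in features, not the paper's morphological and gray-level features:

```python
# Illustrative sketch: merge candidate features with LDA and evaluate with
# leave-one-out ROC analysis (features here are synthetic stand-ins).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

# ~15% positives, loosely mirroring the 69/470 malignant fraction above.
X, y = make_classification(n_samples=470, n_features=8, weights=[0.85],
                           random_state=0)
scores = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
    scores[test] = lda.decision_function(X[test])
print("ROC AUC:", roc_auc_score(y, scores))
```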

  11. Automated ISS Flight Utilities

    Science.gov (United States)

    Offermann, Jan Tuzlic

    2016-01-01

    EVADES output. As mentioned above, GEnEVADOSE makes extensive use of ROOT version 6, the data analysis framework developed at the European Organization for Nuclear Research (CERN), and the code is written to the C++11 standard (as are the other projects). My second project is the Automated Mission Reference Exposure Utility (AMREU). Unlike GEnEVADOSE, AMREU is a combination of three frameworks written in both Python and C++, also making use of ROOT (and PyROOT). Run as a combination of daily and weekly cron jobs, these macros query the SRAG database system to determine the active ISS missions, and query minute-by-minute radiation dose information from ISS-TEPC (Tissue Equivalent Proportional Counter), one of the radiation detectors onboard the ISS. Using this information, AMREU creates a corrected data set of daily radiation doses, addressing situations where TEPC may be offline or locked up: doses for days with less than 95% live time (the total amount of time the instrument acquires data) are corrected by averaging the past 7 days. As not all errors may be automatically detectable, AMREU also allows for manual corrections, checking an updated plaintext file each time it runs. With the corrected data, AMREU generates cumulative dose plots for each mission, and uses a Python script to generate a flight note file (.docx format) containing these plots, as well as information sections to be filled in and modified by the space weather environment officers with information specific to the week. AMREU is set up to run without requiring any user input, and it automatically archives old flight notes and information files for missions that are no longer active. My other projects involve cleaning up a large data set from the Charged Particle Directional Spectrometer (CPDS), joining together many different data sets in order to clean up information in SRAG SQL databases, and developing other automated utilities for displaying information on active solar regions, that may be used by the
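
    The live-time correction rule described above can be sketched as follows; the column names and the exact averaging window are assumptions for illustration, not AMREU's actual implementation:

```python
# Sketch of the live-time correction: days with < 95% live time get the
# mean of the previous (up to 7) corrected days. Column names hypothetical.
import pandas as pd

def correct_daily_doses(df: pd.DataFrame) -> pd.DataFrame:
    """df: indexed by date, columns 'dose' (mGy/day) and 'live_time' (0..1)."""
    df = df.sort_index().copy()
    corrected = df["dose"].astype(float).copy()
    for i in range(len(df)):
        if df["live_time"].iloc[i] < 0.95:
            window = corrected.iloc[max(0, i - 7):i]
            if len(window) > 0:            # day 0 has no history to average
                corrected.iloc[i] = window.mean()
    df["dose_corrected"] = corrected
    return df

dates = pd.date_range("2016-01-01", periods=10, freq="D")
raw = pd.DataFrame(
    {"dose":      [0.30, 0.31, 0.07, 0.29, 0.30, 0.32, 0.28, 0.31, 0.15, 0.30],
     "live_time": [1.00, 1.00, 0.50, 1.00, 1.00, 1.00, 1.00, 1.00, 0.80, 1.00]},
    index=dates)
print(correct_daily_doses(raw)["dose_corrected"])
```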

  12. Automation of Electrical Cable Harnesses Testing

    Directory of Open Access Journals (Sweden)

    Zhuming Bi

    2017-12-01

    Traditional automated systems, such as industrial robots, are applied in well-structured environments, and many automated systems have limited adaptability to deal with complexity and uncertainty; therefore, the applications of industrial robots in small- and medium-sized enterprises (SMEs) are very limited, and the majority of manual operations in SMEs are too complicated for automation. Rapidly developing information technologies (IT) have brought new opportunities for the automation of manufacturing and assembly processes in ill-structured environments. Note that an automation solution should be designed to meet the given requirements of the specified application, and it differs from one application to another. In this paper, we look into the feasibility of automated testing for electrical cable harnesses, and our focus is on some generic strategies for improving the adaptability of automation solutions. In particular, the concept of modularization is adopted in developing hardware and software to maximize system adaptability in testing a wide scope of products. The proposed system has been implemented, and its performance has been evaluated by executing tests on actual products. The testing experiments have shown that the automated system greatly outperformed manual operations in terms of cost saving, productivity and reliability. Due to its potential for increasing system adaptability and reducing cost, the presented work has theoretical and practical significance for extension to other automation solutions in SMEs.

  13. Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.

    Science.gov (United States)

    Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P

    2016-04-01

    The aim was to reduce animal usage for discovery-stage PK studies in biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period, with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach produced data comparable to a previous study using liquid samples from single mice per time point, while reducing animal and compound requirements by 14-fold.

  14. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced MCRs, various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities.

  15. Automating spectral measurements

    Science.gov (United States)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.

  16. Ask the experts: automation: part I.

    Science.gov (United States)

    Allinson, John L; Blick, Kenneth E; Cohen, Lucinda; Higton, David; Li, Ming

    2013-08-01

    Bioanalysis invited a selection of leading researchers to express their views on automation in the bioanalytical laboratory. The topics discussed include the challenges that the modern bioanalyst faces when integrating automation into existing drug-development processes, the impact of automation and how they envision the modern bioanalytical laboratory changing in the near future. Their enlightening responses provide a valuable insight into the impact of automation and the future of the constantly evolving bioanalytical laboratory.

  17. Semi-automated preparation of the dopamine transporter ligand [18F]FECNT for human PET imaging studies

    International Nuclear Information System (INIS)

    Voll, Ronald J.; McConathy, Jonathan; Waldrep, Michael S.; Crowe, Ronald J.; Goodman, Mark M.

    2005-01-01

    The fluorine-18 labeled dopamine transporter (DAT) ligand 2β-carbomethoxy-3β-(4-chlorophenyl)-8-(2-fluoroethyl)nortropane (FECNT) has shown promising properties as an in vivo DAT imaging agent in human and monkey PET studies. A semi-automated synthesis has been developed to reliably produce [18F]FECNT in a 16% decay-corrected yield. This method utilizes a new [18F]fluoroalkylating agent and provides high-purity [18F]FECNT in a formulation suitable for human use.
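
    Decay-corrected yield simply removes the physical decay that occurs during the synthesis from the activity balance; a sketch, assuming the fluorine-18 half-life of 109.77 min (the activities below are made up for illustration):

```python
# Decay-corrected radiochemical yield for an F-18 synthesis (illustrative).
import math

F18_HALF_LIFE_MIN = 109.77  # fluorine-18 half-life in minutes

def decay_corrected_yield(start_activity, end_activity, synthesis_min):
    """Fraction of starting activity converted to product, corrected for
    the decay that occurs during the synthesis time."""
    decay_factor = math.exp(-math.log(2) * synthesis_min / F18_HALF_LIFE_MIN)
    return end_activity / (start_activity * decay_factor)

# e.g. 10 GBq of [18F]fluoride yielding 1.1 GBq of product after 60 min:
print(f"{decay_corrected_yield(10.0, 1.1, 60):.1%}")   # ~16%
```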

  18. An Automation Survival Guide for Media Centers.

    Science.gov (United States)

    Whaley, Roger E.

    1989-01-01

    Reviews factors that should affect the decision to automate a school media center and offers suggestions for the automation process. Topics discussed include getting the library collection ready for automation, deciding what automated functions are needed, evaluating software vendors, selecting software, and budgeting. (CLB)

  19. Automated radiosynthesis of no-carrier-added 4-[18F]fluoroiodobenzene: a versatile building block in 18F radiochemistry.

    Science.gov (United States)

    Way, Jenilee Dawn; Wuest, Frank

    2014-02-01

    4-[18F]Fluoroiodobenzene ([18F]FIB) is a versatile building block in 18F radiochemistry used in various transition metal-mediated C-C and C-N cross-coupling reactions and [18F]fluoroarylation reactions. Various synthesis routes have been described for the preparation of [18F]FIB. However, to date, no automated synthesis of [18F]FIB has been reported that allows access to larger amounts of [18F]FIB in high radiochemical and chemical purity. Herein, we describe an automated synthesis of no-carrier-added [18F]FIB on a GE TRACERlab™ FX automated synthesis unit starting from commercially available (4-iodophenyl)diphenylsulfonium triflate as the labelling precursor. [18F]FIB was prepared in high radiochemical yields of 89 ± 10% (decay-corrected, n = 7) within 60 min, including HPLC purification. The radiochemical purity exceeded 95%, and the specific activity was greater than 40 GBq/μmol. Typically, 6.4 GBq of [18F]FIB could be obtained starting from 10.4 GBq of [18F]fluoride.

  20. Demands on digital automation; Anforderungen an die Digitale Automation

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.

    1995-12-31

    In chapter 12 of the anthology on building control, the demands on digital automation are presented. The following aspects are discussed: the variety of company philosophies, the demands of customers/investors, the demands of building/room use, the operators, and the point of view of the manufacturers of technical plants. (BWI)

  1. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  2. Augmented Automated Material Accounting Statistics System (AMASS)

    International Nuclear Information System (INIS)

    Lumb, R.F.; Messinger, M.; Tingey, F.H.

    1983-01-01

    This paper describes an extension of the AMASS methodology previously presented at the 1981 INMM annual meeting. The main thrust of the current effort is to develop procedures and a computer program for estimating the variance of an inventory difference when many sources of variability, other than measurement error, are admitted in the model. Procedures are also included for the estimation of the variances associated with measurement error estimates and their effect on the estimated limit of error of the inventory difference (LEID). The algorithm for the LEID measurement component uncertainty involves the propagated component measurement variance estimates as well as their associated degrees of freedom. The methodology and supporting computer software are referred to as the augmented Automated Material Accounting Statistics System (AMASS). Specifically, AMASS accommodates five source effects: (1) measurement errors; (2) known but unmeasured effects; (3) measurement adjustment effects; (4) unmeasured process hold-up effects; and (5) residual process variation. A major result of this effort is a procedure for determining the effect of bias correction on LEID, properly taking into account all the covariances that exist. This paper briefly describes the basic models that are assumed; some of the estimation procedures consistent with the model; and data requirements, emphasizing availability and other practical considerations. It discusses implications for bias corrections and concludes by briefly describing the supporting computer program.
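
    For orientation, a minimal sketch of how measurement variances propagate into a limit of error for an inventory difference; the independence assumption and the two-sigma factor are simplifications for illustration (AMASS itself also accounts for covariances and the other source effects listed above):

```python
# Minimal sketch: variance of ID = BI + additions - removals - EI under
# an independence assumption, where the component variances simply add.
import math

def inventory_difference(bi, additions, removals, ei):
    return bi + sum(additions) - sum(removals) - ei

def leid(var_bi, var_additions, var_removals, var_ei, k=2.0):
    """k-sigma limit of error of the inventory difference (independent terms)."""
    total_var = var_bi + sum(var_additions) + sum(var_removals) + var_ei
    return k * math.sqrt(total_var)

print(inventory_difference(100.0, [20.0], [15.0], 104.0))  # ID = 1.0
print(leid(0.25, [0.04], [0.04], 0.25))                    # LEID ~ 1.52
```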

  3. Problems of collaborative work of the automated process control system (APCS) and the its information security and solutions.

    Science.gov (United States)

    Arakelyan, E. K.; Andryushin, A. V.; Mezin, S. V.; Kosoy, A. A.; Kalinina, Ya V.; Khokhlov, I. S.

    2017-11-01

    A principle for the interaction of the technological protection systems of the automated process control system (APCS) with the information security system in the case of incorrect execution of a technological protection algorithm is proposed: checking the correctness of the operation of the technological protection in each specific situation by using the functional relationship between the monitored parameters. A methodology for assessing the economic feasibility of developing and implementing an information security system is also presented.

  4. Automation's influence on nuclear power plants: a look at three accidents and how automation played a role.

    Science.gov (United States)

    Schmitt, Kara

    2012-01-01

    Nuclear power is one of the ways that we can design an efficient, sustainable future. Automation is the primary system used to assist operators in the task of monitoring and controlling nuclear power plants (NPPs). Automation performs tasks such as assessing the status of the plant's operations as well as making real-time, life-critical, situation-specific decisions. While the advantages and disadvantages of automation are well studied in a variety of domains, accidents remind us that there is still vulnerability to unknown variables. This paper looks at the effects of automation in three NPP accidents and incidents and considers why automation failed to prevent these accidents from occurring. It also reviews the accidents at the Three Mile Island, Chernobyl, and Fukushima Daiichi NPPs in order to determine where better use of automation could have resulted in a more desirable outcome.

  5. Automated Formal Verification for PLC Control Systems

    CERN Multimedia

    Fernández Adiego, Borja

    2014-01-01

    Programmable Logic Controllers (PLCs) are widely used devices in industrial control systems. Ensuring that the PLC software is compliant with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software. However, these techniques are still not widely applied in industry due to the complexity of building formal models, which represent the system, and of formalizing requirement specifications. We propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g. CTL, LTL) on PLC programs. This methodology is based on an Intermediate Model (IM), meant to transform PLC programs written in any of the languages described in the IEC 61131-3 standard (ST, IL, etc.) into the different modeling languages of verification tools. This approach has been applied to CERN PLC programs, validating the methodology.
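
    As a toy illustration of the general idea (not the paper's intermediate-model toolchain), a safety property such as "the valve is never open while the alarm is latched" can be checked on a miniature boolean PLC scan cycle by exhaustively exploring its reachable states:

```python
# Toy model check: treat one PLC scan cycle as a transition function and
# verify a safety invariant over all reachable states (boolean state space).
from itertools import product

def scan(state, inputs):
    """One PLC cycle: latch an alarm, open the valve only if no alarm."""
    alarm = state["alarm"] or inputs["overpressure"]
    valve_open = inputs["request_open"] and not alarm
    return {"alarm": alarm, "valve_open": valve_open}

def check_invariant():
    reachable, frontier = set(), {(False, False)}      # (alarm, valve_open)
    while frontier:
        reachable |= frontier
        nxt = set()
        for alarm, valve in frontier:
            for op, rq in product([False, True], repeat=2):
                s = scan({"alarm": alarm, "valve_open": valve},
                         {"overpressure": op, "request_open": rq})
                nxt.add((s["alarm"], s["valve_open"]))
        frontier = nxt - reachable
    # Safety property (an AG formula in CTL terms): never open-with-alarm.
    return all(not (alarm and valve) for alarm, valve in reachable)

print(check_invariant())  # True: the property holds for this toy program
```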

  6. Semantics and correctness proofs for programs with partial functions

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This paper presents a portion of the work on specification, design, and implementation of safety-critical systems such as reactor control systems. A natural approach to this problem, once all the requirements are captured, would be to state the requirements formally and then either to prove (preferably via automated tools) that the system conforms to spec (program verification), or to try to simultaneously generate the system and a mathematical proof that the requirements are being met (program derivation). An obstacle to this is frequent presence of partially defined operations within the software and its specifications. Indeed, the usual proofs via first order logic presuppose everywhere defined operations. Recognizing this problem, David Gries, in ''The Science of Programming,'' 1981, introduced the concept of partial functions into the mainstream of program correctness and gave hints how his treatment of partial functions could be formalized. Still, however, existing theorem provers and software verifiers have difficulties in checking software with partial functions, because of absence of uniform first order treatment of partial functions within classical 2-valued logic. Several rigorous mechanisms that took partiality into account were introduced [Wirsing 1990, Breu 1991, VDM 1986, 1990, etc.]. However, they either did not discuss correctness proofs or departed from first order logic. To fill this gap, the authors provide a semantics for software correctness proofs with partial functions within classical 2-valued 1st order logic. They formalize the Gries treatment of partial functions and also cover computations of functions whose argument lists may be only partially available. An example is nuclear reactor control relying on sensors which may fail to deliver sense data. This approach is sufficiently general to cover correctness proofs in various implementation languages

  7. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  8. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  9. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure.The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management

  10. Electrically evoked compound action potentials artefact rejection by independent component analysis: procedure automation.

    Science.gov (United States)

    Akhoun, Idrick; McKay, Colette; El-Deredy, Wael

    2015-01-15

    Independent component analysis (ICA) successfully separated electrically evoked compound action potentials (ECAPs) from the stimulation artefact and noise (ECAP-ICA; Akhoun et al., 2013). This paper shows how to automate the ECAP-ICA artefact cancellation process. Raw ECAPs without artefact rejection were consecutively recorded for each stimulation condition from at least 8 intra-cochlear electrodes. First, amplifier-saturated recordings were discarded, and the data from different stimulus conditions (different current levels) were concatenated temporally. The key aspect of the automation procedure was the sequential deductive source categorisation after ICA was applied with a restriction to 4 sources. The stereotypical aspect of the 4 sources enables their automatic classification, on theoretical and empirical grounds, as two artefact components, a noise component, and the sought ECAP. The automatic procedure was tested using 8 cochlear implant (CI) users and one to four stimulus electrodes. The artefact and noise sources were successively identified and discarded, leaving the ECAP as the remaining source. The automated ECAP-ICA procedure successfully extracted the correct ECAPs, compared with the standard clinical forward-masking paradigm, in 22 out of 26 cases. ECAP-ICA does not require extracting the ECAP from a combination of distinct buffers, as is the case with regular methods. It is an alternative that does not have the possible bias of traditional artefact rejections such as alternate-polarity or forward-masking paradigms. The ECAP-ICA procedure bears clinical relevance, for example as the artefact rejection sub-module of automated ECAP-threshold detection techniques, which are common features of CI clinical fitting software.
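
    The restricted-ICA decomposition step can be sketched with scikit-learn's FastICA on synthetic multi-electrode data (the published procedure then classifies the 4 recovered sources as artefact, noise, or ECAP from their stereotypical shapes):

```python
# Sketch of the restricted-ICA step: decompose concatenated multi-electrode
# recordings into 4 sources (synthetic data for illustration only).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_samples, n_channels = 2000, 8
t = np.linspace(0, 1, n_samples)
sources = np.c_[np.exp(-((t - 0.3) ** 2) / 1e-3),    # ECAP-like transient
                np.sign(np.sin(40 * np.pi * t)),     # stimulation artefact-like
                np.exp(-t / 0.05),                   # decaying artefact-like
                rng.normal(size=n_samples)]          # noise
mixing = rng.normal(size=(4, n_channels))
X = sources @ mixing                                 # observed electrode data

ica = FastICA(n_components=4, whiten="unit-variance", random_state=0)
S = ica.fit_transform(X)       # estimated sources, shape (n_samples, 4)
print(S.shape, ica.mixing_.shape)
```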

  11. Weighted Mean of Signal Intensity for Unbiased Fiber Tracking of Skeletal Muscles: Development of a New Method and Comparison With Other Correction Techniques.

    Science.gov (United States)

    Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Feiweier, Thorsten; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang

    2017-08-01

    The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted means of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (i.e., signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (i.e., in a human cadaver calf and a turkey leg, using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (i.e., iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). DTI and fiber tracking metrics, using different numbers of averages and artifact corrections, were compared for region-of-interest- and mask-based analyses. One-way repeated-measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests was used to evaluate differences among all tested conditions. Qualitative assessment (i.e., image quality) of native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. The artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5%, but artifacts were absent ex vivo. DTI metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) showed heterogeneous behavior, but within the range reported in the literature. Fiber track metrics (number, length, and volume) improved significantly in both calves and thighs after artifact
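
    A weighted mean of signal intensities can be sketched as below: repeated acquisitions of the same diffusion-weighted image are combined voxel-wise, with each repetition weighted by its own intensity, so that contraction-induced signal dropouts contribute little. This is only an illustration of the idea, not the authors' exact weighting:

```python
# WMSI-style combination of repeated acquisitions (illustrative weighting).
import numpy as np

def wmsi_combine(repeats: np.ndarray) -> np.ndarray:
    """repeats: (n_averages, ny, nx) magnitude images of one DW direction.
    Voxel-wise weighted mean with weights equal to the signal itself,
    i.e. sum(I^2) / sum(I), so low-intensity dropouts are down-weighted."""
    w = repeats.astype(float)                      # weights = intensities
    return (w * repeats).sum(axis=0) / np.clip(w.sum(axis=0), 1e-12, None)

rng = np.random.default_rng(2)
repeats = np.full((4, 32, 32), 100.0) + rng.normal(0, 2, (4, 32, 32))
repeats[1, 10:20, :] *= 0.2     # simulated contraction-induced signal loss
combined = wmsi_combine(repeats)
# WMSI stays near 95 at the dropout voxel; a plain average falls to ~80.
print(combined[15, 15], repeats[:, 15, 15].mean())
```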

  12. An algorithm developed in Matlab for the automatic selection of cut-off frequencies, in the correction of strong motion data

    Science.gov (United States)

    Sakkas, Georgios; Sakellariou, Nikolaos

    2018-05-01

    Strong-motion recordings are key to many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values and the corrected Fourier spectra are calculated, as well as the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, the possibility to also perform a signal-to-noise-ratio-based selection is included, as well as to perform causal or acausal filtering. The algorithm has been tested on six significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014), with analog and digital accelerograms.
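
    The cut-off selection idea translates directly into code; a sketch in Python rather than Matlab, placing the high-pass corner at the minimum of the smoothed Fourier amplitude inside a predefined search band before band-pass filtering (the band limits, filter order and smoothing window are assumptions, not the paper's values):

```python
# Automatic high-pass corner from the spectral minimum in a search band,
# followed by band-pass filtering (illustrative parameter choices).
import numpy as np
from scipy.signal import butter, filtfilt

def auto_cutoff_bandpass(acc, fs, search_band=(0.05, 1.0), high_cut=25.0):
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    amp = np.abs(np.fft.rfft(acc))
    amp = np.convolve(amp, np.ones(11) / 11, mode="same")  # light smoothing
    band = (freqs >= search_band[0]) & (freqs <= search_band[1])
    low_cut = freqs[band][np.argmin(amp[band])]            # spectral minimum
    b, a = butter(4, [low_cut, high_cut], btype="bandpass", fs=fs)
    return filtfilt(b, a, acc), low_cut

fs = 200.0
t = np.arange(0, 40, 1 / fs)
acc = np.sin(2 * np.pi * 3 * t) + 0.02 * t                 # signal + drift
filtered, fc = auto_cutoff_bandpass(acc, fs)
print(f"selected high-pass corner: {fc:.3f} Hz")
```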

  13. Future Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), which was held in Zhuhai, China, November 19-20, 2011. Topics covered include wireless communications, advances in wireless video, wireless sensor networking, security in wireless networks, network measurement and management, hybrid and discrete-event systems, internet analytics and automation, robotic systems and applications, reconfigurable automation systems, and machine vision in automation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find them stimulating.

  14. "First generation" automated DNA sequencing technology.

    Science.gov (United States)

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  15. Introduction matters: Manipulating trust in automation and reliance in automated driving.

    Science.gov (United States)

    Körber, Moritz; Baseler, Eva; Bengler, Klaus

    2018-01-01

    Trust in automation is a key determinant for the adoption of automated systems and their appropriate use. Therefore, it constitutes an essential research area for the introduction of automated vehicles to road traffic. In this study, we investigated the influence of trust promoting (Trust promoted group) and trust lowering (Trust lowered group) introductory information on reported trust, reliance behavior and take-over performance. Forty participants encountered three situations in a 17-min highway drive in a conditionally automated vehicle (SAE Level 3). Situation 1 and Situation 3 were non-critical situations where a take-over was optional. Situation 2 represented a critical situation where a take-over was necessary to avoid a collision. A non-driving-related task (NDRT) was presented between the situations to record the allocation of visual attention. Participants reporting a higher trust level spent less time looking at the road or instrument cluster and more time looking at the NDRT. The manipulation of introductory information resulted in medium differences in reported trust and influenced participants' reliance behavior. Participants of the Trust promoted group looked less at the road or instrument cluster and more at the NDRT. The odds of participants of the Trust promoted group to overrule the automated driving system in the non-critical situations were 3.65 times (Situation 1) to 5 times (Situation 3) higher. In Situation 2, the Trust promoted group's mean take-over time was extended by 1154 ms and the mean minimum time-to-collision was 933 ms shorter. Six participants from the Trust promoted group compared to no participant of the Trust lowered group collided with the obstacle. The results demonstrate that the individual trust level influences how much drivers monitor the environment while performing an NDRT. Introductory information influences this trust level, reliance on an automated driving system, and if a critical take-over situation can be

  16. Flexible Method for the Automated Offline-Detection of Artifacts in Multi-Channel Electroencephalogram Recordings

    DEFF Research Database (Denmark)

    Waser, Markus; Garn, Heinrich; Benke, Thomas

    2017-01-01

    . However, these preprocessing steps do not allow for complete artifact correction. We propose a method for the automated offline-detection of remaining artifacts after preprocessing in multi-channel EEG recordings. In contrast to existing methods it requires neither adaptive parameters varying between...... recordings nor a topography template. It is suited for short EEG segments and is flexible with regard to target applications. The algorithm was developed and tested on 60 clinical EEG samples of 20 seconds each that were recorded both in resting state and during cognitive activation to gain a realistic...

  17. Susy-QCD corrections to neutralino pair production in association with a jet

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, Gavin [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, Nicolas; Heinrich, Gudrun [Max-Planck-Institut fuer Physik, Muenchen (Germany)

    2012-12-15

    We present the NLO Susy-QCD corrections to the production of a pair of the lightest neutralinos plus one jet at the LHC, appearing as a monojet signature in combination with missing energy. We fully include all non-resonant diagrams, i.e. we do not assume that production and decay factorise. We derive a parameter point based on the p19MSSM which is compatible with current experimental bounds and show distributions based on missing transverse energy and jet observables. Our results are produced with the program GoSam for automated one-loop calculations in combination with MadDipole/MadGraph for the real radiation part.

  18. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  19. GUI test automation for Qt application

    OpenAIRE

    Wang, Lei

    2015-01-01

    GUI test automation is a popular and interesting subject in the testing industry. Many companies plan to start test automation projects in order to implement efficient, less expensive software testing. However, there are challenges for testing teams who lack experience performing GUI test automation. Many GUI test automation projects have ended in failure due to mistakes made during the early stages of the project. The major work of this thesis is to find a solution to the challenges of e...

  20. Automation synthesis modules review

    International Nuclear Information System (INIS)

    Boschi, S.; Lodi, F.; Malizia, C.; Cicoria, G.; Marengo, M.

    2013-01-01

    The introduction of 68Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived 68Ge/68Ga generator has been at the basis of the development of 68Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues and the careful radioprotection of operators have driven extensive automation of the production process. The development of automated systems for 68Ga radiochemistry, different engineering and software strategies, and post-processing of the eluate are discussed, along with the impact of regulations on automation. - Highlights: ► Generator availability and robust chemistry have driven the wide diffusion of 68Ga radiopharmaceuticals. ► Different technological approaches for 68Ga radiopharmaceuticals will be discussed. ► Generator eluate post-processing and the evolution to cassette-based systems were the major issues in automation. ► The impact of regulations on the technological development will also be considered

  1. 76 FR 69755 - National Customs Automation Program Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2011-11-09

    ... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation... announces U.S. Customs and Border Protection's (CBP's) plan to conduct a National Customs Automation Program... conveyance transporting the cargo to the United States. This data will fulfill merchandise entry requirements...

  2. I trust it, but I don't know why: effects of implicit attitudes toward automation on trust in an automated system.

    Science.gov (United States)

    Merritt, Stephanie M; Heimbaugh, Heather; LaChapell, Jennifer; Lee, Deborah

    2013-06-01

    This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.

  3. Automated EEG sleep staging in the term-age baby using a generative modelling approach

    Science.gov (United States)

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. Approach. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. Determining the timing and quality of these states is indicative of developmental delays in both preterm and term-born babies that may…
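
    To make the modeling choices above concrete, here is a minimal Python sketch of the per-recording ("personalized") feature scaling and the GMM-versus-HMM comparison scored with Cohen's kappa. The data shapes, the library choices (scikit-learn, hmmlearn), and the majority-vote mapping of unsupervised states to clinical labels are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.metrics import cohen_kappa_score
        from hmmlearn.hmm import GaussianHMM

        def personalized_scaling(recordings):
            """Standardize each recording's features by its own mean/std."""
            return [(X - X.mean(axis=0)) / X.std(axis=0) for X in recordings]

        def map_states(pred, true, n_states=4):
            """Map unsupervised state indices to the majority clinical label."""
            lut = np.array([np.bincount(true[pred == s], minlength=n_states).argmax()
                            if np.any(pred == s) else s for s in range(n_states)])
            return lut[pred]

        # Placeholder data: 16 recordings of 120 30-s epochs with 8 features
        rng = np.random.default_rng(0)
        recs = [rng.normal(size=(120, 8)) for _ in range(16)]
        labels = [rng.integers(0, 4, 120) for _ in range(16)]

        scaled = personalized_scaling(recs)
        X = np.vstack(scaled[:-1])
        lengths = [len(r) for r in scaled[:-1]]

        # The GMM ignores epoch order; the HMM also learns state transitions.
        gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(X)
        hmm = GaussianHMM(n_components=4, covariance_type="diag").fit(X, lengths)

        test, true = scaled[-1], labels[-1]
        for name, model in [("GMM", gmm), ("HMM", hmm)]:
            pred = map_states(model.predict(test), true)
            print(name, cohen_kappa_score(true, pred))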

  4. Robotics/Automated Systems Technicians.

    Science.gov (United States)

    Doty, Charles R.

    Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…

  5. Evaluation of an Automated Keywording System.

    Science.gov (United States)

    Malone, Linda C.; And Others

    1990-01-01

    Discussion of automated indexing techniques focuses on ways to statistically document improvements in the development of an automated keywording system over time. The system developed by the Joint Chiefs of Staff to automate the storage, categorization, and retrieval of information from military exercises is explained, and performance measures are…

  6. Development and Evaluation of A Novel and Cost-Effective Approach for Low-Cost NO₂ Sensor Drift Correction.

    Science.gov (United States)

    Sun, Li; Westerdahl, Dane; Ning, Zhi

    2017-08-19

    Emerging low-cost gas sensor technologies have received increasing attention in recent years for air quality measurements due to their small size and convenient deployment. However, in diverse applications these sensors face many technological challenges, including sensor drift over long-term deployment that cannot be easily addressed using mathematical correction algorithms or machine learning methods. This study aims to develop a novel approach to auto-correct the drift of a commonly used electrochemical nitrogen dioxide (NO₂) sensor, with comprehensive evaluation of its application. The impact of environmental factors on the NO₂ electrochemical sensor in low-ppb concentration measurements was evaluated in the laboratory, along with a temperature and relative humidity correction algorithm. An automated zeroing protocol was developed and assessed, using a chemical absorbent to remove NO₂ as a means to perform zero correction in varying ambient conditions. The sensor system was operated in three different environments in which data were compared to a reference NO₂ analyzer. The results showed that the zero-calibration protocol effectively corrected the observed drift of the sensor output. This technique offers the ability to enhance the performance of low-cost sensor based systems, and these findings suggest extension of the approach to improve data quality from sensors measuring other gaseous pollutants in urban air.
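
    The core of such an automated zeroing scheme can be sketched in a few lines: readings taken while the sensor samples NO₂-free air (through the chemical absorbent) define a drifting baseline that is interpolated in time and subtracted from the ambient data. The function names and the linear-interpolation choice below are assumptions for illustration, not the authors' published algorithm.

        import numpy as np

        def zero_correct(t, raw, t_zero, zero_readings):
            """Subtract a time-interpolated zero baseline from raw output.

            t, raw        : timestamps and raw signal during ambient sampling
            t_zero, zero_*: timestamps and signal recorded on zero air
            """
            baseline = np.interp(t, t_zero, zero_readings)  # drift estimate
            return raw - baseline

        # Example: hourly zero cycles bracketing a slowly drifting sensor
        t = np.arange(0.0, 6.0, 0.01)          # hours
        truth = 20.0 + 5.0 * np.sin(t)         # "real" NO2 signal (ppb)
        raw = truth + 2.0 * t                  # add a slow baseline drift
        t_zero = np.arange(0.0, 6.1, 1.0)      # automated zeroing events
        zeros = 2.0 * t_zero                   # sensor output on zero air
        corrected = zero_correct(t, raw, t_zero, zeros)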

  7. Future Control and Automation : Proceedings of the 2nd International Conference on Future Control and Automation

    CERN Document Server

    2012-01-01

    This volume, Future Control and Automation - Volume 2, includes the best papers from the 2012 2nd International Conference on Future Control and Automation (ICFCA 2012), held on July 1-2, 2012, in Changsha, China. Future control and automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. The volume is divided into six sessions on the basis of the classification of the manuscripts considered: Mathematical Modeling, Analysis and Computation, Control Engineering, Reliable Networks Design, Vehicular Communications and Networking, Automation and Mechatronics.

  8. Automation for a base station stability testing

    OpenAIRE

    Punnek, Elvis

    2016-01-01

    This Bachelor's thesis was commissioned by Oy LM Ericsson Ab Oulu. Its aim was to help investigate and create a test automation solution for the stability testing of the LTE base station. The main objective was to create test automation for a predefined test set. This test automation solution had to be created for specific environments and equipment. The work included creating the automation for the test cases and putting them into daily test automation jobs. The key factor...

  9. Computer-automated tuning of semiconductor double quantum dots into the single-electron regime

    Energy Technology Data Exchange (ETDEWEB)

    Baart, T. A.; Vandersypen, L. M. K. [QuTech, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Eendebak, P. T. [QuTech, Delft University of Technology, P.O. Box 5046, 2600 GA Delft (Netherlands); Netherlands Organisation for Applied Scientific Research (TNO), P.O. Box 155, 2600 AD Delft (Netherlands); Reichl, C.; Wegscheider, W. [Solid State Physics Laboratory, ETH Zürich, 8093 Zürich (Switzerland)

    2016-05-23

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime. The algorithm only requires (1) prior knowledge of the gate design and (2) the pinch-off value of the single gate T that is shared by all the quantum dots. This work significantly alleviates the user effort required to tune multiple quantum dot devices.
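
    One building block of such a tuning routine, locating a gate's pinch-off voltage, can be sketched as a simple downward sweep. This is a hypothetical illustration: the measure_current callback stands in for the real instrument readout, and all voltages and thresholds below are invented, not taken from the paper.

        def find_pinch_off(measure_current, v_start=0.0, v_stop=-2.0,
                           step=-0.01, threshold=1e-12):
            """Step a gate voltage down until the current falls below threshold."""
            v = v_start
            while v >= v_stop:
                if measure_current(v) < threshold:
                    return v          # pinch-off value seeds the dot tuning
                v += step
            raise RuntimeError("no pinch-off found in sweep range")

        # Toy transport model: current vanishes linearly below -1.5 V
        pinch = find_pinch_off(lambda v: 1e-9 * max(0.0, 1.0 + v / 1.5))
        print(round(pinch, 3))   # about -1.5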

  10. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  11. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    International Nuclear Information System (INIS)

    Gleisberg, Tanju

    2008-01-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key for the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove…

  13. Development of automated operating procedure system using fuzzy colored petri nets for nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Seong, Poong Hyun

    2004-01-01

    In this work, an AuTomated Operating Procedure System (ATOPS) is developed. ATOPS is an automation system for emergency operation of a nuclear power plant (NPP): it can monitor signals, diagnose statuses, and generate control actions according to the corresponding operating procedures without any human operator's help. The main functions of ATOPS are an anomaly detection function and a procedure execution function, but only the procedure execution function is implemented in this work, as a first step. In the procedure execution function, operating procedures of NPPs are analyzed and modeled using Fuzzy Colored Petri Nets (FCPN) and executed depending on the decision making of the inference engine. An ATOPS prototype is developed to demonstrate its feasibility and is validated using the FISA-2/WS simulator. The validation is performed for the cases of a loss of coolant accident (LOCA) and a steam generator tube rupture (SGTR). The simulation results show that ATOPS works correctly in the emergency situations.
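
    The token-flow idea behind executing a procedure as a Petri net can be illustrated with an ordinary (non-fuzzy, non-colored) net in a few lines of Python. The real ATOPS models are Fuzzy Colored Petri Nets driven by an inference engine, so the step names and this stripped-down structure are only illustrative.

        class Transition:
            """A procedure step that fires when all input places hold tokens."""
            def __init__(self, name, inputs, outputs):
                self.name, self.inputs, self.outputs = name, inputs, outputs

            def enabled(self, marking):
                return all(marking.get(p, 0) > 0 for p in self.inputs)

            def fire(self, marking):
                for p in self.inputs:
                    marking[p] -= 1
                for p in self.outputs:
                    marking[p] = marking.get(p, 0) + 1

        # Toy fragment of an emergency procedure: diagnose, then act
        steps = [
            Transition("diagnose_LOCA", ["alarm_received"], ["LOCA_confirmed"]),
            Transition("start_safety_injection", ["LOCA_confirmed"], ["SI_running"]),
        ]
        marking = {"alarm_received": 1}
        for t in steps:
            if t.enabled(marking):
                t.fire(marking)
        print(marking)  # {'alarm_received': 0, 'LOCA_confirmed': 0, 'SI_running': 1}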

  14. Automation System Products and Research

    OpenAIRE

    Rintala, Mikko; Sormunen, Jussi; Kuisma, Petri; Rahkala, Matti

    2014-01-01

    Automation systems are used in most buildings nowadays. In the past they were mainly used in industry to control and monitor critical systems. During the past few decades automation systems have become more common and are used today in everything from big industrial solutions to the homes of private customers. With the growing need for ecological and cost-efficient management systems, home and building automation systems are becoming a standard way of controlling lighting, ventilation, heating etc. Auto...

  15. Guidelines for Automation Project Execution

    OpenAIRE

    Takkinen, Heidi

    2011-01-01

    The purpose of this Master's thesis was to create instructions for executing an automation project. Sarlin Oy Ab needed directions on how to execute an automation project. Sarlin is starting up a new business area offering total project solutions for customers, focusing on small and minor automation projects on domestic markets. The thesis presents issues related to project execution, starting from the theory of the project to its kick-off and termination. Site work is one importan...

  16. Methods for Automated and Continuous Commissioning of Building Systems

    Energy Technology Data Exchange (ETDEWEB)

    Larry Luskay; Michael Brambley; Srinivas Katipamula

    2003-04-30

    Avoidance of poorly installed HVAC systems is best accomplished at the close of construction by having a building and its systems put "through their paces" with a well conducted commissioning process. This research project focused on developing key components to enable the development of tools that will automatically detect and correct equipment operating problems, thus providing continuous and automatic commissioning of the HVAC systems throughout the life of a facility. A study of pervasive operating problems revealed that the following would most benefit from an automated and continuous commissioning process: (1) faulty economizer operation; (2) malfunctioning sensors; (3) malfunctioning valves and dampers; and (4) access to project design data. Methodologies for detecting system operation faults in these areas were developed and validated in "bare-bones" forms within standard software such as spreadsheets, databases, and statistical or mathematical packages. Demonstrations included flow diagrams and simplified mock-up applications. Techniques to manage data were demonstrated by illustrating how test forms could be populated with original design information and the recommended sequence of operation for equipment systems. The proposed tools would use measured data, design data, and equipment operating parameters to diagnose system problems. Steps for future research are suggested to help move toward practical application of automated commissioning and its high potential to improve equipment availability, increase occupant comfort, and extend the life of system equipment.
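
    As a flavor of the "bare-bones" rule checks such a project might prototype, the sketch below flags one of the listed problem areas, faulty economizer operation, from three air-temperature sensors: when outdoor air is cooler than return air and mechanical cooling is running, the outdoor-air fraction inferred from a mixing-box energy balance should be near one. The threshold and signal names are illustrative assumptions, not values from the report.

        def economizer_fault(t_oa, t_ret, t_mix, cooling_on, min_oaf=0.9):
            """Flag an economizer fault when free cooling is available but
            the inferred outdoor-air fraction is far below fully open."""
            if not cooling_on or t_oa >= t_ret:
                return False                        # rule does not apply
            oaf = (t_mix - t_ret) / (t_oa - t_ret)  # mixing-box energy balance
            return oaf < min_oaf

        # 15 C outside, 24 C return, 23 C mixed air while chilling -> fault
        assert economizer_fault(15.0, 24.0, 23.0, cooling_on=True)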

  17. Automated Reuse of Scientific Subroutine Libraries through Deductive Synthesis

    Science.gov (United States)

    Lowry, Michael R.; Pressburger, Thomas; VanBaalen, Jeffrey; Roach, Steven

    1997-01-01

    Systematic software construction offers the potential of elevating software engineering from an art form to an engineering discipline. The desired result is more predictable software development leading to better quality and more maintainable software. However, the overhead costs associated with the formalisms, mathematics, and methods of systematic software construction have largely precluded their adoption in real-world software development. In fact, many mainstream software development organizations, such as Microsoft, still maintain a predominantly oral culture for software development projects, which is far removed from a formalism-based culture for software development. An exception is the limited domain of safety-critical software, where the high assurance inherent in systematic software construction justifies the additional cost. We believe that systematic software construction will only be adopted by mainstream software development organizations when the overhead costs have been greatly reduced. Two approaches to cost mitigation are reuse (amortizing costs over many applications) and automation. For the last four years, NASA Ames has funded the Amphion project, whose objective is to automate software reuse through techniques from systematic software construction. In particular, deductive program synthesis (i.e., program extraction from proofs) is used to derive a composition of software components (e.g., subroutines) that correctly implements a specification. The construction of reuse libraries of software components is the standard software engineering solution for improving software development productivity and quality.

  18. Organizational changes and automation: Towards a customer-oriented automation: Part 3

    International Nuclear Information System (INIS)

    Van Gelder, J.W.

    1994-01-01

    Automation offers great opportunities in the efforts of energy utilities in the Netherlands to reorganize towards more customer-oriented businesses. However, automation in itself is not enough. First, the organizational structure has to be changed considerably, and various energy utilities have already started doing so. The restructuring principle is the same everywhere, but the way it is implemented differs widely. In this article attention is paid to the necessity of realizing an integrated computerized system, which, however, is not feasible at the moment. The second-best alternative is to use various computerized systems capable of two-way data exchange. Two viable approaches are discussed: (1) one operating system on which all automated systems within a company should run, or (2) selective linking of systems on the basis of the required speed of information exchange. Option (2) offers more freedom in selecting systems. 2 figs

  19. Intelligent viewing control for robotic and automation systems

    Science.gov (United States)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, 'hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as 'Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real-time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned ('choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated video-graphic single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  20. You're a What? Automation Technician

    Science.gov (United States)

    Mullins, John

    2010-01-01

    Many people think of automation as laborsaving technology, but it sure keeps Jim Duffell busy. Defined simply, automation is a technique for making a device run or a process occur with minimal direct human intervention. But the functions and technologies involved in automated manufacturing are complex. Nearly all functions, from orders coming in…

  1. Does Automated Feedback Improve Writing Quality?

    Science.gov (United States)

    Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N.

    2014-01-01

    The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…

  2. System reliability, performance and trust in adaptable automation.

    Science.gov (United States)

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  3. NLO electroweak automation and precise predictions for W+ multijet production at the LHC

    International Nuclear Information System (INIS)

    Kallweit, S.; Lindert, J.M.; Maierhöfer, P.; Pozzorini, S.; Schönherr, M.

    2015-01-01

    We present a fully automated implementation of next-to-leading order electroweak (NLO EW) corrections in the OPENLOOPS matrix-element generator combined with the SHERPA and MUNICH Monte Carlo frameworks. The process-independent character of the implemented algorithms opens the door to NLO QCD+EW simulations for a vast range of Standard Model processes, up to high particle multiplicity, at current and future colliders. As a first application, we present NLO QCD+EW predictions for the production of positively charged on-shell W bosons in association with up to three jets at the Large Hadron Collider. At the TeV energy scale, due to the presence of large Sudakov logarithms, EW corrections reach the 20–40% level and play an important role for searches of physics beyond the Standard Model. The dependence of NLO EW effects on the jet multiplicity is investigated in detail, and we find that W+ multijet final states feature genuinely different EW effects as compared to the case of W+1 jet.

  4. Automated estimation of defects in magnetographic defectoscopy. 1. Automated magnetographic flaw detectors

    International Nuclear Information System (INIS)

    Mikhajlov, S.P.; Vaulin, S.L.; Shcherbinin, V.E.; Shur, M.L.

    1993-01-01

    Consideration is given to the specific features and possible functions of equipment for the automated estimation of elongated continuity defects in samples with plane surfaces in magnetographic defectoscopy. Two models of automated magnetographic flaw detectors, one with a built-in microcomputer and one in the form of a computer attachment, are described. Directions for further research and development are discussed. 35 refs., 6 figs.

  5. Order Division Automated System.

    Science.gov (United States)

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  6. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2014-12-01

    Full Text Available Myths in the automation of software testing are a recurring topic of discussion in the software validation industry. The first thought to occur to a knowledgeable reader would probably be: why this old topic again? What is new to discuss? Yet everyone agrees that test automation today is not what it was ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications now involves complex architectures and hybrid frameworks that facilitate testing applications developed with various platforms and technologies. Undoubtedly automation has advanced, but so have the myths associated with it. The change in people's perspective on and knowledge of automation has altered the terrain. This article reflects the author's views and experience regarding how the original myths have transformed into new versions and how they are derived, and also provides his thoughts on the new generation of myths.

  8. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    The Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, currently available global topographic data are confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and the SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Lastly, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error Removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four different DEMs, and favorable results have been obtained on the corrected DEM.
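
    The last correction step, removing positive biases so that water can flow along the extracted streams, reduces in its simplest form to a running minimum applied in the downstream direction. The sketch below uses that simplification for illustration only; the paper's actual adjustment is based on bed slope.

        import numpy as np

        def enforce_flow_connectivity(stream_elevations):
            """Lower raised segments along a downstream-ordered profile."""
            z = np.asarray(stream_elevations, dtype=float).copy()
            for i in range(1, len(z)):
                z[i] = min(z[i], z[i - 1])   # carve spurious bumps down
            return z

        profile = [105.0, 104.2, 106.8, 103.9, 104.4, 102.7]  # raised at 2 and 4
        print(enforce_flow_connectivity(profile))
        # [105.0, 104.2, 104.2, 103.9, 103.9, 102.7]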

  9. Automated uranium analysis by delayed-neutron counting

    International Nuclear Information System (INIS)

    Kunzendorf, H.; Loevborg, L.; Christiansen, E.M.

    1980-10-01

    Automated uranium analysis by fission-induced delayed-neutron counting is described. A short description is given of the instrumentation, including the transfer system, process control, irradiation and counting sites, and computer operations. Characteristic parameters of the facility (sample preparation, background, and standards) are discussed. A sensitivity of 817 ± 22 counts per 10⁻⁶ g U is found using irradiation, delay, and counting times of 20 s, 5 s, and 10 s, respectively. Precision is generally less than 1% for normal geological samples. The critical level and detection limit for 7.5 g samples are 8 and 16 ppb, respectively. The importance of some physical and elemental interferences is outlined. Dead-time corrections of measured count rates are necessary, and a polynomial expression is used for count rates up to 10⁵. The presence of rare earth elements is regarded as the most important elemental interference. A typical application is given and other areas of application are described. (author)
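
    Using the sensitivity quoted above, converting net delayed-neutron counts to a uranium mass is a one-line calculation. The dead-time polynomial coefficients and the background level below are placeholders for illustration, since the abstract does not list them.

        def deadtime_correct(measured, a1=1.0, a2=1.0e-7, a3=0.0):
            """Polynomial dead-time correction, applicable to rates up to ~1e5."""
            return a1 * measured + a2 * measured**2 + a3 * measured**3

        def uranium_micrograms(gross, background, sensitivity=817.0):
            """Net counts divided by the quoted 817 counts per microgram U."""
            return (gross - background) / sensitivity

        gross = deadtime_correct(40000.0)                   # corrected counts
        print(uranium_micrograms(gross, background=150.0))  # about 49 ug U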

  10. Proof-of-concept automation of propellant processing

    Science.gov (United States)

    Ramohalli, Kumar; Schallhorn, P. A.

    1989-01-01

    For space-based propellant production, automation of the process is needed. Currently, all phases of terrestrial production have some form of human interaction. A mixer was acquired to help perform the tasks of automation. A heating system to be used with the mixer was designed, built, and installed. Tests performed on the heating system verify the design criteria. An IBM PS/2 personal computer was acquired for the future automation work. It is hoped that some of the mixing process itself will be automated. This is a concept demonstration task, proving that propellant production can be automated reliably.

  11. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment of jaws ...

  12. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
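
    A toy version of the mixed-integer programming formulation makes the idea concrete: binary variables select items so that summed item information is maximized subject to form specifications. The item data are invented, and PuLP is just one of several solvers that could express such a model.

        import pulp

        info    = [0.8, 1.2, 0.5, 0.9, 1.1, 0.7]  # item information at target theta
        algebra = [1, 0, 1, 1, 0, 0]              # 1 = algebra content
        n, form_len, min_algebra = len(info), 3, 2

        x = pulp.LpVariable.dicts("item", range(n), cat="Binary")
        prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
        prob += pulp.lpSum(info[i] * x[i] for i in range(n))      # accuracy
        prob += pulp.lpSum(x[i] for i in range(n)) == form_len    # length spec
        prob += pulp.lpSum(algebra[i] * x[i] for i in range(n)) >= min_algebra
        prob.solve()
        print([i for i in range(n) if x[i].value() == 1])   # [0, 1, 3]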

  13. Context-Aware user interfaces in automation

    DEFF Research Database (Denmark)

    Olsen, Mikkel Holm

    2007-01-01

    Automation is deployed in a great range of different domains such as the chemical industry, the production of consumer goods, the production of energy (both in terms of power plants and in the petrochemical industry), transportation and several others. Through several decades the complexity of automation systems and the level of automation have been rising. This has caused problems regarding the operator's ability to comprehend the overall situation and state of the automation system, in particular in abnormal situations. The amount of data available to the operator results in information overload. Since context-aware applications have been developed in other research areas it seems natural to analyze the findings of this research and examine how this can be applied to the domain of automation systems. By evaluating existing architectures for the development of context-aware applications we find…

  14. Automated transit planning, operation, and applications

    CERN Document Server

    Liu, Rongfang

    2016-01-01

    This book analyzes the successful implementations of automated transit in various international locations, such as Paris, Toronto, London, and Kuala Lumpur, and investigates the apparent lack of automated transit applications in the urban environment in the United States. The book begins with a brief definition of automated transit and its historical development. After a thorough description of the technical specifications, the author highlights a few applications from each sub-group of the automated transit spectrum. International case studies display various technologies and their applications, and identify vital factors that affect each system and performance evaluations of existing applications. The book then discusses the planning and operation of automated transit applications at both macro and micro levels. Finally, the book covers a number of less successful concepts, as well as the lessons learned, allowing readers to gain a comprehensive understanding of the topic.

  15. Participation through Automation: Fully Automated Critical PeakPricing in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David S.; Motegi, Naoya; Kiliccote,Sila; Linkugel, Eric

    2006-06-20

    California electric utilities have been exploring the use of dynamic critical peak prices (CPP) and other demand response programs to help reduce peaks in customer electric loads. CPP is a tariff design to promote demand response. Levels of automation in DR can be defined as follows: Manual Demand Response involves a potentially labor-intensive approach such as manually turning off equipment or changing comfort set points at each equipment switch or controller. Semi-Automated Demand Response involves a pre-programmed demand response strategy initiated by a person via a centralized control system. Fully Automated Demand Response does not involve human intervention, but is initiated at a home, building, or facility through receipt of an external communications signal; the receipt of the external signal initiates pre-programmed demand response strategies. They refer to this as Auto-DR. This paper describes the development, testing, and results from automated CPP (Auto-CPP) as part of a utility project in California. The paper presents the project description and test methodology. This is followed by a discussion of Auto-DR strategies used in the field test buildings. They present a sample Auto-CPP load shape case study and a selection of the Auto-CPP response data from September 29, 2005. If all twelve sites reached their maximum savings simultaneously, a total of approximately 2 MW of DR would be available from these twelve sites, which represent about two million ft². The average DR was about half that value, at about 1 MW. These savings translate to about 0.5 to 1.0 W/ft² of demand reduction. They are continuing field demonstrations and economic evaluations to pursue increasing penetrations of automated DR, which has demonstrated the ability to provide a valuable DR resource for California.

  16. The impact of recreational MDMA 'ecstasy' use on global form processing.

    Science.gov (United States)

    White, Claire; Edwards, Mark; Brown, John; Bell, Jason

    2014-11-01

    The ability to integrate local orientation information into a global form percept was investigated in long-term ecstasy users. Evidence suggests that ecstasy disrupts the serotonin system, with the visual areas of the brain being particularly susceptible. Previous research has found altered orientation processing in the primary visual area (V1) of users, thought to be due to disrupted serotonin-mediated lateral inhibition. The current study aimed to investigate whether orientation deficits extend to higher visual areas involved in global form processing. Forty-five participants completed a psychophysical (Glass pattern) study allowing an investigation into the mechanisms underlying global form processing and sensitivity to changes in the offset of the stimuli (jitter). A subgroup of polydrug-ecstasy users (n=6) with high ecstasy use had significantly higher thresholds for the detection of Glass patterns than controls (n=21, p=0.039) after Bonferroni correction. There was also a significant interaction between jitter level and drug-group, with polydrug-ecstasy users showing reduced sensitivity to alterations in jitter level (p=0.003). These results extend previous research, suggesting disrupted global form processing and reduced sensitivity to orientation jitter with ecstasy use. Further research is needed to investigate this finding in a larger sample of heavy ecstasy users and to differentiate the effects of other drugs. © The Author(s) 2014.

  17. The role of auditory feedback in music-supported stroke rehabilitation: A single-blinded randomised controlled intervention.

    Science.gov (United States)

    van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E

    2016-01-01

    Learning to play musical instruments such as piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds are presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. Conversely, some clinical tests suggest the jitter group outperformed the normal group. Auditory feedback-based motor learning is not the beneficial mechanism of music-supported therapy. Immediate auditory feedback therapy may be suboptimal. Jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.

  18. Managing laboratory automation

    OpenAIRE

    Saboe, Thomas J.

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A big-picture, or continuum, view is presented and some of the reasons for success or failure of the various examples cited are explored. Fina...

  19. LIBRARY AUTOMATION IN NIGERIAN UNIVERSITIES

    African Journals Online (AJOL)

    facilitate services and access to information in libraries is widely acceptable. ... Moreover, Ugah (2001) reports that the automation process at the Abubakar ... blueprint in 1987, and a turn-key system of automation was suggested for the library.

  20. Automated species-level identification and segmentation of planktonic foraminifera using convolutional neural networks

    Science.gov (United States)

    Marchitto, T. M., Jr.; Mitra, R.; Zhong, B.; Ge, Q.; Kanakiya, B.; Lobaton, E.

    2017-12-01

    Identification and picking of foraminifera from sediment samples is often a laborious and repetitive task. Previous attempts to automate this process have met with limited success, but we show that recent advances in machine learning can be brought to bear on the problem. As a 'proof of concept' we have developed a system that is capable of recognizing six species of extant planktonic foraminifera that are commonly used in paleoceanographic studies. Our pipeline begins with digital photographs taken under 16 different illuminations using an LED ring, which are then fused into a single 3D image. Labeled image sets were used to train various types of image classification algorithms, and performance on unlabeled image sets was measured in terms of precision (whether IDs are correct) and recall (what fraction of the target species are found). We find that Convolutional Neural Network (CNN) approaches achieve precision and recall values between 80 and 90%, which is similar precision to, and better recall than, human expert performance using the same type of photographs. We have also trained a CNN to segment the 3D images into individual chambers and apertures, which can not only improve identification performance but also automate the measurement of foraminifera for morphometric studies. Given that there are only 35 species of extant planktonic foraminifera larger than 150 μm, we suggest that a fully automated characterization of this assemblage is attainable. This is the first step toward the realization of a foram-picking robot.
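
    A minimal convolutional classifier of the general kind described, for six species, can be written in a few lines of PyTorch. The layer sizes, input resolution, and the use of three channels for the fused illumination image are placeholders rather than the authors' architecture.

        import torch
        import torch.nn as nn

        class ForamCNN(nn.Module):
            def __init__(self, n_species=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, n_species)

            def forward(self, x):           # x: (batch, 3, H, W) fused images
                return self.classifier(self.features(x).flatten(1))

        model = ForamCNN()
        logits = model(torch.randn(4, 3, 128, 128))  # four dummy images
        print(logits.shape)                          # torch.Size([4, 6])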

  1. Estimating Regional Mass Balance of Himalayan Glaciers Using Hexagon Imagery: An Automated Approach

    Science.gov (United States)

    Maurer, J. M.; Rupper, S.

    2013-12-01

    Currently there is much uncertainty regarding the present and future state of Himalayan glaciers, which supply meltwater for river systems vital to more than 1.4 billion people living throughout Asia. Previous assessments of regional glacier mass balance in the Himalayas using various remote sensing and field-based methods give inconsistent results, and most assessments are over relatively short (e.g., single decade) timescales. This study aims to quantify multi-decadal changes in volume and extent of Himalayan glaciers through efficient use of the large database of declassified 1970-80s era Hexagon stereo imagery. Automation of the DEM extraction process provides an effective workflow for many images to be processed and glacier elevation changes quantified with minimal user input. The tedious procedure of manual ground control point selection necessary for block-bundle adjustment (as ephemeris data are not available for the declassified images) is automated using the Maximally Stable Extremal Regions algorithm, which matches image elements between raw Hexagon images and georeferenced 15 m panchromatic Landsat images. Additional automated Hexagon DEM processing, co-registration, and bias correction allow for direct comparison with modern ASTER and SRTM elevation data, thus quantifying glacier elevation and area changes over several decades across largely inaccessible mountainous regions. As consistent methodology is used for all glaciers, results will likely reveal significant spatial and temporal patterns in regional ice mass balance. Ultimately, these findings could have important implications for future water resource management in light of environmental change.
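
    The automated tie-point idea can be sketched with OpenCV: detect MSER keypoints in a raw Hexagon frame and in the georeferenced Landsat panchromatic band, describe them, and cross-match. The file names are placeholders, and pairing MSER detection with SIFT descriptors is one reasonable choice, not necessarily the study's exact pipeline.

        import cv2

        hexagon = cv2.imread("hexagon_frame.tif", cv2.IMREAD_GRAYSCALE)
        landsat = cv2.imread("landsat_pan.tif", cv2.IMREAD_GRAYSCALE)

        mser = cv2.MSER_create()    # stable regions survive contrast changes
        sift = cv2.SIFT_create()    # descriptors for matching the regions

        kp_h, des_h = sift.compute(hexagon, mser.detect(hexagon))
        kp_l, des_l = sift.compute(landsat, mser.detect(landsat))

        matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
        matches = matcher.match(des_h, des_l)
        tie_points = [(kp_h[m.queryIdx].pt, kp_l[m.trainIdx].pt) for m in matches]
        # tie_points feed the block-bundle adjustment as control points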

  2. Automated 741 document preparation: Oak Ridge National Laboratory's Automated Safeguards Information System (OASIS)

    International Nuclear Information System (INIS)

    Austin, H.C.; Gray, L.M.

    1982-01-01

    OASIS has been providing for Oak Ridge National Laboratory's total safeguards needs since being placed on line in April 1980. The system supports near real-time nuclear materials safeguards and accountability control. The original design of OASIS called for an automated facsimile of a 741 document to be prepared as a functional by-product of updating the inventory. An attempt was made to utilize, intact, DOE-Albuquerque's automated 741 system to generate the facsimile; however, the five-page document produced proved too cumbersome. Albuquerque's programs were modified to print an original 741 document utilizing standard DOE/NRC 741 forms. It is felt that the best features of both the automated and manually generated 741 documents have been incorporated. Automation of the source data for 741 shipping documents produces greater efficiency while reducing possible errors. Through utilization of the standard DOE/NRC form, continuity within the NMMSS system is maintained, thus minimizing the confusion and redundancy associated with facsimiles. OASIS now fulfills the original concept of near real-time accountability by furnishing a viable 741 document as a function of updating the inventory.

  3. Individual differences in the calibration of trust in automation.

    Science.gov (United States)

    Pop, Vlad L; Shrewsbury, Alex; Durso, Francis T

    2015-06-01

    The objective was to determine whether operators with an expectancy that automation is trustworthy are better at calibrating their trust to changes in the capabilities of automation, and if so, why. Studies suggest that individual differences in automation expectancy may be able to account for why changes in the capabilities of automation lead to a substantial change in trust for some, yet only a small change for others. In a baggage screening task, 225 participants searched for weapons in 200 X-ray images of luggage. Participants were assisted by an automated decision aid exhibiting different levels of reliability. Measures of expectancy that automation is trustworthy were used in conjunction with subjective measures of trust and perceived reliability to identify individual differences in trust calibration. Operators with high expectancy that automation is trustworthy were more sensitive to changes (both increases and decreases) in automation reliability. This difference was eliminated by manipulating the causal attribution of automation errors. Attributing the cause of automation errors to factors external to the automation fosters an understanding of tasks and situations in which automation differs in reliability and may lead to more appropriate trust. The development of interventions can lead to calibrated trust in automation. © 2014, Human Factors and Ergonomics Society.

  4. Automation of Taxiing

    Directory of Open Access Journals (Sweden)

    Jaroslav Bursík

    2017-01-01

    Full Text Available The article focuses on the possibility of automating taxiing, the phase of flight that has not yet been affected by automation and that, under adverse weather conditions, greatly reduces the operational usability of an airport. Taxiing is currently handled manually by the pilot, who controls the airplane based on information from visual perception. The article primarily deals with possible ways of obtaining navigational information and transferring it automatically to the controls. Currently available technologies useful for navigation, such as computer vision, Light Detection and Ranging, and Global Navigation Satellite Systems, were analyzed and assessed, and their general implementation into an airplane was designed. Obstacles to the implementation were identified, too. The result is a proposed combination of systems, along with their installation into the airplane's systems, so that it is possible to use automated taxiing.

  5. Programmable automation systems in PSA

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1997-06-01

    The Finnish safety authority (STUK) requires plant-specific PSAs, and quantitative safety goals are set on different levels. The reliability analysis is more problematic when critical safety functions are realized by applying programmable automation systems. Conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. However, it is important to analyze the contribution of programmable automation systems to plant safety, and PSA is the only method with a system-analytical view over safety. This report discusses the applicability of PSA methodology (fault tree analyses, failure modes and effects analyses) in the analysis of programmable automation systems. The problem of how to decompose programmable automation systems for reliability modeling purposes is discussed. In addition to the qualitative analysis and structural reliability modeling issues, the possibility to evaluate failure probabilities of programmable automation systems is considered. One solution to the quantification issue is the use of expert judgements, and the principles for applying expert judgements are discussed in the paper. A framework for applying expert judgements is outlined. Further, the impacts of subjective estimates on the interpretation of PSA results are discussed. (orig.) (13 refs.)

  6. How to assess sustainability in automated manufacturing

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Rödger, Jan-Markus; Bey, Niki

    2015-01-01

    The aim of this paper is to describe how sustainability in automation can be assessed. The assessment method is illustrated using a case study of a robot. Three aspects of sustainability assessment in automation are identified. Firstly, we consider automation as part of a larger system that fulfills the market demand for a given functionality. Secondly, three aspects of sustainability have to be assessed: environment, economy, and society. Thirdly, automation is part of a system with many levels, with different actors on each level, resulting in meeting the market demand. In this system, (sustainability) specifications move top-down, which helps avoiding sub-optimization and problem shifting. From these three aspects, sustainable automation is defined as automation that contributes to products that fulfill a market demand in a more sustainable way. The case study presents the carbon footprints…

  7. Automation of coal mining equipment

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, Ryuji

    1986-12-25

    Major machines used in the working face include the shearer and the self-advancing frame. The shearer has been changed from the radio-controlled model to a microcomputer-operated machine, with various functions automated. In addition, a system for comprehensively examining operating conditions and natural conditions in the working face is being developed for further automation. The self-advancing frame has been modified from the sequence-controlled model to a microcomputer-aided electrohydraulic control system. In order to proceed further with automation and introduce robotics, detectors, control units, and valves must be made smaller and more reliable. In the future the system will be controlled from above ground, with the machines in the working face remote-controlled at the gate and the relevant data transmitted above ground. Thus, an automated working face will be realized. (2 figs, 1 photo)

  8. Controls and automation in the SPIRAL project

    International Nuclear Information System (INIS)

    Bothner, U.; Boulot, A.; Maherault, J.; Martial, L.

    1999-01-01

    Within the framework of the SPIRAL collaboration, the control and automation team of the Accelerator-Exotic Beam R and D Department had the following tasks: 1. automation of the resonator high-frequency equipment of the CIME cyclotron; 2. automation of the vacuum equipment, i.e. the very low energy line (TBE), the CIME cyclotron, and the low energy line (BE); 3. automation of load safety for the power supplies; 4. for each of these tasks, a circuitry file based on the SCHEMA software was worked out. The programs required for the automation of load safety for the power supplies (STEP5, PROTOOL, DESIGNER 4.1) were developed and implemented on PC

  9. Automated controlled-potential coulometric determination of uranium

    International Nuclear Information System (INIS)

    Knight, C.H.; Clegg, D.E.; Wright, K.D.; Cassidy, R.M.

    1982-06-01

    A controlled-potential coulometer has been automated in our laboratory for the routine determination of uranium in solution. The CRNL-designed automated system controls degassing, prereduction, and reduction of the sample; the final result is displayed on a digital coulometer readout. Manual and automated modes of operation are compared to show the precision and accuracy of the automated system. Results are also shown for the coulometric titration of typical uranium-aluminum alloy samples
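
    For context, the arithmetic behind such a determination is Faraday's law: the mass of uranium follows from the integrated charge, assuming the U(VI) to U(IV) reduction (two electrons per atom). The sketch below uses hypothetical numbers; it is not the CRNL system's algorithm.

      # Worked example of Faraday's-law arithmetic for controlled-potential
      # coulometry (hypothetical charge reading; not the CRNL implementation).

      F = 96485.332      # Faraday constant, C/mol
      M_U = 238.02891    # molar mass of uranium, g/mol
      n = 2              # electrons per atom for the U(VI) -> U(IV) reduction

      Q = 8.107          # integrated charge in coulombs (hypothetical)

      mass_mg = 1000.0 * Q * M_U / (n * F)
      print(f"Uranium found: {mass_mg:.2f} mg")  # ~10.00 mg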

  10. Office automation: a look beyond word processing

    OpenAIRE

    DuBois, Milan Ephriam, Jr.

    1983-01-01

    Approved for public release; distribution is unlimited Word processing was the first of various forms of office automation technologies to gain widespread acceptance and usability in the business world. For many, it remains the only form of office automation technology. Office automation, however, is not just word processing, although it does include the function of facilitating and manipulating text. In reality, office automation is not one innovation, or one office system, or one tech...

  11. The Employment-Impact of Automation in Canada

    OpenAIRE

    McLean, Colin Alexander

    2015-01-01

    Standard neoclassical models of labour demand predict that automation does not produce long-term increases in unemployment. Supporting evidence in Canada between 1970 and 2008 is explained by the reallocation of labour from industries with high levels of automation, such as Manufacturing, to industries with low levels of automation, such as Retail and Wholesale Trade and Business Services. Recent evidence indicates, however, that on-going technological advances are now driving labour automation i...

  12. Comparison of manual and semi-automated delineation of regions of interest for radioligand PET imaging analysis

    International Nuclear Information System (INIS)

    Chow, Tiffany W; Verhoeff, Nicolaas PLG; Takeshita, Shinichiro; Honjo, Kie; Pataky, Christina E; St Jacques, Peggy L; Kusano, Maggie L; Caldwell, Curtis B; Ramirez, Joel; Black, Sandra

    2007-01-01

    As imaging centers produce higher-resolution research scans, the number of man-hours required to process regional data has become a major concern. A comparison of automated vs. manual methodology has not been reported for functional imaging. We explored validation of using automation to delineate regions of interest on positron emission tomography (PET) scans. The purpose of this study was to ascertain improvements in image processing time and reproducibility of a semi-automated brain region extraction (SABRE) method over manual delineation of regions of interest (ROIs). We compared 2 sets of partial-volume-corrected serotonin 1a receptor binding potentials (BPs) resulting from manual vs. semi-automated methods. BPs were obtained from subjects meeting consensus criteria for frontotemporal degeneration and from age- and gender-matched healthy controls. Two trained raters provided each set of data to conduct comparisons of inter-rater mean image processing time, rank order of BPs for 9 PET scans, intra- and inter-rater intraclass correlation coefficients (ICC), repeatability coefficients (RC), percentages of the average parameter value (RM%), and effect sizes of either method. SABRE saved approximately 3 hours of processing time per PET subject over manual delineation (p < .001). The quality of the SABRE BP results was preserved relative to the rank order of subjects by manual methods. Intra- and inter-rater ICC were high (>0.8) for both methods. RC and RM% were lower for the manual method across all ROIs, indicating less intra-rater variance across PET subjects' BPs. SABRE demonstrated significant time savings and no significant difference in reproducibility compared with manual methods, justifying the use of SABRE in serotonin 1a receptor radioligand PET imaging analysis. This implies that semi-automated ROI delineation is a valid methodology for future PET imaging analysis.
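
    To make the reproducibility statistics concrete, the sketch below computes a two-way ICC (Shrout and Fleiss ICC(2,1)) and a Bland-Altman-style repeatability coefficient for paired measurements; the data are synthetic and the study's exact preprocessing is not reproduced.

      import numpy as np

      # Paired measurements (e.g., BPs for 9 scans processed by two raters);
      # synthetic numbers for illustration only.
      rater1 = np.array([1.12, 0.98, 1.45, 1.30, 0.87, 1.05, 1.22, 0.91, 1.38])
      rater2 = np.array([1.15, 1.02, 1.40, 1.28, 0.90, 1.01, 1.25, 0.95, 1.35])

      data = np.stack([rater1, rater2], axis=1)   # shape (subjects, raters)
      n, k = data.shape
      grand = data.mean()

      # Two-way ANOVA decomposition for ICC(2,1):
      ss_total = ((data - grand) ** 2).sum()
      ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subjects
      ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # raters
      ss_err = ss_total - ss_rows - ss_cols
      ms_rows = ss_rows / (n - 1)
      ms_cols = ss_cols / (k - 1)
      ms_err = ss_err / ((n - 1) * (k - 1))
      icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                  + k * (ms_cols - ms_err) / n)

      # Repeatability coefficient: 1.96 * SD of the paired differences.
      rc = 1.96 * np.std(rater1 - rater2, ddof=1)

      print(f"ICC(2,1) = {icc:.3f}, RC = {rc:.3f}")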

  13. Complex Automated Negotiations Theories, Models, and Software Competitions

    CERN Document Server

    Zhang, Minjie; Robu, Valentin; Matsuo, Tokuro

    2013-01-01

    Complex Automated Negotiations are a widely studied, emerging area in the field of Autonomous Agents and Multi-Agent Systems. In general, automated negotiations can be complex, since many factors characterize such negotiations. For this book, we solicited papers on all aspects of such complex automated negotiations as studied in the field of Autonomous Agents and Multi-Agent Systems. This book includes two parts: Part I: Agent-based Complex Automated Negotiations and Part II: Automated Negotiation Agents Competition. Each chapter in Part I is an extended version of an ACAN 2011 paper after peer review by three PC members. Part II covers ANAC 2011 (The Second Automated Negotiating Agents Competition), in which automated agents with different negotiation strategies, implemented by different developers, negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of...

  14. Automating the Generation of Heterogeneous Aviation Safety Cases

    Science.gov (United States)

    Denney, Ewen W.; Pai, Ganesh J.; Pohl, Josef M.

    2012-01-01

    A safety case is a structured argument, supported by a body of evidence, which provides a convincing and valid justification that a system is acceptably safe for a given application in a given operating environment. This report describes the development of a fragment of a preliminary safety case for the Swift Unmanned Aircraft System. The construction of the safety case fragment consists of two parts: a manually constructed system-level case, and an automatically constructed lower-level case, generated from formal proof of safety-relevant correctness properties. We provide a detailed discussion of the safety considerations for the target system, emphasizing the heterogeneity of sources of safety-relevant information, and use a hazard analysis to derive safety requirements, including formal requirements. We evaluate the safety case using three classes of metrics for measuring degrees of coverage, automation, and understandability. We then present our preliminary conclusions and make suggestions for future work.

  15. Python Leap Second Management and Implementation of Precise Barycentric Correction (barycorrpy)

    Science.gov (United States)

    Kanodia, Shubham; Wright, Jason

    2018-01-01

    We announce barycorrpy (BCPy), a Python implementation to calculate precise barycentric corrections well below the 1 cm/s level, following the algorithm of Wright and Eastman (2014). This level of precision is required in the search for 1 Earth-mass planets in the Habitable Zones of Sun-like stars by the Radial Velocity (RV) method, where the maximum semi-amplitude is about 9 cm/s. We have developed BCPy for use in the pipelines of the next-generation Doppler spectrometers Habitable-zone Planet Finder (HPF) and NEID. In this work, we also develop an automated leap second management routine to improve upon the one available in Astropy: it checks for and downloads a new leap second file before converting from the UT time scale to TDB.
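
    A usage sketch is shown below. The function and parameter names follow the package's documented interface (get_BC_vel with a UTC Julian Date, target and observatory parameters, and a leap_update flag), but exact signatures and return values may differ between versions, and the target and site coordinates here are only examples.

      # Usage sketch for barycorrpy (hedged: names follow the documented
      # interface at the time of writing; check the docs for your version).
      from barycorrpy import get_BC_vel

      # Barycentric velocity correction (m/s) for a hypothetical observation
      # of tau Ceti (HIP 8102) from an observatory at example coordinates:
      vel, warnings, status = get_BC_vel(
          JDUTC=2458000.5,        # UTC Julian Date of the observation
          starname='HIP 8102',    # target resolved via its HIP identifier
          lat=31.9599,            # observatory latitude, degrees (example)
          longi=-111.5997,        # observatory longitude, degrees (example)
          alt=2096.0,             # observatory altitude, meters (example)
          ephemeris='de430',      # JPL ephemeris to use
          leap_update=True)       # auto-download a fresh leap second file

      print(vel)  # barycentric correction in m/s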

  16. Software for Generating Troposphere Corrections for InSAR Using GPS and Weather Model Data

    Science.gov (United States)

    Moore, Angelyn W.; Webb, Frank H.; Fishbein, Evan F.; Fielding, Eric J.; Owen, Susan E.; Granger, Stephanie L.; Bjoerndahl, Fredrik; Loefgren, Johan; Fang, Peng; Means, James D.

    2013-01-01

    Atmospheric errors due to the troposphere are a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging. This software generates tropospheric delay maps that can be used to correct atmospheric artifacts in InSAR data. The software automatically acquires all needed GPS (Global Positioning System), weather, and Digital Elevation Map data, and generates a tropospheric correction map using a novel algorithm for combining GPS and weather information while accounting for terrain. Existing JPL software was prototypical in nature, required a MATLAB license, required additional steps to acquire and ingest needed GPS and weather data, and did not account for topography in interpolation. Previous software did not achieve a level of automation suitable for integration in a Web portal. This software overcomes these issues. GPS estimates of tropospheric delay are a source of corrections that can be used to form correction maps to be applied to InSAR data, but the spacing of GPS stations is insufficient to remove short-wavelength tropospheric artifacts. This software combines interpolated GPS delay with weather model precipitable water vapor (PWV) and a digital elevation model to account for terrain, increasing the spatial resolution of the tropospheric correction maps and thus removing short-wavelength tropospheric artifacts to a greater extent. It will be integrated into a Web portal request system, allowing use in a future L-band SAR Earth radar mission data system. This will be a significant contribution to its technology readiness, building on existing investments in in situ space geodetic networks, and improving the timeliness, quality, and science value of the collected data.
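
    The combination step described above can be sketched as follows. This is a schematic illustration under assumed names, a generic PWV-to-delay conversion factor, and an assumed exponential vertical scale height; it is not the JPL algorithm.

      # Schematic sketch (not the JPL algorithm): build a tropospheric delay
      # map by interpolating sparse GPS zenith wet delays, adding weather-model
      # PWV structure, and scaling with terrain height.
      import numpy as np
      from scipy.interpolate import griddata

      def troposphere_delay_map(gps_xy, gps_zwd, pwv_map, dem, grid_x, grid_y,
                                scale_height_m=2000.0):
          """Zenith wet delay map (meters) on the (grid_y, grid_x) grid."""
          gx, gy = np.meshgrid(grid_x, grid_y)

          # 1. Long wavelengths: interpolate the sparse GPS zenith wet delays.
          zwd_long = griddata(gps_xy, gps_zwd, (gx, gy), method='cubic')

          # 2. Short wavelengths: weather-model PWV converted to delay,
          #    ZWD ~ Pi * PWV with Pi ~ 6.2 (dimensionless); remove the mean
          #    so the GPS field sets the absolute level.
          zwd_short = 6.2 * pwv_map
          zwd_short -= np.nanmean(zwd_short)

          # 3. Terrain: assume wet delay decays exponentially with elevation.
          terrain = np.exp(-dem / scale_height_m)

          return (zwd_long + zwd_short) * terrain

      # Synthetic example: 20 GPS stations over a 100 km x 100 km area.
      rng = np.random.default_rng(1)
      gps_xy = rng.uniform(0, 100e3, size=(20, 2))
      gps_zwd = 0.15 + 0.02 * rng.standard_normal(20)            # ~15 cm ZWD
      grid_x = grid_y = np.linspace(0, 100e3, 200)
      pwv_map = 0.024 + 0.002 * rng.standard_normal((200, 200))  # PWV, meters
      dem = rng.uniform(0, 1500, size=(200, 200))                # elevation, m
      delay = troposphere_delay_map(gps_xy, gps_zwd, pwv_map, dem,
                                    grid_x, grid_y)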

  17. Automated Methods of Corrosion Measurements

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    1997-01-01

    … Mechanical control, recording, and data processing must therefore be automated to a high level of precision and reliability. These general techniques and the apparatus involved have been described extensively. The automated methods of such high-resolution microscopy coordinated with computerized...

  18. Development of an automated operating procedure system using fuzzy colored petri nets for nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Seong, Poong Hyun

    2002-01-01

    In this work, the AuTomated Operating Procedure System (ATOPS) is developed. ATOPS is an automation system for the operation of a nuclear power plant (NPP) which can monitor signals, diagnose statuses, and generate control actions according to the corresponding operating procedures, without any help from a human operator. The main functions of ATOPS are an anomaly detection function and a procedure execution function, but only the procedure execution function is implemented in this first step of the work. In the procedure execution function, the operating procedures of the NPP are analyzed and modeled using Fuzzy Colored Petri Nets (FCPN) and executed according to the decision making of the inference engine. In this work, an ATOPS prototype is developed in order to demonstrate its feasibility, and it is validated using the FISA-2/WS simulator. The validation is performed for the cases of a loss of coolant accident (LOCA) and a steam generator tube rupture (SGTR). The simulation results show that ATOPS works correctly in emergency situations
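
    To illustrate the modeling idea, the sketch below shows a Petri-net-style procedure step whose transition fires only when the fuzzy truth degrees of its input conditions combine above a threshold. The condition names, membership values, and threshold are invented for illustration and are not taken from ATOPS.

      # Minimal fuzzy-Petri-net-style sketch (illustrative; not the ATOPS code).

      def fuzzy_and(*degrees):
          """Min-combination of fuzzy truth degrees, a common enabling rule."""
          return min(degrees)

      class Transition:
          def __init__(self, name, inputs, outputs, threshold=0.7):
              self.name = name
              self.inputs = inputs      # places (conditions) that must hold
              self.outputs = outputs    # places marked when the step fires
              self.threshold = threshold

          def enabled(self, marking):
              """Fire only if the combined input truth degree is high enough."""
              degrees = (marking.get(p, 0.0) for p in self.inputs)
              return fuzzy_and(*degrees) >= self.threshold

          def fire(self, marking):
              for p in self.outputs:
                  marking[p] = 1.0
              return marking

      # Hypothetical procedure step: isolate a steam generator when its level
      # is low AND secondary-side radiation is high (degrees would come from
      # fuzzified plant signals).
      marking = {'sg_level_low': 0.9, 'secondary_radiation_high': 0.8}
      t = Transition('isolate_SG',
                     ['sg_level_low', 'secondary_radiation_high'],
                     ['sg_isolated'])
      if t.enabled(marking):
          marking = t.fire(marking)
      print(marking)  # ... 'sg_isolated': 1.0 added once the step fires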

  19. Methods for Motion Correction Evaluation Using 18F-FDG Human Brain Scans on a High-Resolution PET Scanner

    DEFF Research Database (Denmark)

    Keller, Sune H.; Sibomana, Merence; Olesen, Oline Vinter

    2012-01-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning, using an optical motion tracking system. Methods: Two scans with minor motion and 5 with major motion (as reported…) were corrected using EMT and the AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. Results: The results…
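
    Two of the QC metrics named above can be sketched as follows; the implementation and the synthetic volumes are illustrative, and the paper's exact preprocessing is not reproduced.

      # Mutual information and cross correlation between a reference image and
      # a motion-corrected image (illustrative QC-metric implementations).
      import numpy as np

      def mutual_information(a, b, bins=64):
          """Mutual information estimated from the joint intensity histogram."""
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      def cross_correlation(a, b):
          """Normalized cross correlation (Pearson r over voxels)."""
          a = a.ravel() - a.mean()
          b = b.ravel() - b.mean()
          return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

      # Synthetic check: a corrected frame should score higher against the
      # reference than an uncorrected (shifted) one.
      rng = np.random.default_rng(0)
      ref = rng.random((32, 32, 32))
      corrected = ref + 0.05 * rng.random((32, 32, 32))
      uncorrected = np.roll(ref, 3, axis=0) + 0.05 * rng.random((32, 32, 32))
      print(mutual_information(ref, corrected)
            > mutual_information(ref, uncorrected))   # True
      print(cross_correlation(ref, corrected)
            > cross_correlation(ref, uncorrected))    # True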

  20. Human-centered automation: Development of a philosophy

    Science.gov (United States)

    Graeber, Curtis; Billings, Charles E.

    1990-01-01

    Information on the human-centered automation philosophy is given in outline/viewgraph form. It is asserted that automation of aircraft control will continue in the future, but that automation should supplement, not supplant, the human management and control function in civil air transport.