WorldWideScience

Sample records for automated jitter correction

  1. Automated jitter correction for IR image processing to assess the quality of W7-X high heat flux components

    International Nuclear Information System (INIS)

    Greuner, H; De Marne, P; Herrmann, A; Boeswirth, B; Schindler, T; Smirnow, M

    2009-01-01

    An automated IR image processing method was developed to evaluate the surface temperature distribution of cyclically loaded high heat flux (HHF) plasma facing components. IPP Garching will perform the HHF testing of a high percentage of the series production of the WENDELSTEIN 7-X (W7-X) divertor targets to minimize the number of undiscovered uncertainties in the finally installed components. The HHF tests will be performed as quality assurance (QA) complementary to the non-destructive examination (NDE) methods used during the manufacturing. The IR analysis of an HHF-loaded component detects growing debonding of the plasma facing material, made of carbon fibre composite (CFC), after a few thermal cycles. In the case of the prototype testing, the IR data was processed manually. However, a QA method requires a reliable, reproducible and efficient automated procedure. Using the example of the HHF testing of W7-X pre-series target elements, the paper describes the developed automated IR image processing method. The algorithm is based on an iterative two-step correlation analysis with an individually defined reference pattern for the determination of the jitter.
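The core of such correlation-based jitter determination can be sketched in a few lines: estimate the frame-to-frame shift as the peak of the cross-correlation against a reference pattern. This is a generic FFT-based sketch on synthetic data, not the W7-X pipeline; the paper's iterative two-step analysis with an individually defined reference pattern is reduced here to a single correlation pass.

```python
import numpy as np

def estimate_jitter(reference, frame):
    # cross-correlate via FFT; the correlation peak gives the (dy, dx) shift
    R = np.fft.fft2(reference)
    F = np.fft.fft2(frame)
    corr = np.fft.ifft2(F * np.conj(R)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                          # stand-in reference pattern
frame = np.roll(ref, shift=(3, -2), axis=(0, 1))    # simulated jitter of (3, -2) pixels
shift = estimate_jitter(ref, frame)
```

In a real pipeline the estimated (dy, dx) would be used to re-register each IR frame before extracting surface-temperature distributions.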

  2. An adaptive feedback controller for transverse angle and position jitter correction in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1993-01-01

It is desired to design a position and angle jitter control system for pulsed linear accelerators that will increase the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse or pulse-to-pulse correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by over two and a half times. The system also increased and stabilized the correction at high frequencies, a typical problem with standard feedback systems.
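A toy sketch of interpulse (pulse-to-pulse) feedback helps fix ideas: each macropulse's average position is measured, a correction is applied before the next pulse, and a crude gain-adaptation rule stands in for the paper's adaptive gain-phase controller. All signals and constants here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pulses = 500
k = np.arange(n_pulses)
# slowly varying beam-position disturbance plus pulse-to-pulse noise (arbitrary units)
drift = 0.4 * np.sin(2 * np.pi * k / 300.0) + 0.001 * k
noise = rng.normal(0.0, 0.02, n_pulses)

correction, gain, mu = 0.0, 0.5, 0.01   # feedback state, loop gain, adaptation rate
residuals = []
for i in range(n_pulses):
    measured = drift[i] + noise[i] - correction   # macropulse-averaged position error
    residuals.append(measured)
    correction += gain * measured                 # interpulse (pulse-to-pulse) update
    # crude stand-in for the adaptive controller: raise the gain while errors persist
    gain = min(1.0, gain + mu * abs(measured))

rms_corrected = float(np.sqrt(np.mean(np.square(residuals[n_pulses // 2:]))))
rms_uncorrected = float(np.std(drift[n_pulses // 2:] + noise[n_pulses // 2:]))
```

The residual after the loop settles is set by the per-pulse change of the disturbance, which is why pulse-to-pulse schemes struggle with fast jitter unless the gain-phase response is adapted.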

  3. An adaptive feedback controller for transverse angle and position jitter correction in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1992-01-01

It is desired to design a position and angle jitter control system for pulsed linear accelerators that will increase the accuracy of correction over that achieved by currently used standard feedback jitter control systems. Interpulse or pulse-to-pulse correction is performed using the average value of each macropulse. The configuration of such a system resembles that of a standard feedback correction system with the addition of an adaptive controller that dynamically adjusts the gain-phase contour of the feedback electronics. The adaptive controller makes changes to the analog feedback system between macropulses. A simulation of such a system using real measured jitter data from the Stanford Linear Collider was shown to decrease the average rms jitter by over two and a half times. The system also increased and stabilized the correction at high frequencies, a typical problem with standard feedback systems.

  4. Peripheral refractive correction and automated perimetric profiles.

    Science.gov (United States)

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated on a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degree, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then undertaken at these eccentricities (stimulus sizes 0 and III) with and without the appropriate peripheral refractive correction using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  5. Jitter-correction for IR/UV-XUV pump-probe experiments at the FLASH free-electron laser

    International Nuclear Information System (INIS)

    Savelyev, Evgeny; Boll, Rebecca; Bomme, Cedric; Schirmel, Nora; Redlin, Harald

    2017-01-01

In pump-probe experiments employing a free-electron laser (FEL) in combination with a synchronized optical femtosecond laser, the arrival-time jitter between the FEL pulse and the optical laser pulse often severely limits the temporal resolution that can be achieved. Here, we present a pump-probe experiment on the UV-induced dissociation of 2,6-difluoroiodobenzene (C6H3F2I) molecules performed at the FLASH FEL that takes advantage of recent upgrades of the FLASH timing and synchronization system to obtain high-quality data that are not limited by the FEL arrival-time jitter. In particular, we discuss in detail the necessary data analysis steps and describe the origin of the time-dependent effects in the yields and kinetic energies of the fragment ions that we observe in the experiment.
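The essence of such a correction is to sort each shot by its measured pump-probe delay (nominal stage delay plus the per-shot arrival-time measurement) rather than by the nominal delay alone. The sketch below is generic and synthetic; the delay scales, the sigmoidal transient and the availability of a per-shot arrival-time measurement are assumptions for illustration, not FLASH specifics.

```python
import numpy as np

rng = np.random.default_rng(2)
n_shots = 2000
nominal = rng.choice(np.arange(-200, 201, 50), size=n_shots)   # stage delay (fs)
arrival_jitter = rng.normal(0.0, 60.0, n_shots)                # measured per-shot jitter (fs)
measured_delay = nominal + arrival_jitter                      # jitter-corrected delay axis

# the physics responds to the actual (measured) delay: a step-like transient
signal = 1.0 / (1.0 + np.exp(-measured_delay / 30.0)) + rng.normal(0, 0.05, n_shots)

# histogram shots by corrected delay instead of nominal delay
bins = np.arange(-250, 251, 25)
idx = np.digitize(measured_delay, bins)
delay_curve = [signal[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)]
```

Binning by the corrected delay recovers a transient whose sharpness is set by the bin width and the monitor resolution rather than by the raw arrival-time jitter.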

  6. Automation of one-loop QCD corrections

    CERN Document Server

    Hirschi, Valentin; Frixione, Stefano; Garzelli, Maria Vittoria; Maltoni, Fabio; Pittau, Roberto

    2011-01-01

    We present the complete automation of the computation of one-loop QCD corrections, including UV renormalization, to an arbitrary scattering process in the Standard Model. This is achieved by embedding the OPP integrand reduction technique, as implemented in CutTools, into the MadGraph framework. By interfacing the tool so constructed, which we dub MadLoop, with MadFKS, the fully automatic computation of any infrared-safe observable at the next-to-leading order in QCD is attained. We demonstrate the flexibility and the reach of our method by calculating the production rates for a variety of processes at the 7 TeV LHC.

  7. Femtosecond resolution timing jitter correction on a TW scale Ti:sapphire laser system for FEL pump-probe experiments.

    Science.gov (United States)

    Csatari Divall, Marta; Mutter, Patrick; Divall, Edwin J; Hauri, Christoph P

    2015-11-16

Intense ultrashort pulse lasers are used more and more for fs-resolution pump-probe experiments at large-scale facilities, such as free-electron lasers (FELs). Measurement of the arrival time of the laser pulses, and stabilization to the machine or other sub-systems at the target, is crucial for high time-resolution measurements. In this work we report on a single-shot, spectrally resolved, non-collinear cross-correlator with sub-fs resolution. With feedback applied, we keep the output of the TW-class Ti:sapphire amplifier chain in time with the seed oscillator at the ~3 fs RMS level for several hours. This is well below the typical pulse duration used at FELs and supports fs-resolution pump-probe experiments. Short-term jitter and long-term timing drift measurements are presented. Applicability to other wavelengths and integration into the timing infrastructure of the FEL are also covered to show the full potential of the device.

  8. Automated NLO QCD corrections with WHIZARD

    International Nuclear Information System (INIS)

Weiss, Christian; Chokoufe Nejad, Bijan; Reuter, Juergen; Kilian, Wolfgang

    2015-10-01

    We briefly discuss the current status of NLO QCD automation in the Monte Carlo event generator WHIZARD. The functionality is presented for the explicit study of off-shell top quark production with associated backgrounds at a lepton collider.

  9. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks make extensive use of highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. In this work an automated general temperature correction method was developed by adapting temperature correction algorithms previously developed for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a
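A minimal version of such a correction can be sketched as a regression of the raw sensor reading against soil temperature relative to a reference temperature. The sensitivity coefficient, reference temperature and synthetic data below are illustrative assumptions, not the paper's calibrated algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24 * 30)                                   # one month of hourly data
soil_temp = 15.0 + 10.0 * np.sin(2 * np.pi * hours / 24.0)   # diurnal cycle (deg C)
true_swc = 0.25 + 0.0005 * hours / 24.0                      # slow wetting trend (m3/m3)
# dielectric sensors read high/low with temperature; 0.002 per deg C is invented
measured = true_swc + 0.002 * (soil_temp - 20.0) + rng.normal(0, 0.001, hours.size)

# estimate the temperature sensitivity by least squares against a 25 deg C reference
A = np.column_stack([soil_temp - 25.0, np.ones_like(soil_temp)])
k, _ = np.linalg.lstsq(A, measured, rcond=None)[0]
corrected = measured - k * (soil_temp - 25.0)
```

After subtracting the fitted temperature component, the corrected series should no longer track the diurnal temperature cycle, leaving only the hydrological signal.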

  10. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to the baseline correction of other analytical instrument signals, such as IR spectra and chromatograms.
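The idea of iterative morphological baseline estimation can be sketched with a flat structuring element: repeated grey-scale opening (erosion then dilation) strips peaks while tracking the slowly varying baseline. The structuring-element half-width and the synthetic spectrum below are illustrative; the paper's adaptive element selection is not reproduced.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(x, w):
    # grey-scale erosion with a flat structuring element of half-width w
    return np.min(sliding_window_view(np.pad(x, w, mode="edge"), 2 * w + 1), axis=1)

def dilate(x, w):
    return np.max(sliding_window_view(np.pad(x, w, mode="edge"), 2 * w + 1), axis=1)

def baseline_morph(spectrum, w=40, n_iter=3):
    est = spectrum.copy()
    for _ in range(n_iter):
        opened = dilate(erode(est, w), w)   # opening removes peaks narrower than ~2w
        est = np.minimum(est, opened)       # baseline never rises above the data
    return est

x = np.linspace(0.0, 1.0, 1000)
baseline = 2.0 + 0.5 * x                                    # drifting baseline
peaks = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((x - 0.6) / 0.02) ** 2)
spectrum = baseline + peaks
est = baseline_morph(spectrum)
corrected = spectrum - est
```

Because the estimate is clamped below the data at every iteration, the corrected spectrum stays non-negative by construction.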

  11. Localisation of beam offset jitter sources at ATF2

    CERN Document Server

    Pfingstner, J; Patecki, M; Schulte, D; Tomás, R

    2014-01-01

For the commissioning and operation of modern particle accelerators, automated error detection and diagnostics methods are becoming increasingly important. In this paper, we present two such methods, which are capable of localising sources of beam offset jitter with a combination of correlation studies and so-called degree-of-freedom plots. The methods were applied to the ATF2 beam line at KEK, where one of the major goals is the reduction of the beam offset jitter. Results of this localisation are shown in this paper. A big advantage of the presented methods is their high robustness, especially to varying optics parameters. Therefore, we believe that the developed beam offset jitter localisation methods can be easily applied to other accelerators.
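A toy version of correlation-based source localisation: if the pulse-to-pulse orbit jitter is dominated by one source, its spatial signature across the beam position monitors (BPMs) is the dominant principal component of the readings, and correlating that signature with each candidate's modelled response singles out the source. The response matrix, noise level and dimensions below are invented for illustration; the degree-of-freedom plots of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bpms, n_sources, n_pulses = 30, 10, 400
R = rng.normal(0.0, 1.0, (n_bpms, n_sources))   # modelled BPM response to each source

true_src = 3
amps = rng.normal(0.0, 1.0, n_pulses)           # pulse-to-pulse jitter of the real source
readings = np.outer(R[:, true_src], amps) + rng.normal(0.0, 0.1, (n_bpms, n_pulses))

# dominant spatial pattern of the measured orbit jitter (first principal component)
u, s, vt = np.linalg.svd(readings - readings.mean(axis=1, keepdims=True))
pattern = u[:, 0]
# correlate that pattern with every candidate response column
corrs = [abs(np.corrcoef(pattern, R[:, j])[0, 1]) for j in range(n_sources)]
located = int(np.argmax(corrs))
```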

  12. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)]

    2016-07-01

I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents, and for new models new rules for off-shell currents emerge, which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained from a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  13. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

Baseline correction is a very important part of pre-processing. The baseline in a spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results. Therefore, these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, a fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through continuous wavelet transformation and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
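The feature-point-plus-interpolation idea can be sketched without the wavelet machinery: flag points that rise above a local median (a crude stand-in for the continuous wavelet transform feature detection) and rebuild the baseline across the flagged segments by interpolation. All signals and thresholds here are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.linspace(0.0, 1.0, 800)
baseline = 1.0 + 0.8 * np.sin(1.5 * x)
peak = 2.0 * np.exp(-((x - 0.5) / 0.015) ** 2)
spectrum = baseline + peak

# crude feature detection: flag points with high contrast against a wide local median
w = 40
padded = np.pad(spectrum, w, mode="edge")
local_med = np.median(sliding_window_view(padded, 2 * w + 1), axis=1)
peak_mask = (spectrum - local_med) > 0.1

# segment interpolation: rebuild the baseline across the flagged region
base_est = spectrum.copy()
base_est[peak_mask] = np.interp(x[peak_mask], x[~peak_mask], spectrum[~peak_mask])
corrected = spectrum - base_est
```

Interpolating only across detected peak segments preserves the peak shapes exactly while following the baseline everywhere else.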

  14. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses.

    Science.gov (United States)

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB. Distance correction without additional near correction is therefore to be recommended.

  15. On the use of the autocorrelation and covariance methods for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1994-01-01

    It is desired to design a predictive feedforward transverse jitter control system to control both angle and position jitter in pulsed linear accelerators. Such a system will increase the accuracy and bandwidth of correction over that of currently available feedback correction systems. Intrapulse correction is performed. An offline process actually ''learns'' the properties of the jitter, and uses these properties to apply correction to the beam. The correction weights calculated offline are downloaded to a real-time analog correction system between macropulses. Jitter data were taken at the Los Alamos National Laboratory (LANL) Ground Test Accelerator (GTA) telescope experiment at Argonne National Laboratory (ANL). The experiment consisted of the LANL telescope connected to the ANL ZGS proton source and linac. A simulation of the correction system using this data was shown to decrease the average rms jitter by a factor of two over that of a comparable standard feedback correction system. The system also improved the correction bandwidth
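The "learning" step amounts to solving the normal equations for a set of prediction weights from the jitter's own auto- and cross-covariances, then applying the predicted value as a feedforward correction. Below is a minimal sketch on synthetic correlated (AR(2)) jitter; the process order, coefficients and lag count are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 5000, 4
# synthetic correlated jitter: an AR(2) process, so past pulses predict the next
jitter = np.zeros(n)
for k in range(2, n):
    jitter[k] = 1.6 * jitter[k - 1] - 0.7 * jitter[k - 2] + rng.normal(0.0, 0.1)

# offline "learning": least-squares prediction weights over the last p samples
X = np.column_stack([jitter[p - i - 1 : n - i - 1] for i in range(p)])
y = jitter[p:]
w = np.linalg.lstsq(X, y, rcond=None)[0]

pred = X @ w                  # feedforward prediction of the upcoming pulse's jitter
residual = y - pred           # what remains after applying the predicted correction
```

For strongly correlated jitter the residual approaches the innovation noise of the process, which is how a feedforward predictor can beat a causal feedback loop in bandwidth.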

  16. On the use of the autocorrelation and covariance methods for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1993-01-01

It is desired to design a predictive feedforward transverse jitter control system to control both angle and position jitter in pulsed linear accelerators. Such a system will increase the accuracy and bandwidth of correction over that of currently available feedback correction systems. Intrapulse correction is performed. An offline process actually ''learns'' the properties of the jitter, and uses these properties to apply correction to the beam. The correction weights calculated offline are downloaded to a real-time analog correction system between macropulses. Jitter data were taken at the Los Alamos National Laboratory (LANL) Ground Test Accelerator (GTA) telescope experiment at Argonne National Laboratory (ANL). The experiment consisted of the LANL telescope connected to the ANL ZGS proton source and linac. A simulation of the correction system using this data was shown to decrease the average rms jitter by a factor of two over that of a comparable standard feedback correction system. The system also improved the correction bandwidth

  17. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
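The final conversion step, a piecewise calibration curve from CT numbers to linear attenuation coefficients at 511 keV, is commonly implemented as a bilinear mapping. The breakpoint and slopes below are typical illustrative values, not the calibration used in the paper.

```python
def ct_to_mu_511(hu):
    """Piecewise ("bilinear") CT-number to linear-attenuation conversion at 511 keV.
    The slopes and the water breakpoint are illustrative, not the paper's values."""
    mu_water = 0.096                       # cm^-1 for water at 511 keV
    if hu <= 0:                            # air/water mixture region
        return mu_water * (1.0 + hu / 1000.0)
    return mu_water + hu * 5.64e-5         # water/bone mixture region, shallower slope

mus = [ct_to_mu_511(hu) for hu in (-1000.0, 0.0, 1000.0)]
```

The shallower slope above 0 HU is exactly why misclassified oral contrast matters: pixels mapped onto the bone branch receive too large an attenuation coefficient, overcorrecting the PET data.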

  18. Spacecraft Jitter Attenuation Using Embedded Piezoelectric Actuators

    Science.gov (United States)

    Belvin, W. Keith

    1995-01-01

    Remote sensing from spacecraft requires precise pointing of measurement devices in order to achieve adequate spatial resolution. Unfortunately, various spacecraft disturbances induce vibrational jitter in the remote sensing instruments. The NASA Langley Research Center has performed analysis, simulations, and ground tests to identify the more promising technologies for minimizing spacecraft pointing jitter. These studies have shown that the use of smart materials to reduce spacecraft jitter is an excellent match between a maturing technology and an operational need. This paper describes the use of embedding piezoelectric actuators for vibration control and payload isolation. In addition, recent advances in modeling, simulation, and testing of spacecraft pointing jitter are discussed.

  19. Automated aberration correction of arbitrary laser modes in high numerical aperture systems

    OpenAIRE

    Hering, Julian; Waller, Erik H.; Freymann, Georg von

    2016-01-01

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture...

  20. Jitter reduction in Differentiated Services (Diffserv) networks

    NARCIS (Netherlands)

    Karagiannis, Georgios; Rexhepi, Vlora

    2001-01-01

    A method and a computer program for reducing jitter in IP packet transmission in a Diffserv network having ingress and egress Border Routers and using premium service, expedited forwarding and source route option, recognize incoming packets which have firm jitter requirements. The program verifies

  1. Jitter reduction in Differentiated Services (Diffserv) networks

    NARCIS (Netherlands)

    Karagiannis, Georgios; Rexhepi, Vlora

    2005-01-01

    A method and a computer program for reducing jitter in IP packet transmission in a Diffserv network having ingress and egress Border Routers and using premium service, expedited forwarding and source route option, recognize incoming packets which have firm jitter requirements. The program verifies

  2. Framework of Jitter Detection and Compensation for High Resolution Satellites

    Directory of Open Access Journals (Sweden)

    Xiaohua Tong

    2014-05-01

Attitude jitter is a common phenomenon in the application of high resolution satellites, which may result in large errors in geo-positioning and mapping accuracy. Therefore, it is critical to detect and compensate for attitude jitter to explore the full geometric potential of high resolution satellites. In this paper, a framework of jitter detection and compensation for high resolution satellites is proposed and some preliminary investigation is performed. Three methods for jitter detection are presented as follows: (1) the first is based on multispectral images, using the parallax between two different bands in the image; (2) the second is based on stereo images, using rational polynomial coefficients (RPCs); (3) the third is based on panchromatic images, employing orthorectification processing. Based on the calculated parallax maps, the frequency and amplitude of the detected jitter are obtained. Subsequently, two approaches for jitter compensation are conducted: (1) the first is to conduct the compensation on the image, which uses the derived parallax observations for resampling; (2) the second is to conduct the compensation on the attitude data, which treats the influence of jitter on attitude as a correction of charge-coupled device (CCD) viewing angles. Experiments with images from several satellites, such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), LRO (Lunar Reconnaissance Orbiter) and ZY-3 (ZiYuan-3), demonstrate the promising performance and feasibility of the proposed framework.
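The parallax-based detection can be illustrated with a one-dimensional sketch: two bands image the same ground a fixed time apart, so their difference samples the attitude jitter at that lag, and a spectrum of the parallax reveals the jitter frequency. The line rate, band lag and jitter parameters below are invented.

```python
import numpy as np

# attitude jitter as a sinusoid sampled at each image line
line_rate = 1000.0             # lines per second (assumed)
t = np.arange(4096) / line_rate
f_jit, amp = 1.5, 0.8          # jitter frequency (Hz) and amplitude (pixels)
jitter = amp * np.sin(2 * np.pi * f_jit * t)

dt = 0.05                      # time lag between the two spectral bands (s, assumed)
# parallax between bands = jitter now minus jitter one band-lag later
parallax = jitter - amp * np.sin(2 * np.pi * f_jit * (t + dt))

# the parallax spectrum peaks at the jitter frequency
spec = np.abs(np.fft.rfft(parallax - parallax.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / line_rate)
f_est = freqs[np.argmax(spec)]
```

Note the parallax amplitude is 2*amp*sin(pi*f_jit*dt), so the band lag also scales how visible a given jitter frequency is in the parallax map.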

  3. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses

    Directory of Open Access Journals (Sweden)

    Kazunori Hirasawa

    2017-10-01

AIM: To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. METHODS: Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). RESULTS: The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB. CONCLUSION: Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicated that, when the dome-shaped visual field test is performed with eyes with large pupils and wearing refractive multifocal CLs, distance correction without additional near correction is to be recommended.

  4. Automated aberration correction of arbitrary laser modes in high numerical aperture systems.

    Science.gov (United States)

    Hering, Julian; Waller, Erik H; Von Freymann, Georg

    2016-12-12

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for amplitude and phase of the pupil function our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography also applications like optical tweezers and microscopy might benefit from the method presented.

  5. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    Science.gov (United States)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. 
We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification
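The protocol's key move, learning the baseline in the background subregions and predicting it under the analyte band, can be sketched with a penalized-least-squares (Whittaker-type) smoother standing in for smoothing splines; weights near zero mark the analyte subregion where the baseline is predicted rather than fitted. All signals and parameters below are illustrative, not the IMPROVE data or the paper's tuned smoother.

```python
import numpy as np

def whittaker(y, lam, weights):
    # penalized least squares: minimize ||W^(1/2) (y - z)||^2 + lam * ||D2 z||^2
    n = y.size
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
    W = np.diag(weights)
    return np.linalg.solve(W + lam * D.T @ D, W @ y)

x = np.linspace(0.0, 1.0, 600)
background = 0.2 + 0.3 * x + 0.1 * np.sin(6.0 * x)   # smooth PTFE-like baseline
analyte = 0.8 * np.exp(-((x - 0.5) / 0.03) ** 2)     # absorbance band
spectrum = background + analyte

weights = np.ones_like(x)
weights[np.abs(x - 0.5) < 0.12] = 1e-6   # analyte subregion: predict, don't fit
base_est = whittaker(spectrum, lam=1e4, weights=weights)
corrected = spectrum - base_est
```

Down-weighting the analyte subregion lets the roughness penalty bridge it smoothly, which is the same role the background/analyte subregion definition plays in the spline protocol.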

  6. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

The paper considers the task of generating the requirements for, and creating, a calibration target for automated microscopy systems (AMS) of biomedical specimens to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particular color correction method to be useful for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. A study of the influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was performed. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
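A sketch of the winning approach, 3rd-order polynomial color correction fitted by conditioned (ridge-regularized) least squares, on synthetic patches; the camera model, regularization parameter and patch count are invented, and the Kodak target data are not used.

```python
import numpy as np

rng = np.random.default_rng(5)
ref = rng.random((300, 3))                    # "true" patch colors, scaled to [0, 1]
# invented camera model: channel crosstalk plus a mild gamma-like nonlinearity
M = np.array([[0.90, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.20, 0.85]])
cam = (ref @ M.T) ** 1.1                      # what the camera records

def poly_terms(rgb):
    # 3rd-order polynomial feature set built from the camera RGB coordinates
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2,
                            r**3, g**3, b**3, r * g * b])

X = poly_terms(cam)
lam = 1e-3                                    # regularization parameter, chosen ad hoc
# conditioned (ridge) least squares: (X^T X + lam I)^{-1} X^T Y, all channels at once
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ ref)
rmse = float(np.sqrt(np.mean((X @ W - ref) ** 2)))
```

The ridge term plays the role of the regularization parameter the paper chose experimentally: it keeps the high-order coefficients stable when the calibration fields span the color space unevenly.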

  7. Text recognition and correction for automated data collection by mobile devices

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
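
    As a toy illustration of knowledge-based correction of OCR output (the paper does not publish its exact rules, so the confusion table and price pattern below are assumptions), one can resolve common OCR character confusions only in tokens that should be prices:

```python
import re

# Common OCR character confusions in numeric tokens (illustrative table,
# not the actual rules of the paper's KBC method)
CONFUSIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1", "S": "5", "B": "8"})

def correct_price_token(token):
    """Apply confusion substitutions only if the result then looks like a price."""
    candidate = token.translate(CONFUSIONS)
    return candidate if re.fullmatch(r"\d+\.\d{2}", candidate) else token

def correct_receipt_line(line):
    return " ".join(correct_price_token(t) for t in line.split())

corrected = correct_receipt_line("MILK 1.B9")  # "1.B9" becomes "1.89"; "MILK" is left alone
```

    Restricting substitutions to tokens that validate against domain knowledge (here, a price format) is what distinguishes such knowledge-based correction from blind character replacement.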

  8. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    Science.gov (United States)

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in a mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate the regional distribution volume ratio (DVR) for FDDNP scans before and after MC. For FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P < 0.05) in the derived parameters before and after MC. In conclusion, automated MC of dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers.
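
    The study's registration is performed within the PET/CT reconstruction chain; as a much-simplified stand-in, a purely translational misalignment between two frames can be estimated by FFT phase correlation (illustrative only, not the authors' algorithm):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) circular shift s such that mov = roll(ref, s)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    R = F_mov * np.conj(F_ref)
    R /= np.maximum(np.abs(R), 1e-12)          # keep only the phase of the cross-spectrum
    corr = np.fft.ifft2(R).real                # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                 # map wrapped indices to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
mov = np.roll(ref, (3, -2), axis=(0, 1))       # synthetic "head movement"
shift = phase_correlation_shift(ref, mov)      # recovers (3, -2)
```

    Real head motion also involves rotations (as the x-axis rotations reported above show), so a full rigid 6-degree-of-freedom registration would be needed in practice.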

  9. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    Science.gov (United States)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm-1 sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3x5 km/TES-pixel) but has lower spectral resolution (8 surface sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the

  10. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
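
    A minimal sketch of a Whittaker-smoother-based baseline estimate is given below, using the common asymmetric least squares variant with illustrative parameters; the paper's modified smoother and its simultaneous phase optimization are not reproduced here:

```python
import numpy as np

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline via a weighted Whittaker smoother:
    z = argmin sum_i w_i (y_i - z_i)^2 + lam * ||D2 z||^2,
    where points above the current baseline (peaks) get small weight p."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)     # second-difference operator
    P = lam * (D.T @ D)
    w = np.ones(n)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + P, w * y)
        w = np.where(y > z, p, 1 - p)       # down-weight points above the baseline
    return z

# Synthetic spectrum: slow quadratic baseline plus two narrow peaks
x = np.arange(200.0)
baseline = 1e-4 * (x - 100.0) ** 2
peaks = 5.0 * np.exp(-0.5 * ((x - 60.0) / 2.0) ** 2) \
      + 5.0 * np.exp(-0.5 * ((x - 140.0) / 2.0) ** 2)
y = baseline + peaks
z = asls_baseline(y)                        # estimated baseline, largely ignoring peaks
```

    The dense solve is fine for short spectra; for high-resolution NMR data a sparse banded solver would be used instead.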

  11. Software-controlled, highly automated intrafraction prostate motion correction with intrafraction stereographic targeting: System description and clinical results

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C. J. de; Rajan, Vinayakrishnan; Dirkx, Maarten L. P.; Os, Marjolein J. H. van; Incrocci, Luca; Heijmen, Ben J. M.

    2012-01-01

    Purpose: A new system for software-controlled, highly automated correction of intrafraction prostate motion, 'intrafraction stereographic targeting' (iSGT), is described and evaluated. Methods: At our institute, daily prostate positioning is routinely performed at the start of treatment using stereographic targeting (SGT). iSGT was implemented by extending the SGT software to facilitate fast and accurate intrafraction motion corrections with minimal user interaction. iSGT entails megavoltage (MV) image acquisitions with the first segment of selected IMRT beams, automatic registration of implanted markers, and remote couch repositioning to correct for intrafraction motion above a predefined threshold, prior to delivery of the remaining segments. For a group of 120 patients, iSGT with corrections for two nearly lateral beams was evaluated in terms of workload and impact on effective intrafraction displacements in the sagittal plane. Results: The SDs of the systematic (Σ) and random (σ) displacements relative to the planning CT, measured directly after the initial SGT setup correction, were reduced by iSGT to effective values < 0.7 mm, with corrections required in 82.4% of the fractions. Because iSGT is highly automated, the extra time added by iSGT is <30 s if a correction is required. Conclusions: Without increasing imaging dose, iSGT successfully reduces intrafraction prostate motion with minimal workload and increase in fraction time. An action level of 2 mm is recommended.

  12. Space Weather Magnetometer Set with Automated AC Spacecraft Field Correction for GEO-KOMPSAT-2A

    Science.gov (United States)

    Auster, U.; Magnes, W.; Delva, M.; Valavanoglou, A.; Leitner, S.; Hillenmaier, O.; Strauch, C.; Brown, P.; Whiteside, B.; Bendyk, M.; Hilgers, A.; Kraft, S.; Luntama, J. P.; Seon, J.

    2016-05-01

    Monitoring the solar wind conditions, in particular its magnetic field (the interplanetary magnetic field) ahead of the Earth, is essential for accurate and reliable space weather forecasting. The magnetic condition of the spacecraft itself is a key parameter for the successful performance of the onboard magnetometer. In practice, a negligible spacecraft magnetic field cannot always be guaranteed, and magnetic sources on the spacecraft interfere with the natural magnetic field measured by the space magnetometer. The presented "ready-to-use" Service Oriented Spacecraft Magnetometer (SOSMAG) is developed for use on any satellite implemented without a magnetic cleanliness programme. It enables detection of AC variations of the spacecraft field on a time scale suitable for distinguishing the magnetic field variations relevant to space weather phenomena, such as a sudden increase in the interplanetary field or a southward turning. This is achieved through the use of dual fluxgate magnetometers on a short boom (1 m) and two additional AMR sensors on the spacecraft body, which monitor potential AC disturbers. The measurements of the latter sensors enable an automated correction of the AC signal contributions from the spacecraft in the final magnetic vector. After successful development and testing of the EQM prototype, a flight model (FM) is being built for the Korean satellite GEO-KOMPSAT-2A, with launch foreseen in 2018.

  13. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    Retrieving the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters involves evaluating a complex logarithmic function. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of the complex logarithm. Special attention must therefore be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt jumps near the resonant frequency region of the EMMs concerned, thereby ensuring the continuity of the effective optical properties of lossy metamaterials. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
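
    The branch ambiguity of the complex logarithm can be illustrated with a simple phase-unwrapping sweep; this is a generic stand-in for the phase-tracing step (the tone and sweep parameters are assumptions, not the paper's APC procedure):

```python
import numpy as np

# The imaginary part of log(T) is only known modulo 2*pi; across a frequency
# sweep, continuity is restored by unwrapping successive phase samples.
freq = np.linspace(1.0, 10.0, 400)       # illustrative frequency sweep
true_phase = 0.9 * freq                  # accumulated phase exceeds pi during the sweep
T = np.exp(1j * true_phase)              # transmission-like complex quantity
wrapped = np.angle(T)                    # principal value in (-pi, pi]: shows 2*pi jumps
unwrapped = np.unwrap(wrapped)           # enforce continuity between samples
```

    Near sharp resonances the true phase can change by more than pi between samples, which is why the paper needs a dedicated correction algorithm rather than plain sample-to-sample unwrapping.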

  14. Automation of NLO QCD and EW corrections with Sherpa and Recola

    Energy Technology Data Exchange (ETDEWEB)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu [Universitaet Wuerzburg, Institut fuer Theoretische Physik und Astrophysik, Wuerzburg (Germany); Braeuer, Stephan; Schumann, Steffen [Georg-August Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Thompson, Jennifer M. [Universitaet Heidelberg, Institut fuer Theoretische Physik, Heidelberg (Germany)

    2017-07-15

    This publication presents the combination of the one-loop matrix-element generator Recola with the multipurpose Monte Carlo program Sherpa. Since both programs are highly automated, the resulting Sherpa +Recola framework allows for the computation of - in principle - any Standard Model process at both NLO QCD and EW accuracy. To illustrate this, three representative LHC processes have been computed at NLO QCD and EW: vector-boson production in association with jets, off-shell Z-boson pair production, and the production of a top-quark pair in association with a Higgs boson. In addition to fixed-order computations, when considering QCD corrections, all functionalities of Sherpa, i.e. particle decays, QCD parton showers, hadronisation, underlying events, etc. can be used in combination with Recola. This is demonstrated by the merging and matching of one-loop QCD matrix elements for Drell-Yan production in association with jets to the parton shower. The implementation is fully automatised, thus making it a perfect tool for both experimentalists and theorists who want to use state-of-the-art predictions at NLO accuracy. (orig.)

  15. Voxel-based morphometry and automated lobar volumetry: The trade-off between spatial scale and statistical correction

    Science.gov (United States)

    Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.

    2011-01-01

    Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660
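
    The scale trade-off described above can be made concrete with an idealized back-of-the-envelope model (iid voxel noise, deficit fully contained in the ROI — assumptions for illustration only): averaging a fixed focal deficit over a larger ROI dilutes the mean difference faster than it shrinks the standard error, so the expected z-score falls as 1/sqrt(V):

```python
import numpy as np

def roi_z(deficit, k_voxels, sigma, roi_voxels, n_subjects):
    """Expected group-difference z-score for a focal deficit of k_voxels voxels
    averaged over an ROI of roi_voxels voxels (iid-noise idealization)."""
    mean_diff = deficit * k_voxels / roi_voxels       # dilution by ROI averaging
    se = sigma / np.sqrt(roi_voxels * n_subjects)     # SE of the ROI mean difference
    return mean_diff / se                             # = deficit*k*sqrt(n)/(sigma*sqrt(V))

z_small_roi = roi_z(deficit=0.2, k_voxels=50, sigma=1.0, roi_voxels=500, n_subjects=20)
z_lobar_roi = roi_z(deficit=0.2, k_voxels=50, sigma=1.0, roi_voxels=50000, n_subjects=20)
```

    A 100-fold larger ROI reduces the expected z-score by a factor of 10 in this model, which is the sense in which lobar-scale averaging costs more sensitivity than VBM's multiple-comparison correction.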

  16. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  17. Low jitter RF distribution system

    Science.gov (United States)

    Wilcox, Russell; Doolittle, Lawrence; Huang, Gang

    2012-09-18

    A timing signal distribution system includes an optical frequency stabilized laser signal amplitude modulated at an rf frequency. A transmitter box transmits a first portion of the laser signal and receives a modified optical signal, and outputs a second portion of the laser signal and a portion of the modified optical signal. A first optical fiber carries the first laser signal portion and the modified optical signal, and a second optical fiber carries the second portion of the laser signal and the returned modified optical signal. A receiver box receives the first laser signal portion, shifts its frequency, outputs the modified optical signal, and outputs an electrical signal on the basis of the laser signal. A detector at the end of the second optical fiber outputs a signal based on the modified optical signal. An optical delay sensing circuit outputs a data signal based on the detected modified optical signal. An rf phase detect-and-correct circuit outputs a signal corresponding to a phase-stabilized rf signal based on the data signal and the frequency received from the receiver box.

  18. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    Science.gov (United States)

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. 
We combine unsupervised image

  19. ENERGY CORRECTION FOR HIGH POWER PROTON/H MINUS LINAC INJECTORS.

    Energy Technology Data Exchange (ETDEWEB)

    RAPARIA, D.; LEE, Y.Y.; WEI, J.

    2005-05-16

    High-energy (> GeV) proton/H-minus linac injectors suffer from energy jitter caused by RF amplitude and phase instability. In high-power injectors especially, this energy jitter results in beam losses of more than 1 W/m, beyond the limit required for hands-on maintenance. Depending upon the requirements of the next accelerator in the chain, this energy jitter may or may not need to be corrected. This paper discusses the sources of this energy jitter and correction schemes, with specific examples.

  20. CONCEPTUAL STRUCTURAL-LOGIC DIAGRAM FOR AUTOMATING EXPERT STUDIES ON THE CORRECTNESS OF CALCULATION OF THE TAX ON PROFIT OF ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Andrey N. Ishchenko

    2014-01-01

    This article considers the possibility of automating expert studies on the question of the correctness of the calculation of the tax on profit of organizations. The problems of formalizing expert research in this field are considered, and the structure of the expert conclusion is specified. The author proposes a conceptual structural-logic diagram for automating expert research in this area.

  1. GBTX Temperature impact on Jitter Implementation on VLDB

    CERN Document Server

    Pecoraro, Cyril

    2015-01-01

    This report was written within the framework of the CERN Summer Student Program. It focuses on jitter measurements over temperature of a GBTx ASIC mounted on a VLDB board. A complete measurement setup was built around a climatic chamber, and various configurations of the chip were tested to characterize skew and cycle-to-cycle jitter.

  2. On the use of iterative techniques for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1995-01-01

    It is possible to use feedforward predictive control for transverse position and trajectory-angle jitter correction. The control procedure is straightforward, but creation of the predictive filter is not as obvious. The two processes tested were the least mean squares (LMS) and Kalman filter methods. The controller parameters calculated offline are downloaded to a real-time analog correction system between macropulses. These techniques worked well for both interpulse (pulse-to-pulse) correction and intrapulse (within a pulse) correction, with the Kalman filter method being the clear winner. A simulation based on interpulse data taken at the Stanford Linear Collider showed an improvement factor of almost three in the average rms jitter over standard feedback techniques for the Kalman filter. An improvement factor of over three was found for the Kalman filter on intrapulse data taken at the Los Alamos Meson Physics Facility. The feedforward systems also improved the correction bandwidth. copyright 1995 American Institute of Physics
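
    The LMS branch can be sketched as a one-step-ahead adaptive predictor; the filter order, step size, and synthetic "jitter" signal below are illustrative, not the configuration used in the paper:

```python
import numpy as np

def lms_predict(x, order=4, mu=0.05):
    """One-step-ahead LMS prediction of a jitter time series."""
    w = np.zeros(order)
    pred = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]   # most recent samples first
        pred[n] = w @ u
        e = x[n] - pred[n]         # prediction error drives the adaptation
        w += 2.0 * mu * e * u      # standard LMS weight update
    return pred

rng = np.random.default_rng(4)
x = np.sin(0.2 * np.arange(500)) + 0.05 * rng.standard_normal(500)  # correlated jitter + noise
pred = lms_predict(x)
rms_raw = np.std(x[250:])              # jitter with no correction
rms_resid = np.std(x[250:] - pred[250:])  # jitter left after feedforward subtraction
```

    In a feedforward system the prediction would be applied to the corrector before the next macropulse, so only the residual (prediction error) remains on the beam.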

  3. On the use of iterative techniques for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1994-01-01

    It is possible to use feedforward predictive control for transverse position and trajectory-angle jitter correction. The control procedure is straightforward, but creation of the predictive filter is not as obvious. The two processes tested were the least mean squares (LMS) and Kalman filter methods. The controller parameters calculated offline are downloaded to a real-time analog correction system between macropulses. These techniques worked well for both interpulse (pulse-to-pulse) correction and intrapulse (within a pulse) correction, with the Kalman filter method being the clear winner. A simulation based on interpulse data taken at the Stanford Linear Collider showed an improvement factor of almost three in the average rms jitter over standard feedback techniques for the Kalman filter. An improvement factor of over three was found for the Kalman filter on intrapulse data taken at the Los Alamos Meson Physics Facility. The feedforward systems also improved the correction bandwidth

  4. Effect of jitter on an imaging FTIR spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C. L., LLNL

    1997-04-01

    Line of sight (LOS) jitter produces temporal modulations of the signals detected in the focal plane of a temporally modulated imaging Fourier transform spectrometer. A theoretical treatment of LOS jitter effects is given and compared with the results of measurements with LIFTIRS (the Livermore Imaging Fourier Transform InfraRed Spectrometer). The identification, isolation, quantification and removal of jitter artifacts in hyperspectral imaging data by means of principal components analysis is discussed. The theoretical distribution of eigenvalues expected from principal components analysis is used to determine the level of significance of spatially coherent instrumental artifacts in general, with jitter as a representative example. It is concluded that an imaging FTIR spectrometer is much less seriously impacted by a given LOS jitter level than a non-imaging FTIR spectrometer.
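
    The principal-components idea can be illustrated with a toy data cube in which a spatially coherent artifact pattern, modulated from frame to frame, dominates the leading singular value (synthetic data and dimensions, not LIFTIRS measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_pix = 100, 256
clean = 0.1 * rng.standard_normal((n_frames, n_pix))   # scene/noise term
pattern = np.linspace(-1.0, 1.0, n_pix)                # spatially coherent jitter artifact shape
amps = rng.standard_normal(n_frames)                   # frame-to-frame jitter amplitude
data = clean + np.outer(amps, pattern)

# The coherent artifact dominates the first principal component; project it out.
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
artifact = np.outer(U[:, 0] * s[0], Vt[0])             # rank-1 jitter component
cleaned = data - artifact
err_before = np.mean((data - clean) ** 2)
err_after = np.mean((cleaned - clean) ** 2)
```

    In practice the eigenvalue distribution (as discussed in the abstract) is what tells you how many leading components are significant instrumental artifacts rather than scene variance.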

  5. Timing Jitter Analysis for Clock recovery Circuits Based on an Optoelectronic Phase-Locked Loop (OPLL)

    DEFF Research Database (Denmark)

    Zibar, Darko; Mørk, Jesper; Oxenløwe, Leif Katsuo

    2005-01-01

    Timing jitter of an OPLL-based clock recovery is investigated. We demonstrate how loop gain, input and VCO signal jitter, loop filter bandwidth and loop time delay influence the jitter of the extracted clock signal.

  6. Note: A new method for directly reducing the sampling jitter noise of the digital phasemeter

    Science.gov (United States)

    Liang, Yu-Rong

    2018-03-01

    The sampling jitter noise is a non-negligible noise source of the digital phasemeter used for space gravitational-wave detection missions. This note provides a new method for directly reducing the sampling jitter noise of the digital phasemeter by adding a dedicated signal whose frequency, amplitude, and initial phase are pre-set. In contrast to the phase correction using the pilot tone in the work of Burnett, Gerberding et al., Liang et al., Ales et al., Gerberding et al., and Ware et al. [M.Sc. thesis, Luleå University of Technology, 2010; Classical Quantum Gravity 30, 235029 (2013); Rev. Sci. Instrum. 86, 016106 (2015); Rev. Sci. Instrum. 86, 084502 (2015); Rev. Sci. Instrum. 86, 074501 (2015); and Proceedings of the Earth Science Technology Conference (NASA, USA, 2006)], the new method suppresses the additive noise intrinsically. The experimental results validate that the new method directly reduces the sampling jitter noise without data post-processing and provides the same phase measurement noise level (10^-6 rad/Hz^(1/2) at 0.1 Hz) as the pilot-tone correction.
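
    The underlying scaling can be sketched with an idealized model in which timing jitter dt maps to a phase error 2*pi*f*dt and both tones see the same sampling jitter; the tone frequencies and jitter level are assumed for illustration, and this mimics only the correction principle, not the paper's hardware implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
f_meas, f_ref = 10e6, 2e6                    # measurement tone and dedicated reference tone (assumed)
dt = 1e-12 * rng.standard_normal(1000)       # shared sample-clock timing jitter per sample (s)

# Timing jitter dt produces a phase error of 2*pi*f*dt on a tone of frequency f,
# so a tone with known parameters lets us estimate dt and correct other channels.
phase_err_meas = 2 * np.pi * f_meas * dt
phase_err_ref = 2 * np.pi * f_ref * dt
dt_est = phase_err_ref / (2 * np.pi * f_ref)             # jitter estimated from the dedicated signal
corrected = phase_err_meas - 2 * np.pi * f_meas * dt_est  # residual after correction
```

    Because the phase error scales with tone frequency, the higher-frequency measurement channel suffers proportionally more jitter noise, which is exactly why a known reference signal is useful for removing it.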

  7. Automated mass correction and data interpretation for protein open-access liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Wagner, Craig D; Hall, John T; White, Wendy L; Miller, Luke A D; Williams, Jon D

    2007-02-01

    Characterization of recombinant protein purification fractions and final products by liquid chromatography-mass spectrometry (LC/MS) is requested more frequently each year. A protein open-access (OA) LC/MS system was developed in our laboratory to meet this demand. This paper compares the system that we originally implemented in our facilities in 2003 to the one now in use, and discusses, in more detail, recent enhancements that have improved its robustness, reliability, and data reporting capabilities. The system utilizes instruments equipped with reversed-phase chromatography and an orthogonal accelerated time-of-flight mass spectrometer fitted with an electrospray source. Sample analysis requests are accomplished using a simple form on a web-enabled laboratory information management system (LIMS). This distributed form is accessible from any intranet-connected company desktop computer. Automated data acquisition and processing are performed using a combination of in-house (OA-Self Service, OA-Monitor, and OA-Analysis Engine) and vendor-supplied programs (AutoLynx and OpenLynx) located on acquisition computers and off-line processing workstations. Analysis results are then reported via the same web-based LIMS. Also presented are solutions to problems not addressed on commercially available, small-molecule OA-LC/MS systems. These include automated transformation of mass-to-charge (m/z) spectra to mass spectra and automated data interpretation that considers minor variants to the protein sequence, such as common post-translational modifications (PTMs). Currently, our protein OA-LC/MS platform runs on five LC/MS instruments located in three separate GlaxoSmithKline R&D sites in the US and UK. To date, more than 8000 protein OA-LC/MS samples have been analyzed.
With these user friendly and highly automated OA systems in place, mass spectrometry plays a key role in assessing the quality of recombinant proteins, either produced at our facilities or bought from external

  8. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children' s Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. 
In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected
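The Qp:Qs consistency check described above is a one-line computation; a minimal sketch with made-up flow values (not data from the study):

```python
def qp_qs(mpa_net_flow, ao_net_flow):
    """Ratio of pulmonary (MPA) to systemic (Ao) net flow; ~1 in the absence of shunts."""
    return mpa_net_flow / ao_net_flow

# Hypothetical net flows in mL/beat: a background phase offset inflates the
# MPA measurement, and background-phase correction brings the ratio toward 1
uncorrected = qp_qs(68.0, 60.0)
corrected = qp_qs(62.5, 61.0)
assert abs(corrected - 1.0) < abs(uncorrected - 1.0)
```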

  9. Automated fetal brain segmentation from 2D MRI slices for motion correction.

    Science.gov (United States)

    Keraudren, K; Kuklisova-Murgasova, M; Kyriakopoulou, V; Malamateniou, C; Rutherford, M A; Kainz, B; Hajnal, J V; Rueckert, D

    2014-11-01

    Motion correction is a key element for imaging the fetal brain in-utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is typically performed slice-by-slice using single-shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction methods (SVR). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion-corrected volume of a quality relevant for clinical diagnosis, thus removing the need for manually delineating the contours of the brain before motion correction. Our method automatically generated as a side-product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing.
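The mean Dice score of 93% quoted above measures overlap between two segmentations; a minimal sketch of the computation (illustrative only, not the authors' code):

```python
def dice_score(a, b):
    """Dice coefficient between two binary masks given as sets of voxel indices:
    Dice = 2*|A intersect B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy example: automatic vs. manual "brain" masks on a flattened voxel grid
auto_mask = {1, 2, 3, 4, 5, 6}
manual_mask = {2, 3, 4, 5, 6, 7}
print(dice_score(auto_mask, manual_mask))  # 10/12 ≈ 0.833
```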

  10. Correction

    CERN Multimedia

    2002-01-01

    The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption: Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.)

  11. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  12. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  13. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and was thus in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  14. Efficient Photometry In-Frame Calibration (EPIC) Gaussian Corrections for Automated Background Normalization of Rate-Tracked Satellite Imagery

    Science.gov (United States)

    Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.

    Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames that are obtained to correct for charge coupled device (CCD) dark (thermal) noise and CCD quantum efficiency/optical path vignetting effects, respectively. It is necessary to calibrate for these effects so that the brightness of objects of interest (e.g. stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, an annulus is measured around each detected object, and its contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may be simply summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson-distributed data closely resemble Gaussian data for means greater than 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images using the following technique. 
A deck of approximately 50-100 images is combined by performing an independent median
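The bias-subtraction and standard-deviation-division step can be sketched as a per-pixel normalization across a deck of frames (a simplified illustration of the idea, not the EPIC implementation):

```python
import statistics

def normalize_pixel_stack(stack):
    """Per-pixel background normalization in the spirit of the approach above:
    for each pixel position, estimate the background level (median) and spread
    (sample stdev) across a deck of frames, then map each sample to
    (value - median) / stdev, i.e. an approximate z-score under the
    Poisson ≈ Gaussian assumption (valid for means above ~10)."""
    n_pix = len(stack[0])
    meds = [statistics.median(f[i] for f in stack) for i in range(n_pix)]
    sds = [statistics.stdev(f[i] for f in stack) for i in range(n_pix)]
    return [[(f[i] - meds[i]) / sds[i] for i in range(n_pix)] for f in stack]

# Toy deck: 5 "frames" of 2 pixels, the second pixel brighter and noisier
deck = [[10, 100], [12, 104], [14, 108], [16, 112], [18, 116]]
normed = normalize_pixel_stack(deck)
```

After normalization every pixel is on the same scale, which is what allows a single constant detection threshold across the field of view.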

  15. Automated correction on X-rays calibration using transmission chamber and LabVIEW™

    International Nuclear Information System (INIS)

    Betti, Flavio; Potiens, Maria da Penha Albuquerque

    2009-01-01

    X-ray calibration procedures with prolonged exposure times at the IPEN Instrument Calibration facilities are subject to efficiency (and therefore intensity) variations of the industrial X-ray generator used. Using a transmission chamber as an online reference chamber during the whole irradiation process is proposed in order to compensate for such an error source. Temperature (and pressure) fluctuations may also arise from the performance-limited calibration room air conditioning system. As an open ionization chamber, the monitor chamber does require calculation of a correction factor for the effects of temperature and pressure on air density. Sending and processing data from all related instruments (electrometer, thermometer and barometer) can be more easily achieved by interfacing them to a host computer running a specially developed algorithm in the LabVIEW™ environment, which will not only apply the proper correction factors during runtime, but also determine the exact length of time to reach a desired condition, which can be: a time period, collected charge, or air kerma, based on the previous calibration of the whole system using a reference chamber traceable to primary standard dosimetry laboratories. When performing such a calibration, two temperature sensors (secondary standard thermistors) are used simultaneously, one for the transmission chamber and the other for the reference chamber. As the substitution method is used during actual customer calibrations, the readings from the second thermistor can also be used when desired for further corrections. Use of the LabVIEW™ programming language allowed for a shorter development time, and is also extremely convenient when improvements and modifications are called for. (author)
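The temperature-pressure correction required for the open transmission chamber is the standard air-density factor; a minimal sketch (the reference conditions of 20 °C and 101.325 kPa are common defaults assumed here, not values taken from the paper):

```python
def ktp(temp_c, pressure_kpa, ref_temp_c=20.0, ref_pressure_kpa=101.325):
    """Air-density correction factor for an open (vented) ionization chamber:
    k_TP = ((273.15 + T) / (273.15 + T0)) * (P0 / P).
    Reference conditions are lab-specific; the defaults here are assumptions."""
    return ((273.15 + temp_c) / (273.15 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

# A raw charge reading is multiplied by k_TP to refer it to reference air density
raw_charge_nC = 1.234                      # made-up electrometer reading
corrected = raw_charge_nC * ktp(23.5, 99.8)
```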

  16. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  17. Top-quark physics as a prime application of automated higher-order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, Christian

    2017-07-15

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte-Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the standard model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider makes it an excellent object of study. Yet, its lifetime is very short, and high-multiplicity final states of its decay products are detected in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e⁺e⁻ → μ⁺ν_μ e⁻ ν̄_e b b̄ and e⁺e⁻ → μ⁺ν_μ e⁻ ν̄_e b b̄ H. These calculations have been performed for the first time. Moreover, the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum for the process

  18. Top-quark physics as a prime application of automated higher-order corrections

    International Nuclear Information System (INIS)

    Weiss, Christian

    2017-07-01

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte-Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the standard model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider makes it an excellent object of study. Yet, its lifetime is very short, and high-multiplicity final states of its decay products are detected in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e⁺e⁻ → μ⁺ν_μ e⁻ ν̄_e b b̄ and e⁺e⁻ → μ⁺ν_μ e⁻ ν̄_e b b̄ H. These calculations have been performed for the first time. Moreover, the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum for the process e⁺e⁻ → bW⁺ b̄W⁻. All results are obtained with

  19. The identification of credit card encoders by hierarchical cluster analysis of the jitters of magnetic stripes.

    Science.gov (United States)

    Leung, S C; Fung, W K; Wong, K H

    1999-01-01

    The relative bit density variation graphs of 207 specimen credit cards processed by 12 encoding machines were examined first visually, and then classified by means of hierarchical cluster analysis. Twenty-nine credit cards being treated as 'questioned' samples were tested by way of cluster analysis against 'controls' derived from known encoders. It was found that hierarchical cluster analysis provided a high accuracy of identification with all 29 'questioned' samples classified correctly. On the other hand, although visual comparison of jitter graphs was less discriminating, it was nevertheless capable of giving a reasonably accurate result.

  20. Robust real-time change detection in high jitter.

    Energy Technology Data Exchange (ETDEWEB)

    Simonson, Katherine Mary; Ma, Tian J.

    2009-08-01

    A new method is introduced for real-time detection of transient change in scenes observed by staring sensors that are subject to platform jitter, pixel defects, variable focus, and other real-world challenges. The approach uses flexible statistical models for the scene background and its variability, which are continually updated to track gradual drift in the sensor's performance and the scene under observation. Two separate models represent temporal and spatial variations in pixel intensity. For the temporal model, each new frame is projected into a low-dimensional subspace designed to capture the behavior of the frame data over a recent observation window. Per-pixel temporal standard deviation estimates are based on projection residuals. The second approach employs a simple representation of jitter to generate pixelwise moment estimates from a single frame. These estimates rely on spatial characteristics of the scene, and are used to gauge each pixel's susceptibility to jitter. The temporal model handles pixels that are naturally variable due to sensor noise or moving scene elements, along with jitter displacements comparable to those observed in the recent past. The spatial model captures jitter-induced changes that may not have been seen previously. Change is declared in pixels whose current values are inconsistent with both models.
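The temporal model above projects frames into a low-dimensional subspace; as a greatly simplified stand-in, a per-pixel sliding-window detector illustrates the basic "inconsistent with recent history" test (window length and threshold are illustrative choices, not from the paper):

```python
from collections import deque
import statistics

class PixelChangeDetector:
    """Simplified stand-in for the temporal model: keep a sliding window of
    recent values for one pixel and flag a new value that lies far outside
    the window mean (the paper instead uses subspace-projection residuals)."""

    def __init__(self, window=20, k_sigma=4.0):
        self.k = k_sigma
        self.history = deque(maxlen=window)

    def update(self, value):
        changed = False
        if len(self.history) >= 5:  # need a few samples before testing
            mu = statistics.fmean(self.history)
            sigma = statistics.stdev(self.history) or 1e-9
            changed = abs(value - mu) > self.k * sigma
        self.history.append(value)
        return changed

det = PixelChangeDetector()
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 9.8, 10.2, 25.0]  # transient at end
flags = [det.update(v) for v in stream]
print(flags[-1])  # the 25.0 sample is inconsistent with the recent window
```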

  1. E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2011-01-01

    The proposed article analyses the dependence of MOS, as a voice call quality (QoS) measure, estimated through the ITU-T E-model under real network conditions with jitter. In this paper, a method of incorporating the jitter effect into the E-model is proposed. Jitter, as voice packet time uncertainty, appears as increased packet loss caused by jitter memory buffer under- or overflow. Jitter buffer behaviour at the receiver's side is modelled as a Pareto/D/1/K system with Pareto-distributed packet interarrival times, and its performance is experimentally evaluated using statistical tools. The jitter buffer stochastic model is then incorporated into the E-model in an additive manner, accounting for network jitter effects via excess packet loss complementing the measured network packet loss. The proposed modification of the E-model input parameter adds two degrees of freedom in modelling: network jitter and jitter buffer size.
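A toy version of the Pareto/D/1/K buffer idea can be simulated directly: Pareto-distributed interarrival times, a deterministic playout rate, and a finite buffer whose overflow appears as excess packet loss. All parameter values below are illustrative, not taken from the paper:

```python
import random

def jitter_buffer_loss(n_packets=20000, buffer_size=10,
                       pareto_shape=2.5, scale=0.6, seed=7):
    """Toy Pareto/D/1/K jitter-buffer simulation (a sketch, not the authors'
    analysis): packets arrive with Pareto-distributed interarrival times,
    are drained at one packet per time unit, and arrivals to a full buffer
    are discarded. Returns the packet-loss ratio due to buffer overflow."""
    rng = random.Random(seed)
    clock = 0.0         # arrival-time clock
    next_playout = 0.0  # next deterministic playout epoch
    occupancy = 0
    lost = 0
    for _ in range(n_packets):
        clock += scale * rng.paretovariate(pareto_shape)
        # drain packets played out since the previous arrival
        while next_playout <= clock and occupancy > 0:
            occupancy -= 1
            next_playout += 1.0
        if occupancy == 0:
            next_playout = clock  # buffer under-run: playout stalls
        if occupancy >= buffer_size:
            lost += 1             # overflow: packet discarded
        else:
            occupancy += 1
    return lost / n_packets

# Smaller buffers turn more of the same jitter into loss
print(jitter_buffer_loss(buffer_size=2), jitter_buffer_loss(buffer_size=50))
```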

  2. Engineering high reliability, low-jitter Marx generators

    International Nuclear Information System (INIS)

    Schneider, L.X.; Lockwood, G.J.

    1985-01-01

    Multimodule pulsed power accelerators typically require high module reliability and nanosecond-regime simultaneity between modules. Energy storage using bipolar Marx generators can meet these requirements. Experience gained from computer simulations and the development of the DEMON II Marx generator has led to a fundamental understanding of the operation of these multistage devices. As a result of this research, significant improvements in erection time jitter and reliability have been realized in multistage, bipolar Marx generators. Erection time jitter has been measured as low as 2.5 nanoseconds for the 3.2 MV, 16-stage PBFA I Marx and 3.5 nanoseconds for the 6.0 MV, 30-stage PBFA II (DEMON II) Marx, while maintaining exceptionally low prefire rates. Performance data are presented from the DEMON II Marx research program, as well as discussions on the use of computer simulations in designing low-jitter Marx generators.

  3. Identification of amplitude and timing jitter in external-cavity mode-locked semiconductor lasers

    DEFF Research Database (Denmark)

    Mulet, Josep; Mørk, Jesper; Kroh, Marcel

    2004-01-01

    We theoretically and experimentally investigate the dynamics of external-cavity mode-locked semiconductor lasers, focusing on stability properties and the optimization of pulsewidth and timing jitter. A new numerical approach allows timing and amplitude jitter to be clearly separated.

  4. Analysis of jitter due to call-level fluctuations

    NARCIS (Netherlands)

    M.R.H. Mandjes (Michel)

    2005-01-01

    In communication networks used by constant bit rate applications, call-level dynamics (i.e., entering and leaving calls) lead to fluctuations in the load, and therefore also fluctuations in the delay (jitter). By intentionally delaying the packets at the destination, one can transform

  5. High reliability low jitter 80 kV pulse generator

    International Nuclear Information System (INIS)

    Savage, Mark Edward; Stoltzfus, Brian Scott

    2009-01-01

    Switching can be considered to be the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10⁻⁴. The design of the system and some key measurements will be shown in the paper. We will discuss the design goals related to high reliability and low jitter. While

  6. Longitudinal Jitter Analysis of a Linear Accelerator Electron Gun

    Directory of Open Access Journals (Sweden)

    MingShan Liu

    2016-11-01

    We present measurements and analysis of the longitudinal timing jitter of a Beijing Electron Positron Collider (BEPCII) linear accelerator electron gun. We simulated the longitudinal jitter effect of the gun using PARMELA to evaluate beam performance, including: beam profile, average energy, energy spread, and X/Y emittances. The maximum percentage differences of the beam parameters are calculated to be 100%, 13.27%, 42.24% and 65.01%, 86.81%, respectively. Due to this, the bunching efficiency is reduced to 54%. However, the longitudinal phase difference of the reference particle was 9.89°. The simulation results are in agreement with tests and are helpful in optimizing the beam parameters by tuning the trigger timing of the gun during the bunching process.

  7. Injection Bucket Jitter Compensation Using Phase Lock System at Fermilab Booster

    Energy Technology Data Exchange (ETDEWEB)

    Seiya, K. [Fermilab; Drennan, C. [Fermilab; Pellico, W. [Fermilab; Chaurize, S. [Fermilab

    2017-05-12

    The extraction bucket position in the Fermilab Booster is controlled with a cogging process that involves the comparison of the Booster rf count and the Recycler Ring revolution marker. A one-rf-bucket jitter in the extraction bucket position results from the variability of the process that phase matches the Booster to the Recycler. However, the new slow phase lock process used to lock the frequency and phase of the Booster rf to the Recycler rf has been made digital and programmable and has been modified to correct the extraction notch position. The beam loss at Recycler injection has been reduced by 20%. Beam studies and the phase lock system will be discussed in this paper.

  8. Timing jitter measurements at the SLC electron source

    International Nuclear Information System (INIS)

    Sodja, J.; Browne, M.J.; Clendenin, J.E.

    1989-03-01

    The SLC thermionic gun and electron source produce a beam of up to 15 × 10¹⁰ e⁻ in a single S-band bunch. A 170 keV, 2 ns FWHM pulse out of the gun is compressed by means of two subharmonic buncher cavities followed by an S-band buncher and a standard SLAC accelerating section. Ceramic gaps in the beam pipe at the output of the gun allow a measure of the beam intensity and timing. A measurement at these gaps of the timing jitter, with a resolution of <10 ps, is described. 3 refs., 5 figs

  9. Improved beam jitter control methods for high energy laser systems

    OpenAIRE

    Frist, Duane C.

    2009-01-01

    Approved for public release, distribution unlimited The objective of this research was to develop beam jitter control methods for a High Energy Laser (HEL) testbed. The first step was to characterize the new HEL testbed at NPS. This included determination of natural frequencies and component models which were used to create a Matlab/Simulink model of the testbed. Adaptive filters using Filtered-X Least Mean Squares (FX-LMS) and Filtered-X Recursive Least Square (FX-RLS) were then implement...
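The Filtered-X LMS filter mentioned above extends plain LMS by passing the reference through a model of the secondary path; a plain-LMS sketch of the underlying adaptive update (illustrative only, not the testbed code — the delayed, scaled disturbance below is a made-up test signal):

```python
import math

def lms_cancel(reference, disturbance, n_taps=4, mu=0.05):
    """Plain LMS adaptive canceller (a sketch: the thesis uses the Filtered-X
    variants FX-LMS/FX-RLS). The weights w adapt so that w·x predicts the
    disturbance from recent reference samples; e = d - w·x is the residual."""
    w = [0.0] * n_taps
    errors = []
    for i in range(n_taps, len(reference)):
        x = reference[i - n_taps:i]                      # recent reference samples
        y = sum(wi * xi for wi, xi in zip(w, x))         # filter output
        e = disturbance[i] - y                           # residual jitter
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS weight update
        errors.append(e)
    return errors

# Disturbance is a delayed, scaled copy of the reference jitter signal,
# so a 4-tap filter can cancel it exactly once converged
ref = [math.sin(0.3 * k) for k in range(2000)]
dist = [0.8 * math.sin(0.3 * (k - 1)) for k in range(2000)]
errs = lms_cancel(ref, dist)
```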

  10. Essential Technology and Application of Jitter Detection and Compensation for High Resolution Satellites

    Directory of Open Access Journals (Sweden)

    TONG Xiaohua

    2017-10-01

    Satellite jitter is a common and complex phenomenon for on-orbit high resolution satellites, which may affect the mapping accuracy and quality of imagery. A framework of jitter detection and compensation integrating data processing of multiple sensors is proposed in this paper. Jitter detection is performed based on multispectral imagery, three-line-array imagery, dense ground control and attitude measurement data, and jitter compensation is conducted both on image and on attitude with the sensor model. The platform jitter of the ZY-3 satellite is processed and analyzed using the proposed technology, and the results demonstrate the feasibility and reliability of jitter detection and compensation. The variation law analysis indicates that the jitter frequencies of the ZY-3 satellite lie in the range of 0.6 to 0.7 Hz, while the jitter amplitudes drop from 1 pixel in the early stage to below 0.4 pixels and tend to remain stable in the following stage.
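A jitter frequency such as the 0.6-0.7 Hz reported for ZY-3 is read off a spectrum of some displacement measurement; a naive DFT peak-picking sketch on a synthetic trace (illustrative only, not the authors' detection method):

```python
import cmath
import math

def dominant_frequency(samples, sample_rate_hz):
    """Naive DFT scan: return the frequency (Hz) of the strongest spectral
    line, excluding DC. O(n^2), fine for short traces."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate_hz / n

# Synthetic displacement trace: 0.65 Hz jitter sampled at 10 Hz for 20 s
fs = 10.0
trace = [0.4 * math.sin(2 * math.pi * 0.65 * t / fs) for t in range(200)]
peak = dominant_frequency(trace, fs)  # ≈ 0.65 Hz
```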

  11. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377

  12. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and in less time, with a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors by comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.
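The flagging logic described is essentially a field-by-field comparison between downloaded metadata and the genome report; a minimal sketch in Python (AutoCurE itself is an Excel tool, and the field names and record values here are illustrative, not its actual schema):

```python
def flag_inconsistencies(downloaded, report, fields=("name", "refseq", "bioproject")):
    """Compare downloaded supporting data against the genome report and flag
    any field that is missing or disagrees (a sketch of the AutoCurE idea;
    the real tool checks nine metadata fields)."""
    flags = {}
    for field in fields:
        a, b = downloaded.get(field), report.get(field)
        if a is None or b is None:
            flags[field] = "missing"
        elif a != b:
            flags[field] = f"mismatch: {a!r} != {b!r}"
    return flags

# Illustrative records: one accession mismatch, one missing field
genome = {"name": "Escherichia coli K-12", "refseq": "NC_000913.3"}
report = {"name": "Escherichia coli K-12", "refseq": "NC_000913.2",
          "bioproject": "PRJNA57779"}
print(flag_inconsistencies(genome, report))
# {'refseq': "mismatch: 'NC_000913.3' != 'NC_000913.2'", 'bioproject': 'missing'}
```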

  13. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    NARCIS (Netherlands)

    Antony, Bhavna; Abramoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct

  14. Automated correction of spin-history related motion artefacts in fMRI : Simulated and phantom data

    NARCIS (Netherlands)

    Muresan, L; Renken, R.; Roerdink, J.B.T.M.; Duifhuis, H.

    This paper concerns the problem of correcting spin-history artefacts in fMRI data. We focus on the influence of through-plane motion on the history of magnetization. A change in object position will disrupt the tissue’s steady-state magnetization. The disruption will propagate to the next few

  15. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four...... methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143-155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060-1075; in FreeSurfer); and Brain Surface...... Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...

  16. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images.

    Science.gov (United States)

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C; Reynolds, Rebecca M; Semple, Scott I; Boardman, James P

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.

  17. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Ahmed Serag

    2017-01-01

    Full Text Available Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.

  18. Simulations of chopper jitter at the LET neutron spectrometer at the ISIS TS2

    DEFF Research Database (Denmark)

    Klenø, Kaspar Hewitt; Lefmann, Kim; Willendrup, Peter Kjær

    2014-01-01

    The effect of uncertainty in chopper phasing (jitter) has been investigated for the high-resolution time-of-flight spectrometer LET at the ISIS second target station. The investigation is carried out using virtual experiments, with the neutron simulation package McStas, where the chopper jitter i...

  19. Practical security analysis of continuous-variable quantum key distribution with jitter in clock synchronization

    Science.gov (United States)

    Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua

    2018-03-01

    How to narrow the gap of security between theory and practice has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on the practical continuous-variable quantum key distribution (CV-QKD) system. The clock jitter is a random noise which exists permanently in the clock synchronization in the practical CV-QKD system; it may compromise the system security because of its impact on data sampling and parameter estimation. In particular, the practical security of CV-QKD with different clock jitter against collective attack is analyzed theoretically based on different repetition frequencies; the numerical simulations indicate that the clock jitter has more impact on a high-speed scenario. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.

  20. Passive energy jitter reduction in the cascaded third harmonic generation process

    International Nuclear Information System (INIS)

    Yan, L; Du, Y; You, Y; Sun, X; Wang, D; Hua, J; Shi, J; Lu, W; Huang, W; Chen, H; Tang, C; Huang, Z

    2014-01-01

    In free electron laser (FEL) systems with ultraviolet (UV) laser driven injectors, a highly stable UV source generated through cascaded third harmonic generation (THG) from an infrared (IR) source is a key element in guaranteeing the acceptable current jitter at the undulator. In this letter, the negative slope of the THG efficiency for high intensity ultrashort IR pulses is revealed to be a passive stabilization mechanism for energy jitter reduction in UV. A reduction of 2.5 times the energy jitter in UV is demonstrated in the experiment and simulations show that the energy jitter in UV can be reduced by more than one order of magnitude if the energy jitter in IR is less than 3%, with proper design of the THG efficiency curve, fulfilling the challenging requirement for UV laser stability in a broad scope of applications such as the photoinjector of x-ray FELs. (letter)

  1. Parallel combinations of pre-ionized low jitter spark gaps

    International Nuclear Information System (INIS)

    Fitzsimmons, W.A.; Rosocha, L.A.

    1979-01-01

    The properties of 10 to 30 kV four-electrode field-emission pre-ionized triggered spark gaps have been studied. A mid-plane off-axis trigger electrode is biased at +V₀/2, and a field emission point is located adjacent to and biased at the grounded cathode potential. Simultaneous application of a rapid −V₀ trigger pulse to both electrodes results in the rapid sequential closing of the anode–trigger and trigger–cathode gaps. The observed jitter is about 1.5 ns. Parallel operation of these gaps (up to 10 so far) connected to a common capacitive load has been studied. A simple theory that predicts the number of gaps that may be expected to operate in parallel is discussed

  2. Jitter-Robust Orthogonal Hermite Pulses for Ultra-Wideband Impulse Radio Communications

    Directory of Open Access Journals (Sweden)

    Ryuji Kohno

    2005-03-01

    Full Text Available The design of a class of jitter-robust, Hermite polynomial-based, orthogonal pulses for ultra-wideband impulse radio (UWB-IR) communications systems is presented. A unified and exact closed-form expression of the auto- and cross-correlation functions of Hermite pulses is provided. Under the assumption that jitter values are sufficiently smaller than pulse widths, this formula is used to decompose jitter-shifted pulses over an orthonormal basis of the Hermite space. For any given jitter probability density function (pdf), the decomposition yields an equivalent distribution of N-by-N matrices which simplifies the convolutional jitter channel model onto a multiplicative matrix model. The design of jitter-robust orthogonal pulses is then transformed into a generalized eigendecomposition problem whose solution is obtained with a Jacobi-like simultaneous diagonalization algorithm applied over a subset of samples of the channel matrix distribution. Examples of the waveforms obtained with the proposed design and their improved auto- and cross-correlation functions are given. Simulation results are presented, which demonstrate the superior performance of a pulse-shape modulated (PSM) UWB-IR system using the proposed pulses, over the same system using conventional orthogonal Hermite pulses, in jitter channels with additive white Gaussian noise (AWGN).
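The design above starts from the orthonormality of Hermite pulses, whose zero-lag auto- and cross-correlations form an identity Gram matrix. A small numerical check of that property (a sketch of the starting point, not of the paper's jitter-robust design itself):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_pulse(n, t):
    """Orthonormal Hermite function: H_n(t) * exp(-t**2/2), normalized."""
    coeffs = [0.0] * n + [1.0]  # selects the physicists' Hermite polynomial H_n
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(t, coeffs) * np.exp(-t**2 / 2.0) / norm

t = np.linspace(-10.0, 10.0, 4001)
dt = t[1] - t[0]
pulses = np.array([hermite_pulse(n, t) for n in range(4)])
gram = dt * pulses @ pulses.T  # inner products <psi_m, psi_n>
print(np.round(gram, 6))      # approximately the 4x4 identity matrix
```

The integration grid is wide enough that the Gaussian tails are negligible, so the Riemann sum recovers orthonormality to high accuracy.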

  3. E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2012-01-01

    Full Text Available This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are computational simplicity and usability on real-time traffic. The cons, as shown in our previous work, are the inability of the E-model to reflect the effects of network jitter present on real traffic flows and of jitter-buffer behavior on end-user devices. These effects are visible mostly on traffic over WAN, internet and radio networks and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss) metric, based on a jitter and jitter-buffer model built on a Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model by using the PESQ intrusive measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter and different jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.
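The jitter-dependent terms proposed in the record modify the E-model's inputs; the final step of E-model estimation, converting the rating factor R into a MOS score, is the standard ITU-T G.107 mapping, sketched here:

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from E-model rating factor R to estimated MOS."""
    if r <= 0:
        return 1.0       # unusable quality floor
    if r >= 100:
        return 4.5       # narrowband MOS ceiling
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# The G.107 default narrowband connection has R of about 93.2,
# which maps to a MOS estimate of roughly 4.41.
print(round(r_to_mos(93.2), 2))
```

Jitter-induced degradation (e.g. an effective-packet-loss term) lowers R, and this monotone mapping then lowers the MOS estimate accordingly.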

  4. EVIDENCE AGAINST AN ECOLOGICAL EXPLANATION OF THE JITTER ADVANTAGE FOR VECTION

    Directory of Open Access Journals (Sweden)

    Stephen ePalmisano

    2014-11-01

    Full Text Available Visual-vestibular conflicts have been traditionally used to explain both perceptions of self-motion and experiences of motion sickness. However, sensory conflict theories have been challenged by findings that adding simulated viewpoint jitter to inducing displays enhances (rather than reduces or destroys) visual illusions of self-motion experienced by stationary observers. One possible explanation of this jitter advantage for vection is that jittering optic flows are more ecological than smooth displays. Despite the intuitive appeal of this idea, it has proven difficult to test. Here we compared subjective experiences generated by jittering and smooth radial flows when observers were exposed to either visual-only or multisensory self-motion stimulations. The display jitter (if present) was generated in real time by updating the virtual computer-graphics camera position to match the observer’s tracked head motions when treadmill walking or walking in place, or was a playback of these head motions when standing still. As expected, the (more naturalistic) treadmill walking and the (less naturalistic) walking in place were found to generate very different physical head jitters. However, contrary to the ecological account of the phenomenon, playbacks of treadmill-walking and walking-in-place display jitter both enhanced visually induced illusions of self-motion to a similar degree (compared to smooth displays).

  5. Individual TL detector characteristics in automated processing of personnel dosemeters: correction factors as extension to identity codes of dosemeter cards

    International Nuclear Information System (INIS)

    Toivonen, Matti.

    1979-07-01

    One, two and three-component dosemeter cards and their associated processing equipment were developed for personnel monitoring. A novel feature of the TLD system is that the individual sensitivity correction factors of TL detectors for β/γ radiation dosimetry and special timing factors for the readout of neutron detectors are stored on dosemeter cards as an extension of the identity codes. These data are utilized in the automatic TL reading process with the aim of cancelling out the influence of the individual detector characteristics on the measuring results. Stimulation of TL is done with hot nitrogen without removing the detectors from their cards and without any metal contact. Changes in detector characteristics are thus improbable. The reading process can be adjusted in a variety of ways. For example, each detector in the same card can be processed with optimal heating and the specific 250 deg C glow peak of neutron radiation can be roughly separated from the main LiF glow peaks. (author)
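Applying the stored per-detector sensitivity factors amounts to dividing each raw TL reading by its detector's factor, so that detector-to-detector sensitivity spread cancels out of the reported dose. An illustrative sketch (the numbers and the card-code layout are invented for illustration; the actual encoding is not specified above):

```python
def corrected_doses(readings, sensitivity_factors):
    """Divide each raw TL reading by its detector's individual sensitivity
    factor (read from the dosemeter card's extended identity code) so that
    detector characteristics cancel out of the measured result."""
    if len(readings) != len(sensitivity_factors):
        raise ValueError("one sensitivity factor per detector required")
    return [r / s for r, s in zip(readings, sensitivity_factors)]

# Three-component card exposed to the same true dose: one detector reads
# 5% hot, one nominal, one 4% cold; correction makes them agree.
print(corrected_doses([1.05, 1.00, 0.96], [1.05, 1.00, 0.96]))
```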

  6. Zero-crossing detector with sub-microsecond jitter and crosstalk

    Science.gov (United States)

    Dick, G. John; Kuhnle, Paul F.; Sydnor, Richard L.

    1990-01-01

    A zero-crossing detector (ZCD) was built and tested with a new circuit design which gives reduced time jitter compared to previous designs. With the new design, time jitter is reduced for the first time to a value which approaches that due to noise in the input amplifying stage. Additionally, with fiber-optic transmission of the output signal, crosstalk between units has been eliminated. The measured values are in good agreement with circuit noise calculations and approximately ten times lower than that for ZCD's presently installed in the JPL test facility. Crosstalk between adjacent units was reduced even more than the jitter.

  7. Latency and Jitter Analysis for IEEE 802.11e Wireless LANs

    Directory of Open Access Journals (Sweden)

    Sungkwan Youm

    2013-01-01

    Full Text Available This paper presents a numerical analysis of latency and jitter for IEEE 802.11e wireless local area networks (WLANs) in a saturation condition, by using a Markov model. We use this model to explicate how the enhanced distributed coordination function (EDCF) differentiates classes of service and to characterize the probability distribution of the medium access control (MAC) layer packet latency and jitter, on which the quality of voice over Internet protocol (VoIP) calls is dependent. From the proposed analytic model, we can estimate the number of nodes that can be supported while still satisfying user demands on latency and jitter.

  8. Optimum FIR filter for sampled signals in presence of jitter

    Science.gov (United States)

    Cattaneo, Paolo Walter

    1996-02-01

    The requirements of the integrated readout electronics for calorimetry at high-luminosity hadron colliders pose new challenges both to hardware design and to the performance of signal processing algorithms. Both aspects have been treated in detail by the FERMI (RD16) collaboration [C. Alippi et al., Nucl. Instr. and Meth. A 344 (1994) 180], from which this work has been motivated. The estimation of the amplitude of sampled signals is usually performed with a digital FIR filter, or with a more sophisticated nonlinear digital filter using FIR filters as building blocks [S.J. Inkinen and J. Niittylahti, Trainable FIR-order statistic hybrid filters, to be published in IEEE Trans. Circuits and Systems; H. Alexanian et al., FERMI Collaboration, Optimized digital feature extraction in the FERMI microsystem, Nucl. Instr. and Meth. A 357 (1995)]. In the presence of significant signal phase jitter with respect to the clock, the phase dependence of the filter output can be a major source of error. This is especially true for measurements of large amplitudes, for which the effect of electronic noise becomes negligible. This paper reports on the determination of digital FIR filters that optimize the signal-to-noise ratio for known jitter distributions and different filter lengths. As the presence of electronic noise is neglected, the results are mainly relevant for measurements of large signals. FERMI is a collaboration with the aim of designing integrated electronics for the readout of calorimeter detectors in particle physics experiments at hadron colliders. It includes: CERN, Geneva, Switzerland; Department of Physics and Measurement Technology, University of Linköping, Sweden; Center for Industrial Microelectronics and Materials Technology, University of Linköping, Sweden; LPNHE, Universities Paris VI-VII, Paris, France; Dipartimento di Elettronica, Politecnico di Milano, Italy; Sezione INFN, Pavia, Italy; Dipartimento di Fisica Nucleare e Teorica dell'Universitá e Sezione
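The paper's exact optimization is not reproduced above, but the idea of tailoring FIR weights to a known jitter distribution can be sketched numerically: simulate jitter-shifted samplings of a noiseless pulse, then pick weights that minimize the output spread relative to its mean via a ridge-regularized solve of the sample covariance against the mean vector. All pulse and jitter parameters below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
taps = np.arange(-3, 4)                      # 7-tap FIR, one sample per clock tick
sigma_pulse, sigma_jitter, trials = 1.5, 0.2, 4000

# Monte-Carlo ensemble of sampled pulse vectors under random phase jitter
# (unit-amplitude Gaussian pulse; electronic noise neglected as in the text).
phase = rng.normal(0.0, sigma_jitter, trials)
X = np.exp(-((taps[None, :] - phase[:, None]) ** 2) / (2.0 * sigma_pulse**2))

m = X.mean(axis=0)                           # mean sampled pulse
C = np.cov(X, rowvar=False)                  # jitter-induced covariance
ridge = 1e-6 * np.trace(C) / len(taps)       # regularize the near-singular solve
w_opt = np.linalg.solve(C + ridge * np.eye(len(taps)), m)
w_naive = m                                  # matched filter that ignores jitter

def spread(w):
    """Relative spread of the filter output over the jitter ensemble."""
    y = X @ w
    return y.std() / y.mean()

print(spread(w_opt), spread(w_naive))        # jitter-aware weights spread less
```

The jitter-aware weights suppress the phase dependence of the amplitude estimate relative to the naive matched filter, which is the effect the paper quantifies for realistic jitter distributions.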

  9. ASTROMETRIC JITTER OF THE SUN AS A STAR

    International Nuclear Information System (INIS)

    Makarov, V. V.; Parker, D.; Ulrich, R. K.

    2010-01-01

    The daily variation of the solar photocenter over some 11 yr is derived from the Mount Wilson data reprocessed by Ulrich et al. to closely match the surface distribution of solar irradiance. The standard deviations of astrometric jitter are 0.52 μAU and 0.39 μAU in the equatorial and the axial dimensions, respectively. The overall dispersion is strongly correlated with solar cycle, reaching 0.91 μAU at maximum activity in 2000. The largest short-term deviations from the running average (up to 2.6 μAU) occur when a group of large spots happen to lie on one side with respect to the center of the disk. The amplitude spectrum of the photocenter variations never exceeds 0.033 μAU for the range of periods 0.6-1.4 yr, corresponding to the orbital periods of planets in the habitable zone. Astrometric detection of Earth-like planets around stars as quiet as the Sun is not affected by star spot noise, but the prospects for more active stars may be limited to giant planets.

  10. Problems in Microgravity Fluid Mechanics: G-Jitter Convection

    Science.gov (United States)

    Homsy, G. M.

    2005-01-01

    This is the final report on our NASA grant, Problems in Microgravity Fluid Mechanics NAG3-2513: 12/14/2000 - 11/30/2003, extended through 11/30/2004. This grant was made to Stanford University and then transferred to the University of California at Santa Barbara when the PI relocated there in January 2001. Our main activity has been to conduct both experimental and theoretical studies of instabilities in fluids that are relevant to the microgravity environment, i.e. those that do not involve the action of buoyancy due to a steady gravitational field. Full details of the work accomplished under this grant are given below. Our work has focused on: (i) Theoretical and computational studies of the effect of g-jitter on instabilities of convective states where the convection is driven by forces other than buoyancy (ii) Experimental studies of instabilities during displacements of miscible fluid pairs in tubes, with a focus on the degree to which these mimic those found in immiscible fluids. (iii) Theoretical and experimental studies of the effect of time dependent electrohydrodynamic forces on chaotic advection in drops immersed in a second dielectric liquid. Our objectives are to acquire insight and understanding into microgravity fluid mechanics problems that bear on either fundamental issues or applications in fluid physics. We are interested in the response of fluids to either a fluctuating acceleration environment or to forces other than gravity that cause fluid mixing and convection. We have been active in several general areas.

  11. Femtosecond precision measurement of laser–rf phase jitter in a photocathode rf gun

    International Nuclear Information System (INIS)

    Shi, Libing; Zhao, Lingrong; Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhu, Pengfei; Xiang, Dao

    2017-01-01

    We report on the measurement of the laser–rf phase jitter in a photocathode rf gun with femtosecond precision. In this experiment four laser pulses with equal separation are used to produce electron bunch trains; then the laser–rf phase jitter is obtained by measuring the variations of the electron bunch spacing with an rf deflector. Furthermore, we show that when the gun and the deflector are powered by the same rf source, it is possible to obtain the laser–rf phase jitter in the gun through measurement of the beam–rf phase jitter in the deflector. Based on these measurements, we propose an effective time-stamping method that may be applied in MeV ultrafast electron diffraction facilities to enhance the temporal resolution.

  12. Femtosecond precision measurement of laser–rf phase jitter in a photocathode rf gun

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Libing; Zhao, Lingrong; Lu, Chao; Jiang, Tao; Liu, Shengguang; Wang, Rui; Zhu, Pengfei; Xiang, Dao, E-mail: dxiang@sjtu.edu.cn

    2017-03-21

    We report on the measurement of the laser–rf phase jitter in a photocathode rf gun with femtosecond precision. In this experiment four laser pulses with equal separation are used to produce electron bunch trains; then the laser–rf phase jitter is obtained by measuring the variations of the electron bunch spacing with an rf deflector. Furthermore, we show that when the gun and the deflector are powered by the same rf source, it is possible to obtain the laser–rf phase jitter in the gun through measurement of the beam–rf phase jitter in the deflector. Based on these measurements, we propose an effective time-stamping method that may be applied in MeV ultrafast electron diffraction facilities to enhance the temporal resolution.

  13. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  14. Fator de correção para indivíduos com capacidade acomodativa baseado no uso do refrator automático Correction factor for individuals with accommodative capacity based on automated refractor

    Directory of Open Access Journals (Sweden)

    Rodrigo Ueno Takahagi

    2009-12-01

    PURPOSE: To determine a correction factor for refractive errors evaluated without cycloplegia. METHODS: A study was made of 623 patients (1,246 eyes) of both sexes, aged between 3 and 40 years. Static and dynamic refractometry were obtained using the automated refractor Shin-Nippon Accuref-K 9001. Cycloplegia was induced by instilling one drop of 1% cyclopentolate eye drops, with static refractometry performed 30 minutes later. Data were analyzed using linear regression and multiple regression models of the dioptric value with and without cycloplegia, as a function of age. RESULTS: The correlation between dioptric values without and with cycloplegia for the astigmatic error ranged from 81.52% to 92.27%. For the spherical dioptric value the correlation was lower (53.57% to 87.78%), as it was for the astigmatism axis (28.86% to 58.80%). The multiple regression model as a function of age showed a higher multiple coefficient of determination for myopia (86.38%) and astigmatism (79.79%); the lowest coefficient was observed for the astigmatism axis (17.70%). CONCLUSION: Evaluating refractive errors with and without cycloplegia, a high correlation was observed for cylindrical ametropias. Mathematical equations were developed as correction factors for refractometry in patients without cycloplegia presenting cylindrical and spherical ametropias.

  15. Radial Velocities of Subgiant Stars and New Astrophysical Insights into RV Jitter

    Science.gov (United States)

    Luhn, Jacob; Bastien, Fabienne; Wright, Jason T.

    2018-01-01

    For nearly 20 years, the California Planet Search (CPS) has simultaneously monitored precise radial velocities and chromospheric activity levels of stars from Keck observatory to search for exoplanets. This sample provides a useful set of stars to better determine the dependence of RV jitter on flicker (which traces surface gravity) first shown in Bastien et al. (2014). We expand upon this initial work by examining a much larger sample of stars covering a much wider range of stellar parameters (effective temperature, surface gravity, and activity, among others). For more than 600 stars, there are enough RV measurements to distinguish this astrophysical jitter from accelerations due to orbital companions. To properly isolate RV jitter from these effects, we must first remove the RV signal due to these companions, including several previously unannounced giant planets around subgiant stars. We highlight some new results from our analysis of the CPS data. A more thorough understanding of the various sources of RV jitter and the underlying stellar phenomena that drive these intrinsic RV variations will enable more precise jitter estimates for RV follow-up targets such as those from K2 or the upcoming TESS mission.

  16. An Experimental Study of a Low-Jitter Pulsed Electromagnetic Plasma Accelerator

    Science.gov (United States)

    Thio, Y. C. Francis; Lee, Michael; Eskridge, Richard; Smith, James; Martin, Adam; Rodgers, Stephen L. (Technical Monitor)

    2001-01-01

    An experimental plasma accelerator for a variety of applications under development at the NASA Marshall Space Flight Center is described. The accelerator is a pulsed plasma thruster and has been tested experimentally and plasma jet velocities of approximately 50 kilometers per second have been obtained. The plasma jet structure has been photographed with 10 ns exposure times to reveal a stable and repeatable plasma structure. Data for velocity profile information has been obtained using light pipes embedded in the gun walls to record the plasma transit at various barrel locations. Preliminary spatially resolved spectral data and magnetic field probe data are also presented. A high speed triggering system has been developed and tested as a means of reducing the gun "jitter". This jitter has been characterized and future work for second generation "ultra-low jitter" gun development is identified.

  17. Real-time operating system timing jitter and its impact on motor control

    Science.gov (United States)

    Proctor, Frederick M.; Shackleford, William P.

    2001-12-01

    General-purpose microprocessors are increasingly being used for control applications due to their widespread availability and software support for non-control functions like networking and operator interfaces. Two classes of real-time operating systems (RTOS) exist for these systems. The traditional RTOS serves as the sole operating system, and provides all OS services. Examples include ETS, LynxOS, QNX, Windows CE and VxWorks. RTOS extensions add real-time scheduling capabilities to non-real-time OSes, and provide minimal services needed for the time-critical portions of an application. Examples include RTAI and RTL for Linux, and HyperKernel, OnTime and RTX for Windows NT. Timing jitter is an issue in these systems, due to hardware effects such as bus locking, caches and pipelines, and software effects from mutual exclusion resource locks, non-preemptible critical sections, disabled interrupts, and multiple code paths in the scheduler. Jitter is typically on the order of a microsecond to a few tens of microseconds for hard real-time operating systems, and ranges from milliseconds to seconds in the worst case for soft real-time operating systems. The question of its significance on the performance of a controller arises. Naturally, the smaller the scheduling period required for a control task, the more significant is the impact of timing jitter. Beyond this intuitive relationship, timing matters more for open-loop control, such as for stepper motors, than for closed-loop control, such as for servo motors. Techniques for measuring timing jitter are discussed, and comparisons between various platforms are presented. Techniques to reduce jitter or mitigate its effects are presented. The impact of jitter on stepper motor control is analyzed.
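The measurement techniques the paper discusses boil down to timestamping periodic wakeups and comparing the observed periods to the nominal one. A user-space sketch of that idea (this measures soft-real-time jitter on a general-purpose OS, typically milliseconds, not the microsecond-level hard-RTOS figures quoted above):

```python
import time
import statistics

def measure_jitter(period_s=0.005, cycles=50):
    """Schedule periodic wakeups against absolute deadlines and report the
    worst-case and mean deviation of observed periods from the nominal one."""
    deadline = time.perf_counter()
    wakeups = []
    for _ in range(cycles):
        deadline += period_s                       # absolute deadline: no drift
        time.sleep(max(0.0, deadline - time.perf_counter()))
        wakeups.append(time.perf_counter())
    periods = [b - a for a, b in zip(wakeups, wakeups[1:])]
    errors = [abs(p - period_s) for p in periods]
    return max(errors), statistics.mean(errors)

worst, mean = measure_jitter()
print(f"worst-case jitter {worst * 1e6:.0f} us, mean {mean * 1e6:.0f} us")
```

Using absolute deadlines (rather than sleeping a fixed duration each cycle) prevents per-cycle latency from accumulating, which matters for the open-loop stepper-motor case discussed above.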

  18. Novel design of low-jitter 10 GHz all-active monolithic mode-locked lasers

    DEFF Research Database (Denmark)

    Larsson, David; Yvind, Kresten; Christiansen, Lotte Jin

    2004-01-01

    Using a novel design, we have fabricated 10 GHz all-active monolithic mode-locked semiconductor lasers that generate 1.4 ps pulses with record-low timing jitter. The dynamical properties of lasers with 1 and 2 QWs are compared.

  19. Design and implementation of high-precision and low-jitter programmable delay circuitry

    International Nuclear Information System (INIS)

    Gao Yuan; Cui Ke; Zhang Hongfei; Luo Chunli; Yang Dongxu; Liang Hao; Wang Jian

    2011-01-01

    A programmable delay circuit design with high precision, low jitter, wide programmable range and low power is introduced. The delay circuitry consists of two parts, a coarse delay and a fine delay, which are controlled separately. Different coarse delay chips provide different maximum programmable ranges, and the fine delay chip has a minimum step as small as 10 ps. The jitter of the whole circuit is less than 100 ps. The design has been successfully applied in a Quantum Key Distribution experiment. (authors)
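    The coarse/fine split described above can be sketched as a simple decomposition: the target delay is divided into coarse steps, and the residue is covered by the 10 ps fine steps. The coarse step size below is an illustrative assumption, not a value from the paper.

    ```python
    FINE_STEP_PS = 10          # minimum fine step quoted in the abstract
    COARSE_STEP_PS = 10_000    # assumed coarse step (10 ns), illustrative

    def program_delay(target_ps):
        """Split a target delay into separately controlled coarse and
        fine settings, as in the two-part scheme described above."""
        coarse = target_ps // COARSE_STEP_PS
        fine = round((target_ps - coarse * COARSE_STEP_PS) / FINE_STEP_PS)
        achieved = coarse * COARSE_STEP_PS + fine * FINE_STEP_PS
        return int(coarse), int(fine), int(achieved)

    coarse, fine, achieved = program_delay(25_347)
    # residual error is bounded by half a fine step
    assert abs(achieved - 25_347) <= FINE_STEP_PS // 2
    ```

    Swapping the coarse chip changes only `COARSE_STEP_PS` and hence the maximum range, while the 10 ps resolution is preserved.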

  20. Removal of jitter noise in 3D shape recovery from image focus by using Kalman filter.

    Science.gov (United States)

    Jang, Hoon-Seok; Muhammad, Mannan Saeed; Choi, Tae-Sun

    2018-02-01

    In Shape from Focus, one critical factor limiting system application is mechanical vibration of the translational stage, which causes jitter noise along the optical axis. This noise is not detectable by simply observing the image; however, when focus measures are applied, inaccuracies in the recovered depth occur. In this article, the jitter noise and the focus curves are modeled by a Gaussian distribution and a quadratic function, respectively. A Kalman filter is then designed and applied to remove this noise from the focus curves, as a post-processing step after the focus measure application. Experiments with simulated and real objects show the usefulness of the proposed algorithm. © 2017 Wiley Periodicals, Inc.
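    The post-processing step described above can be sketched with a scalar Kalman filter applied to a noisy focus curve. This is a minimal illustration of the idea (Gaussian noise on a quadratic focus curve), not the authors' exact filter; the noise variances `q` and `r` are illustrative assumptions.

    ```python
    import random

    def kalman_smooth(z, q=0.01, r=0.04):
        """Scalar Kalman filter: the focus value is treated as a
        random-walk state (process variance q) observed with Gaussian
        jitter-induced noise (variance r)."""
        x, p = z[0], 1.0
        out = []
        for meas in z:
            p += q                    # predict: random-walk state
            k = p / (p + r)           # Kalman gain
            x += k * (meas - x)       # correct with the new measurement
            p *= 1.0 - k
            out.append(x)
        return out

    # quadratic focus curve with Gaussian noise, as modeled in the abstract
    random.seed(0)
    ideal = [1.0 - 0.001 * (i - 30) ** 2 for i in range(61)]
    noisy = [v + random.gauss(0.0, 0.2) for v in ideal]
    smooth = kalman_smooth(noisy)
    ```

    The smoothed curve preserves the quadratic shape while suppressing the frame-to-frame fluctuations, so the peak location (and hence the depth estimate) is more stable.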

  1. Low jitter and high power all-active mode-locked lasers

    DEFF Research Database (Denmark)

    Yvind, Kresten; Larsson, David; Christiansen, Lotte Jin

    2003-01-01

    A novel epitaxial design leading to low loss and low gain saturation improves the properties of 40 GHz mode-locked lasers. We obtain 2.8 ps nearly chirp free pulses with 228 fs jitter and fiber-coupled power of 7 mW.

  2. Communication Networks - Analysis of jitter due to call-level fluctuations

    NARCIS (Netherlands)

    Mandjes, M.R.H.

    2007-01-01

    In communication networks used by constant bit rate applications, call-level dynamics (i.e., calls entering and leaving) lead to fluctuations in the load, and therefore also fluctuations in the delay (jitter). By intentionally delaying the packets at the destination, one can transform the

  3. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Full Text Available Reducing packet jitter is important for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in their packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity, and the loss of these potential coding opportunities can limit the contribution of network coding to jitter performance. In addition, most existing coding-aware routing algorithms assume that all flows participating in the network have equal rates. This is unrealistic, since multi-rate environments often appear. To overcome these problems and extend coding-aware routing to multi-rate scenarios, we present, from the viewpoint of data transmission, a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which schedules packets at the coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework covering both the single-rate and multi-rate cases. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.
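    The queue-length-based threshold policy can be sketched as follows: while both queues at the coding node are short, the node waits for a coding partner (bounding the extra delay by the threshold); once a queue grows past the threshold, it transmits natively rather than waiting. The function and threshold value below are our own illustration, not BLJCAR's actual rules.

    ```python
    from collections import deque

    THRESHOLD = 4  # illustrative queue-length threshold, not from the paper

    def next_action(q1, q2, threshold=THRESHOLD):
        """Decide what the coding node transmits next. Unlike plain ONC
        (send whatever is available, never wait), the node waits for a
        coding partner while both queues are short."""
        if q1 and q2:
            return ("coded", q1.popleft(), q2.popleft())   # XOR the pair
        longer = q1 if len(q1) >= len(q2) else q2
        if len(longer) > threshold:
            return ("native", longer.popleft())            # too long: don't wait
        return ("wait",)

    q_a = deque(["a1", "a2"])
    q_b = deque()
    assert next_action(q_a, q_b)[0] == "wait"   # short queue: hold for coding
    q_b.append("b1")
    assert next_action(q_a, q_b)[0] == "coded"  # partner arrived: code and send
    ```

    The threshold is the knob that trades a small bounded queuing delay against the extra coding opportunities that pure ONC would forfeit.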

  4. Noise Originating from Intra-pixel Structure and Satellite Attitude Jitter on COROT

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Arentoft, Torben; Kjeldsen, Hans

    2006-01-01

    We present a study on noise in space-based photometry originating from sensitivity variations within individual pixels, known as intra-pixel variations, and from satellite attitude jitter. We have measured the intra-pixel structure on an e2v 47-20 CCD and made simulations of the effects these structures...

  5. Low-Jitter Clock Multiplication: a Comparison between PLLs and DLLs

    NARCIS (Netherlands)

    van de Beek, R.C.H.; Klumperink, Eric A.M.; Vaucher, Cicero S.; Nauta, Bram

    This paper shows that, for a given power budget, a practical phase-locked loop (PLL)-based clock multiplier generates less jitter than a delay-locked loop (DLL) equivalent. This is due to the fact that the delay cells in a PLL ring-oscillator can consume more power per cell than their counterparts

  6. Photodetection-induced relative timing jitter in synchronized time-lens source for coherent Raman scattering microscopy

    Directory of Open Access Journals (Sweden)

    Jiaqi Wang

    2017-09-01

    Full Text Available The synchronized time-lens source is a novel method to generate optical pulses synchronized to a mode-locked laser, and has found widespread applications in coherent Raman scattering microscopy. Relative timing jitter between the mode-locked laser and the synchronized time-lens source is a key parameter for evaluating the synchronization performance of such laser systems. However, the origins of the relative timing jitter in such systems have not been fully determined, which in turn hampers experimental efforts to optimize the synchronization performance. Here we demonstrate, through theoretical modeling and numerical simulation, that photodetection can be one physical origin of the relative timing jitter. A comparison with the relative timing jitter due to the intrinsic timing jitter of the mode-locked laser is also presented, revealing different qualitative and quantitative behaviors. Based on the nature of this photodetection-induced timing jitter, we further propose several strategies to reduce the relative timing jitter. Our theoretical results will provide guidelines for optimizing synchronization performance in experiments.

  7. “Booster” training: Evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest

    Science.gov (United States)

    Sutton, Robert M.; Niles, Dana; Meaney, Peter A.; Aplenc, Richard; French, Benjamin; Abella, Benjamin S.; Lengetti, Evelyn L.; Berg, Robert A.; Helfaer, Mark A.; Nadkarni, Vinay

    2013-01-01

    Objective To investigate the effectiveness of brief bedside “booster” cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Design Prospective, randomized trial. Setting General pediatric wards at Children’s Hospital of Philadelphia. Subjects Sixty-nine Basic Life Support–certified hospital-based providers. Intervention CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Measurements and Main Results Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min−1 and 38 mm); and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved (instructor-only training: rate 52% to 87% [p .01], and overall CPR compliance, 43% to 78% [p CPR compliance, 35% to 96% [p training combined with automated feedback: rate 48% to 100% [p CPR compliance, 30% to 100% [p CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests. PMID:20625336

  8. "Booster" training: evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest.

    Science.gov (United States)

    Sutton, Robert M; Niles, Dana; Meaney, Peter A; Aplenc, Richard; French, Benjamin; Abella, Benjamin S; Lengetti, Evelyn L; Berg, Robert A; Helfaer, Mark A; Nadkarni, Vinay

    2011-05-01

    To investigate the effectiveness of brief bedside "booster" cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Prospective, randomized trial. General pediatric wards at Children's Hospital of Philadelphia. Sixty-nine Basic Life Support-certified hospital-based providers. CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min(-1) and 38 mm); and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved (instructor-only training: rate 52% to 87% [p .01], and overall CPR compliance, 43% to 78% [p CPR compliance, 35% to 96% [p training combined with automated feedback: rate 48% to 100% [p CPR compliance, 30% to 100% [p CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests.

  9. Microphone triggering circuit for elimination of mechanically induced frequency-jitter in diode laser spectrometers: implications for quantitative analysis.

    Science.gov (United States)

    Sams, R L; Fried, A

    1987-09-01

    An electronic timing circuit using a microphone triggering device has been developed for elimination of mechanically induced frequency-jitter in diode laser spectrometers employing closed-cycle refrigerators. Mechanical compressor piston shocks are detected by the microphone and actuate an electronic circuit which interrupts data acquisition until the mechanical vibrations are completely quenched. In this way, laser sweeps contaminated by compressor frequency-jitter are not co-averaged. Employing this circuit, the measured linewidths were in better agreement with those calculated. The importance of eliminating this mechanically induced frequency-jitter when carrying out quantitative diode laser measurements is discussed further.

  10. Design of optical axis jitter control system for multi beam lasers based on FPGA

    Science.gov (United States)

    Ou, Long; Li, Guohui; Xie, Chuanlin; Zhou, Zhiqiang

    2018-02-01

    A design of a closed-loop optical-axis control system for multi-beam laser coherent combining based on an FPGA is introduced. The system uses piezoelectric-ceramic fast steering mirrors (FSMs) as actuators and a high-speed CMOS camera for far-field spot detection of the multiple beams, with an FPGA-based control system for real-time optical-axis jitter suppression. The algorithms for optical-axis centroid detection and anti-integral-saturation PID control were realized in the FPGA. The logic circuit structure was optimized through resource reuse and pipelining, which reduced both logic resources and delay time, and the closed-loop bandwidth reaches 100 Hz. Laser jitter below 40 Hz is attenuated by 40 dB. The cost of the system is low, and it works stably.
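    The two algorithms named above, centroid detection and PID with anti-integral saturation, can be sketched in software as follows. This is an illustrative model of the computation (the real design is FPGA logic); the gains, integral limit, and image are made up for the example.

    ```python
    def centroid(img):
        """Intensity-weighted spot centroid, the quantity the FPGA
        extracts from each camera frame."""
        total = sx = sy = 0.0
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                total += v
                sx += v * x
                sy += v * y
        return sx / total, sy / total

    class AntiWindupPID:
        """PID with integral clamping, a common anti-integral-saturation
        scheme; gains and limits here are illustrative."""
        def __init__(self, kp, ki, kd, i_limit):
            self.kp, self.ki, self.kd, self.i_limit = kp, ki, kd, i_limit
            self.i = 0.0
            self.prev = 0.0

        def step(self, err, dt):
            self.i += err * dt
            self.i = max(-self.i_limit, min(self.i_limit, self.i))  # clamp
            d = (err - self.prev) / dt
            self.prev = err
            return self.kp * err + self.ki * self.i + self.kd * d

    img = [[0, 0, 0], [0, 9, 3], [0, 0, 0]]
    cx, cy = centroid(img)  # pulled toward the bright pixel at (1, 1)
    ```

    The clamp keeps the integrator from accumulating without bound when the FSM saturates, which is what "anti-integral saturation" buys during large transient disturbances.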

  11. Variable Delay Element For Jitter Control In High Speed Data Links

    Science.gov (United States)

    Livolsi, Robert R.

    2002-06-11

    A circuit and method for decreasing the amount of jitter present at the receiver input of high-speed data links. The driver circuit for a high-speed data link comprises a logic circuit having a first section (1) which provides data latches; a second section (2) which generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element each coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing of the driver circuit.

  12. Broadband noise limit in the photodetection of ultralow jitter optical pulses.

    Science.gov (United States)

    Sun, Wenlu; Quinlan, Franklyn; Fortier, Tara M; Deschenes, Jean-Daniel; Fu, Yang; Diddams, Scott A; Campbell, Joe C

    2014-11-14

    Applications with optical atomic clocks and precision timing often require the transfer of optical frequency references to the electrical domain with extremely high fidelity. Here we examine the impact of photocarrier scattering and distributed absorption on the photocurrent noise of high-speed photodiodes when detecting ultralow jitter optical pulses. Despite its small contribution to the total photocurrent, this excess noise can determine the phase noise and timing jitter of microwave signals generated by detecting ultrashort optical pulses. A Monte Carlo simulation of the photodetection process is used to quantitatively estimate the excess noise. Simulated phase noise on the 10 GHz harmonic of a photodetected pulse train shows good agreement with previous experimental data, leading to the conclusion that the lowest phase noise photonically generated microwave signals are limited by photocarrier scattering well above the quantum limit of the optical pulse train.

  13. A low jitter supply regulated charge pump PLL with self-calibration

    International Nuclear Information System (INIS)

    Chen Min; Li Zhichao; Xiao Jingbo; Chen Jie; Liu Yuntao

    2016-01-01

    This paper describes a ring-oscillator-based low jitter charge pump PLL with supply regulation and digital calibration. To combat power supply noise, a low-dropout voltage regulator is implemented. The VCO gain is tunable using a 4-bit self-calibration technique, so that the optimal VCO gain is automatically selected and process/temperature variation is compensated. Fabricated in a 0.13 μm CMOS process, the PLL achieves a frequency range of 100–400 MHz and occupies a 190 × 200 μm² area. The measured RMS jitter is 5.36 ps at a 400 MHz operating frequency. (paper)

  14. Acquisition and Initial Analysis of H+- and H--Beam Centroid Jitter at LANSCE

    Science.gov (United States)

    Gilpatrick, J. D.; Bitteker, L.; Gulley, M. S.; Kerstiens, D.; Oothoudt, M.; Pillai, C.; Power, J.; Shelley, F.

    2006-11-01

    During the 2005 Los Alamos Neutron Science Center (LANSCE) beam runs, beam current and centroid-jitter data were observed, acquired, analyzed, and documented for both the LANSCE H+ and H- beams. These data were acquired using three beam position monitors (BPMs) from the 100-MeV Isotope Production Facility (IPF) beam line and three BPMs from the Switchyard transport line at the end of the LANSCE 800-MeV linac. The two types of data acquired, intermacropulse and intramacropulse, were analyzed for statistical and frequency characteristics as well as various other correlations including comparing their phase-space like characteristics in a coordinate system of transverse angle versus transverse position. This paper will briefly describe the measurements required to acquire these data, the initial analysis of these jitter data, and some interesting dilemmas these data presented.

  15. The development of high-voltage repetitive low-jitter corona stabilized triggered switch

    Science.gov (United States)

    Geng, Jiuyuan; Yang, Jianhua; Cheng, Xinbing; Yang, Xiao; Chen, Rong

    2018-04-01

    The high-power switch plays an important part in a pulse power system. As pulse power technology trends toward modularization, miniaturization and accurate control, higher requirements are placed on the electrical triggering and jitter of the switch. A high-power, low-jitter corona-stabilized triggered switch (CSTS) is designed in this paper. This CSTS is based on the corona stabilization mechanism and can be used as the main switch of an intense electron-beam accelerator (IEBA). Its main features are the use of an annular trigger electrode instead of a traditional needle-like trigger electrode, main and side trigger rings to fix the discharge channels, and an SF6/N2 gas mixture as the operating gas. In this paper, the strength of the local field enhancement was varied through the trigger electrode protrusion length Dp, and the resulting differences in self-breakdown voltage and its stability, delay-time jitter, trigger requirements, and operating range of the switch were compared. The effect of different SF6/N2 mixture ratios on switch performance was then explored. The experimental results show that with 15% SF6 at a pressure of 0.2 MPa, the hold-off voltage of the switch is 551 kV, the operating range is 46.4%–93.5% of the self-breakdown voltage, the jitter is 0.57 ns, and the minimum trigger voltage requirement is 55.8% of the peak. The CSTS has been successfully applied to an IEBA for long-term operation.

  16. High time resolution beam-based measurement of the rf-to-laser jitter in a photocathode rf gun

    Directory of Open Access Journals (Sweden)

    Zhen Zhang

    2014-03-01

    Full Text Available Characterizing the rf-to-laser jitter in the photocathode rf gun and its possible origins is important for improving the synchronization and beam quality of the linac based on the photocathode rf gun. A new method based on the rf compression effect in the photocathode rf gun is proposed to measure the rf-to-laser jitter in the gun. By taking advantage of the correlation between the rf compression and the laser injection phase, the error caused by the jitter of the accelerating field in the gun is minimized and thus 10 fs time resolution is expected. Experimental demonstration at the Tsinghua Thomson scattering x-ray source with a time resolution better than 35 fs is reported in this paper. The experimental results are successfully used to obtain information on the possible cause of the jitter and the accompanying drifts.

  17. Influence of incident light wavelength on time jitter of fast photomultipliers

    International Nuclear Information System (INIS)

    Moszynski, M.; Vacher, J.

    1977-01-01

    The single-photoelectron time resolution was studied as a function of the wavelength of the incident light for a 56 CVP photomultiplier having an S-1 photocathode. The light flash from the XP22 light-emitting-diode generator was passed through passband filters and illuminated the 5 mm diameter central part of the photocathode. A significant increase in the time spread, above 30%, was observed when the wavelength of the incident light was changed from 790 nm to 580 nm. This gives experimental evidence that the time jitter resulting from the spread of the initial velocities of the photoelectrons is proportional to the square root of the maximal initial energy of the photoelectrons. Based on this conclusion, the time jitter of C31024, RCA8850 and XP2020 photomultipliers measured with the XP22 light-emitting diode at a 560 nm light wavelength was recalculated to estimate the time jitter at 400 nm, near the maximum of the photocathode sensitivity. The estimate shows an almost twice larger time spread at 400 nm for the C31024 and RCA8850, which have a high-gain first dynode, and an about 1.5 times larger time spread for the XP2020 photomultiplier, compared with the values measured at 560 nm. (Auth.)
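    The square-root dependence reported above is consistent with simple kinematics. For a photoelectron emitted with initial kinetic energy up to ε_max against a uniform accelerating field E at the photocathode, the textbook estimate of the transit-time spread (our illustration, not a formula from the paper) is

    ```latex
    \Delta t \;=\; \frac{\sqrt{2\, m_e\, \varepsilon_{\max}}}{e\,E},
    ```

    so the jitter scales as the square root of ε_max, and ε_max grows as the photon energy hν exceeds the photocathode work function, consistent with the larger time spread observed at shorter wavelengths.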

  18. The effect of jitter on the performance of space coherent optical communication system with Costas loop

    Science.gov (United States)

    Li, Xin; Hong, Yifeng; Wang, Jinfang; Liu, Yang; Sun, Xun; Li, Mi

    2018-01-01

    The numerous communication techniques and optical devices successfully applied in space optical communication systems indicate their good portability. With this portability, the typical coherent demodulation technique of the Costas loop can easily be adopted in a space optical communication system. As one of the components of pointing error, jitter plays an important role in the communication quality of such a system. Here, we obtain the probability density functions (PDFs) for different degrees of jitter and explain their effect on the bit error rate (BER) of the space optical communication system. Under the effect of jitter, we also study the BER of a space coherent optical communication system using a Costas loop for different system parameters: transmission power, divergence angle, receiving diameter, avalanche photodiode (APD) gain, and the phase deviation caused by the Costas loop. Through numerical simulation of this kind of communication system, we demonstrate the relationship between the BER and these system parameters, and corresponding methods of system optimization are presented to enhance the communication quality.
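    The jitter-PDF-to-BER link can be sketched numerically: average a conditional BER over the jitter distribution. The sketch below assumes a Rayleigh-distributed radial pointing error, a Gaussian-beam pointing loss, and an ideal BPSK BER of 0.5·erfc(√SNR); all three model choices and every parameter value are our own illustrative assumptions, not the paper's model.

    ```python
    import math

    def avg_ber(snr0, sigma, theta_b, n=2000):
        """Average BPSK bit error rate over radial pointing jitter:
        Rayleigh radial error (per-axis std sigma), Gaussian-beam
        pointing loss exp(-2 theta^2 / theta_b^2)."""
        upper = 6 * sigma            # integrate the Rayleigh tail to 6 sigma
        dtheta = upper / n
        total = 0.0
        for i in range(n):
            t = (i + 0.5) * dtheta
            pdf = (t / sigma**2) * math.exp(-t**2 / (2 * sigma**2))
            snr = snr0 * math.exp(-2 * t**2 / theta_b**2)
            total += 0.5 * math.erfc(math.sqrt(snr)) * pdf * dtheta
        return total

    # stronger jitter (larger sigma relative to the divergence angle)
    # degrades the average BER
    low_jitter = avg_ber(snr0=16.0, sigma=0.1e-3, theta_b=1e-3)
    high_jitter = avg_ber(snr0=16.0, sigma=0.4e-3, theta_b=1e-3)
    ```

    The same averaging structure applies when sweeping the other system parameters (power, divergence angle, receiver diameter, APD gain): each one enters through the conditional SNR inside the integral.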

  19. Sub-nanosecond jitter, repetitive impulse generators for high reliability applications

    International Nuclear Information System (INIS)

    Krausse, G.J.; Sarjeant, W.J.

    1981-01-01

    Low-jitter, high-reliability impulse generator development has recently become increasingly important for nuclear physics and weapons applications. We describe the research and development of very low jitter (< 30 ps), multikilovolt generators for high-reliability, minimum-maintenance trigger applications, utilizing a new class of commercially available high-pressure tetrode thyratrons. The overall system design philosophy is described, followed by a detailed analysis of the subsystem components. A multi-variable experimental analysis of the new tetrode thyratron was undertaken in a low-inductance configuration as a function of externally available parameters. For specific thyratron trigger conditions, rise times of 18 ns into 6.0-Ω loads were achieved with jitter as low as 24 ps. Using this database, an integrated trigger generator system with a solid-state front end is described in some detail. The generator was developed to serve as the master trigger generator for a large neutrino detector installation at the Los Alamos Meson Physics Facility

  20. Short locking time and low jitter phase-locked loop based on slope charge pump control

    International Nuclear Information System (INIS)

    Guo Zhongjie; Liu Youbao; Wu Longsheng; Wang Xihu; Tang Wei

    2010-01-01

    A novel phase-locked loop (PLL) structure characterized by short locking time and low jitter is presented. It is realized by generating a linear slope charge-pump current, dependent on the output of the phase frequency detector (PFD), to implement adaptive bandwidth control. The improved PLL adds a fast start-up circuit and slope current control to a conventional charge-pump PLL. First, the fast start-up circuit pre-charges the loop filter quickly. Then, when the output pulse of the PFD exceeds a minimum width, the charge-pump current is increased linearly by the slope current control to ensure a shorter locking time and lower jitter. Additionally, temperature variation is attenuated by temperature compensation in the charge-pump current design. The proposed PLL has been fabricated in a DSP chip based on a 0.35 μm CMOS process. Compared with a classical PLL, the proposed PLL reduces the locking time by 60% with a low peak-to-peak jitter of 0.3% over a wide operating temperature range. (semiconductor integrated circuits)

  1. Dynamics of the Drosophila circadian clock: theoretical anti-jitter network and controlled chaos.

    Directory of Open Access Journals (Sweden)

    Hassan M Fathallah-Shaykh

    Full Text Available BACKGROUND: Electronic clocks exhibit undesirable jitter or time variations in periodic signals. The circadian clocks of humans, some animals, and plants consist of oscillating molecular networks with peak-to-peak time of approximately 24 hours. Clockwork orange (CWO is a transcriptional repressor of Drosophila direct target genes. METHODOLOGY/PRINCIPAL FINDINGS: Theory and data from a model of the Drosophila circadian clock support the idea that CWO controls anti-jitter negative circuits that stabilize peak-to-peak time in light-dark cycles (LD. The orbit is confined to chaotic attractors in both LD and dark cycles and is almost periodic in LD; furthermore, CWO diminishes the Euclidean dimension of the chaotic attractor in LD. Light resets the clock each day by restricting each molecular peak to the proximity of a prescribed time. CONCLUSIONS/SIGNIFICANCE: The theoretical results suggest that chaos plays a central role in the dynamics of the Drosophila circadian clock and that a single molecule, CWO, may sense jitter and repress it by its negative loops.

  2. Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via the delta operator approach. The delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.

  3. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...... distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more...

  4. Generating a Square Switching Window for Timing Jitter Tolerant 160 Gb/s Demultiplexing by the Optical Fourier Transform Technique

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Galili, Michael; Clausen, A. T:

    2006-01-01

    A square spectrum is optically Fourier transformed into a square pulse in the time domain. This is used to demultiplex a 160 Gb/s data signal with a significant increase in jitter tolerance to 2.6 ps.

  5. Influence of P300 latency jitter on event related potential-based brain-computer interface performance

    Science.gov (United States)

    Aricò, P.; Aloise, F.; Schettini, F.; Salinari, S.; Mattia, D.; Cincotti, F.

    2014-06-01

    Objective. Several ERP-based brain-computer interfaces (BCIs) that can be controlled even without eye movements (covert attention) have been recently proposed. However, when compared to similar systems based on overt attention, they displayed significantly lower accuracy. In the current interpretation, this is ascribed to the absence of the contribution of short-latency visual evoked potentials (VEPs) in the tasks performed in the covert attention modality. This study aims to investigate if this decrement (i) is fully explained by the lack of VEP contribution to the classification accuracy; (ii) correlates with lower temporal stability of the single-trial P300 potentials elicited in the covert attention modality. Approach. We evaluated the latency jitter of P300 evoked potentials in three BCI interfaces exploiting either overt or covert attention modalities in 20 healthy subjects. The effect of attention modality on the P300 jitter, and the relative contribution of VEPs and P300 jitter to the classification accuracy have been analyzed. Main results. The P300 jitter is higher when the BCI is controlled in covert attention. Classification accuracy negatively correlates with jitter. Even disregarding short-latency VEPs, overt-attention BCI yields better accuracy than covert. When the latency jitter is compensated offline, the difference between accuracies is not significant. Significance. The lower temporal stability of the P300 evoked potential generated during the tasks performed in covert attention modality should be regarded as the main contributing explanation of lower accuracy of covert-attention ERP-based BCIs.
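    The offline latency-jitter compensation mentioned above can be sketched as Woody-style template alignment: each single trial is shifted to the lag that maximizes its correlation with a template response, then averaged or classified at the aligned latency. This is a minimal illustration of the general technique, not the authors' exact offline procedure.

    ```python
    def best_lag(trial, template, max_lag):
        """Find the shift (in samples) that best aligns a single trial
        to a template, scanning a window of plausible P300 latencies."""
        def corr(lag):
            pairs = [(trial[i + lag], template[i])
                     for i in range(len(template))
                     if 0 <= i + lag < len(trial)]
            return sum(a * b for a, b in pairs)
        return max(range(-max_lag, max_lag + 1), key=corr)

    template = [0, 1, 3, 1, 0, 0, 0, 0]
    trial    = [0, 0, 0, 1, 3, 1, 0, 0]   # same waveform, delayed 2 samples
    lag = best_lag(trial, template, max_lag=3)
    assert lag == 2  # compensate by shifting the trial back two samples
    ```

    Compensating each trial by its estimated lag before classification is the offline operation that, per the abstract, removes most of the accuracy gap between covert- and overt-attention BCIs.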

  6. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  7. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes that often interfere across several branched areas. This reduces clarity for the operating and maintenance staff, while increasing the possibilities of error. The synthesis and organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only be fully and correctly exploited, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio to a definite efficiency. (orig.) [de

  8. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed, focusing on their general structure and on requirements specific to the calculation of radiative corrections. A detailed description of the system and its performance is presented, taking GRACE as a concrete example. (author)

  9. On transcending the impasse of respiratory motion correction applications in routine clinical imaging - a consideration of a fully automated data driven motion control framework

    International Nuclear Information System (INIS)

    Kesner, Adam L; Schleyer, Paul J; Büther, Florian; Walter, Martin A; Schäfers, Klaus P; Koo, Phillip J

    2014-01-01

    Positron emission tomography (PET) is increasingly used for the detection, characterization, and follow-up of tumors located in the thorax. However, patient respiratory motion presents a unique limitation that hinders the application of high-resolution PET technology for this type of imaging. Efforts to transcend this limitation have been underway for more than a decade, yet PET remains for practical considerations a modality vulnerable to motion-induced image degradation. Respiratory motion control is not employed in routine clinical operations. In this article, we take an opportunity to highlight some of the recent advancements in data-driven motion control strategies and how they may form an underpinning for what we are presenting as a fully automated data-driven motion control framework. This framework represents an alternative direction for future endeavors in motion control and can conceptually connect individual focused studies with a strategy for addressing big picture challenges and goals. The online version of this article (doi:10.1186/2197-7364-1-8) contains supplementary material, which is available to authorized users.

  10. High reliability low jitter 80 kV pulse generator

    Directory of Open Access Journals (Sweden)

    M. E. Savage

    2009-08-01

    Switching can be considered the essence of pulsed power. Time-accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system delivers a fast-rising high-voltage pulse into a low-impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω from a low-voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large and is usually accomplished with multiple stages, which can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low-jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes a 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high-voltage output, and a misfire probability of less than 10^{-4}. The design of the system and some key measurements will be shown in the paper. We will discuss the

  11. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  12. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India
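    The articles above concern Reed-Solomon codes over GF(2^8), whose full encoder/decoder is too long to sketch here. As a stand-in illustration of the same syndrome-decoding idea, here is the much simpler binary Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword:

```python
import numpy as np

# Systematic generator and parity-check matrices for Hamming(7,4)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.array(data4) @ G) % 2          # 4 data bits -> 7-bit codeword

def decode(code7):
    code7 = np.array(code7)
    s = (H @ code7) % 2                       # syndrome
    if s.any():                               # nonzero: locate the flipped bit
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        code7 = code7.copy()
        code7[err] ^= 1
    return list(code7[:4])                    # recovered data bits
```

    Each column of H is a distinct nonzero syndrome, so the syndrome directly addresses the erroneous position; Reed-Solomon generalizes this idea to symbol (byte) errors.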

  13. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr.Anant Kumar Tiwari,; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated project involves measuring the power factor of the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
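    The sizing of such a capacitor bank follows from standard power-factor arithmetic: the bank must supply reactive power Q_c = P(tan φ1 − tan φ2). The paper's microcontroller logic is not reproduced here; this is a minimal sketch with assumed example figures:

```python
import math

def correction_capacitance(p_watts, pf_actual, pf_target, v_rms, freq_hz=50.0):
    """Capacitance (farads) a shunt bank needs to raise the power factor:
    Q_c = P*(tan(phi1) - tan(phi2)), C = Q_c / (2*pi*f*V^2)."""
    phi1, phi2 = math.acos(pf_actual), math.acos(pf_target)
    q_c = p_watts * (math.tan(phi1) - math.tan(phi2))
    return q_c / (2.0 * math.pi * freq_hz * v_rms ** 2)

# e.g. a 10 kW load at PF 0.70 corrected to 0.95 on a 400 V, 50 Hz supply
c_farads = correction_capacitance(10e3, 0.70, 0.95, 400.0)   # ~138 uF
```

    An APFC controller effectively evaluates this and switches in the nearest combination of fixed capacitor steps.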

  14. Timing-jitter reduction in a dispersion-managed soliton system

    International Nuclear Information System (INIS)

    Mu, R.; Grigoryan, V.S.; Menyuk, C.R.; Golovchenko, E.A.; Pilipetskii, A.N.

    1998-01-01

    We found by using Monte Carlo simulations that the timing jitter in a dispersion-managed soliton system decreases as the strength of the dispersion management and hence the ratio of the pulse energy to the pulse bandwidth increases. The results are in qualitative but not quantitative agreement with earlier predictions that the decrease is inversely proportional to the square root of the pulse energy. Using an improved semi-analytical theory, we obtained quantitative agreement with the simulations. copyright 1998 Optical Society of America

  15. Low jitter spark gap switch for repetitively pulsed parallel capacitor banks

    International Nuclear Information System (INIS)

    Rohwein, G.J.

    1980-01-01

    A two-section air-insulated spark gap has been developed for switching multi-kilojoule plus-minus charged parallel capacitor banks which operate continuously at pulse rates up to 20 pps. The switch operates with less than 2 ns jitter, recovers its dielectric strength within 2 to 5 ms, and has not shown degraded performance in sequential test runs totaling over a million shots. Its estimated life with copper electrodes is > 10^7 shots. All preliminary tests indicate that the switch is suitable for continuously running multi-kilojoule systems operating to at least 20 pps

  16. Secondary wavelength stabilization of unbalanced Michelson interferometers for the generation of low-jitter pulse trains.

    Science.gov (United States)

    Shalloo, R J; Corner, L

    2016-09-01

    We present a double unbalanced Michelson interferometer producing up to four output pulses from a single input pulse. The interferometer is stabilized with the Hänsch-Couillaud method using an auxiliary low power continuous wave laser injected into the interferometer, allowing the stabilization of the temporal jitter of the output pulses to 0.02 fs. Such stabilized pulse trains would be suitable for driving multi-pulse laser wakefield accelerators, and the technique could be extended to include amplification in the arms of the interferometer.

  17. Explaining the morphology of supernova remnant (SNR) 1987A with the jittering jets explosion mechanism

    Science.gov (United States)

    Bear, Ealeal; Soker, Noam

    2018-04-01

    We find that the remnant of supernova (SN) 1987A shares some morphological features with four supernova remnants (SNRs) that have signatures of shaping by jets, and from that we strengthen the claim that jets played a crucial role in the explosion of SN 1987A. Some of the morphological features appear also in planetary nebulae (PNe) where jets are observed. The clumpy ejecta bring us to support the claim that the jittering jets explosion mechanism can account for the structure of the remnant of SN 1987A, i.e., SNR 1987A. We conduct a preliminary attempt to quantify the fluctuations in the angular momentum of the mass that is accreted on to the newly born neutron star via an accretion disk or belt. The accretion disk/belt launches the jets that explode core collapse supernovae (CCSNe). The relaxation time of the accretion disk/belt is comparable to the duration of a typical jet-launching episode in the jittering jets explosion mechanism, and hence the disk/belt has no time to relax. We suggest that this might explain two unequal opposite jets that later lead to unequal sides of the elongated structures in some SNRs of CCSNe. We reiterate our earlier call for a paradigm shift from neutrino-driven explosion to a jet-driven explosion of CCSNe.

  18. Effect of 3 Key Factors on Average End to End Delay and Jitter in MANET

    Directory of Open Access Journals (Sweden)

    Saqib Hakak

    2015-01-01

    A mobile ad-hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected by wireless links, where each node or mobile device is free to move in any desired direction, so the links keep changing from one node to another. In such a network, the mobile nodes are equipped with CSMA/CA (carrier sense multiple access with collision avoidance) transceivers and communicate with each other via radio. In MANETs, routing is considered one of the most difficult and challenging tasks. Because of this, most studies on MANETs have focused on comparing protocols under varying network conditions. But to the best of our knowledge no one has studied the effect of other factors on network performance indicators like throughput, jitter and so on, revealing how much influence a particular factor or group of factors has on each indicator. Thus, in this study the effects of three key factors, i.e. routing protocol, packet size and DSSS rate, were evaluated on key network performance metrics, i.e. average end-to-end delay and average jitter, as these parameters are crucial for network performance and directly affect the buffering requirements for all video devices and downstream networks.
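    The abstract does not spell out its metric definitions; common choices are the mean one-way delay and the RFC 3550 (RTP) interarrival jitter estimator, sketched here from matched packet send/receive timestamps:

```python
def average_delay(send_times, recv_times):
    """Mean one-way delay from matched send/receive timestamps."""
    return sum(r - s for r, s in zip(recv_times, send_times)) / len(send_times)

def rfc3550_jitter(send_times, recv_times):
    """Smoothed interarrival jitter as defined in RFC 3550 (RTP):
    D(i-1,i) = (R_i - R_{i-1}) - (S_i - S_{i-1}); J += (|D| - J) / 16."""
    j = 0.0
    for i in range(1, len(send_times)):
        d = ((recv_times[i] - recv_times[i - 1])
             - (send_times[i] - send_times[i - 1]))
        j += (abs(d) - j) / 16.0
    return j
```

    A perfectly constant delay yields zero jitter regardless of its magnitude, which is why delay and jitter are reported as separate metrics.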

  19. REGULAR PATTERN MINING (WITH JITTER ON WEIGHTED-DIRECTED DYNAMIC GRAPHS

    Directory of Open Access Journals (Sweden)

    A. GUPTA

    2017-02-01

    Real-world graphs are mostly dynamic in nature, exhibiting time-varying behaviour in the structure of the graph, the weights on the edges, and the direction of the edges. Mining regular patterns in the occurrence of edge parameters gives an insight into consumer trends over time in e-commerce co-purchasing networks. But such patterns need not necessarily be precise, as in the case when some product goes out of stock or a group of customers becomes unavailable for a short period of time. Ignoring them may lead to loss of useful information, and thus taking jitter into account becomes vital. To the best of our knowledge, no work has yet been reported that extracts regular patterns considering a jitter of length greater than unity. In this article, we propose a novel method to find quasi-regular patterns on the weight and direction sequences of such graphs. The method involves analysing the dynamic network considering the inconsistencies in the occurrence of edges. It utilizes the relation between the occurrence sequence and the corresponding weight and direction sequences to speed up this process. Further, these patterns are used to determine the most central nodes (such as the most profit-yielding products). To accomplish this we introduce the concepts of dynamic closeness centrality and dynamic betweenness centrality. Experiments on the Enron e-mail dataset and a synthetic dynamic network show that the presented approach is efficient, so it can be used to find patterns in large-scale networks consisting of many timestamps.
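    The paper's method operates on weight and direction sequences of a dynamic graph and is more involved than can be shown here. As a minimal illustration of the jitter-tolerant notion of regularity, the sketch below tests whether sorted event timestamps recur with a given period up to a jitter bound (the semantics are assumed, not taken from the paper):

```python
def is_quasi_regular(timestamps, period, jitter):
    """True if consecutive gaps in the sorted event timestamps equal
    `period` to within +/- `jitter` time units (assumed semantics)."""
    return all(abs((b - a) - period) <= jitter
               for a, b in zip(timestamps, timestamps[1:]))

# Gaps 5, 6, 4, 5 are all within +/-1 of the period 5
ok = is_quasi_regular([0, 5, 11, 15, 20], period=5, jitter=1)
```

    With jitter = 0 this reduces to strict regularity, which is the unity-jitter special case the authors say earlier work was limited to.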

  20. A review on high-resolution CMOS delay lines: towards sub-picosecond jitter performance.

    Science.gov (United States)

    Abdulrazzaq, Bilal I; Abdul Halin, Izhal; Kawahito, Shoji; Sidek, Roslina M; Shafie, Suhaidi; Yunus, Nurul Amziah Md

    2016-01-01

    A review of CMOS delay lines with a focus on the most frequently used techniques for high-resolution delay steps is presented. The primary types, specifications, delay circuits, and operating principles are presented. The delay circuits reported in this paper are used for delaying digital inputs and clock signals. The most common analog and digitally-controlled delay element topologies are presented, focusing on the main delay-tuning strategies. IC variables, namely process, supply voltage, temperature, and noise sources, that affect delay resolution through timing jitter are discussed. The design specifications of these delay elements are also discussed and compared for the common delay line circuits. As a result, the main findings of this paper highlight and discuss the following: the most efficient high-resolution delay line techniques; the trade-off between CMOS delay lines designed using either analog or digitally-controlled delay elements; the trade-off between delay resolution and delay range, and the proposed solutions for this challenge; and how CMOS technology scaling can affect the performance of CMOS delay lines. Moreover, the current trends and efforts used to generate output delayed signals with low jitter in the sub-picosecond range are presented.

  1. A novel fair active queue management algorithm based on traffic delay jitter

    Science.gov (United States)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quantity of data traffic delivered in the network, a congestion control strategy is adopted. Based on a study of many active queue management (AQM) algorithms, this paper proposes a novel AQM algorithm named JFED. JFED can stabilize the queue length at a desirable level by adjusting the output traffic rate and adopting a reasonable calculation of the packet drop probability based on the buffer queue length and traffic jitter, and it supports bursty packet traffic by way of the packet delay jitter so that it can better carry media data flows. JFED imposes an effective penalty upon non-responsive flows with a fully stateless method. To verify the performance of JFED, it is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.
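    JFED's exact drop-probability formula is not given in the abstract; it builds on the classic RED calculation, in which the drop probability ramps linearly between two queue-length thresholds. A minimal sketch of that baseline (JFED additionally folds traffic jitter into the calculation):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED drop probability from the (EWMA-averaged) queue length:
    0 below min_th, 1 at/above max_th, linear ramp up to max_p in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

    Operating on an averaged rather than instantaneous queue length is what lets RED-family schemes absorb short bursts without dropping them.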

  2. Linear Polarization, Circular Polarization, and Depolarization of Gamma-ray Bursts: A Simple Case of Jitter Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Mao, Jirong; Wang, Jiancheng, E-mail: jirongmao@mail.ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, 650011 Kunming, Yunnan Province (China)

    2017-04-01

    Linear and circular polarizations of gamma-ray bursts (GRBs) have been detected recently. We adopt a simplified model to investigate GRB polarization characteristics in this paper. A compressed two-dimensional turbulent slab containing stochastic magnetic fields is considered, and jitter radiation can produce the linear polarization under this special magnetic field topology. Turbulent Faraday rotation measure (RM) of this slab makes strong wavelength-dependent depolarization. The jitter photons can also scatter with those magnetic clumps inside the turbulent slab, and a nonzero variance of the Stokes parameter V can be generated. Furthermore, the linearly and circularly polarized photons in the optical and radio bands may suffer heavy absorptions from the slab. Thus we consider the polarized jitter radiation transfer processes. Finally, we compare our model results with the optical detections of GRB 091018, GRB 121024A, and GRB 131030A. We suggest simultaneous observations of GRB multi-wavelength polarization in the future.

  3. The fast correction coil feedback control system

    International Nuclear Information System (INIS)

    Coffield, F.; Caporaso, G.; Zentler, J.M.

    1989-01-01

    A model-based feedback control system has been developed to correct beam displacement errors in the Advanced Test Accelerator (ATA) electron beam accelerator. The feedback control system drives an X/Y dipole steering system that has a 40-MHz bandwidth and can produce ±300-Gauss-cm dipole fields. A simulator was used to develop the control algorithm and to quantify the expected performance in the presence of beam position measurement noise and accelerator timing jitter. The major problem to date has been protecting the amplifiers from the voltage that is inductively coupled to the steering bars by the beam. 3 refs., 8 figs
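    The abstract does not give the control law. As a generic, hypothetical illustration of interpulse feedback on a measured beam displacement (a 1-D model with an assumed unit kick-to-displacement response, not the ATA's actual model-based controller), an integral correction removes a static offset geometrically:

```python
def steer(displacements, gain=0.5):
    """Interpulse integral feedback: after each macropulse, the dipole kick
    is trimmed by a fraction of the measured displacement (hypothetical
    1-D model with unit kick-to-displacement response)."""
    kick, seen = 0.0, []
    for d in displacements:
        measured = d + kick          # displacement after the current kick
        seen.append(measured)
        kick -= gain * measured      # update the kick for the next pulse
    return seen

# A constant 1-unit offset decays geometrically: 1, 0.5, 0.25, ...
residuals = steer([1.0] * 8)
```

    With gain 0.5 the residual halves every pulse; a model-based controller like the one described replaces this scalar gain with the inverse of the measured beam response.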

  4. Building Automation Systems.

    Science.gov (United States)

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  5. Note: Design and implementation of a home-built imaging system with low jitter for cold atom experiments

    Energy Technology Data Exchange (ETDEWEB)

    Hachtel, A. J.; Gillette, M. C.; Clements, E. R.; Zhong, S.; Weeks, M. R.; Bali, S., E-mail: balis@miamioh.edu [Department of Physics, Miami University, Oxford, Ohio 45056-1866 (United States)

    2016-05-15

    A novel home-built system for imaging cold atom samples is presented using a readily available astronomy camera which has the requisite sensitivity but no timing-control. We integrate the camera with LabVIEW achieving fast, low-jitter imaging with a convenient user-defined interface. We show that our system takes precisely timed millisecond exposures and offers significant improvements in terms of system jitter and readout time over previously reported home-built systems. Our system rivals current commercial “black box” systems in performance and user-friendliness.

  6. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system to detect the presence of unexpected behavior.

  7. Detection of myocardial ischemia by automated, motion-corrected, color-encoded perfusion maps compared with visual analysis of adenosine stress cardiovascular magnetic resonance imaging at 3 T: a pilot study.

    Science.gov (United States)

    Doesch, Christina; Papavassiliu, Theano; Michaely, Henrik J; Attenberger, Ulrike I; Glielmi, Christopher; Süselbeck, Tim; Fink, Christian; Borggrefe, Martin; Schoenberg, Stefan O

    2013-09-01

    The purpose of this study was to compare automated, motion-corrected, color-encoded (AMC) perfusion maps with qualitative visual analysis of adenosine stress cardiovascular magnetic resonance imaging for the detection of flow-limiting stenoses. Myocardial perfusion measurements applying the standard adenosine stress imaging protocol and a saturation-recovery temporal generalized autocalibrating partially parallel acquisition (t-GRAPPA) turbo fast low angle shot (Turbo FLASH) magnetic resonance imaging sequence were performed in 25 patients using a 3.0-T MAGNETOM Skyra (Siemens Healthcare Sector, Erlangen, Germany). Perfusion studies were analyzed using AMC perfusion maps and qualitative visual analysis. Angiographically detected coronary artery (CA) stenoses greater than 75%, or of 50% or more with a myocardial perfusion reserve index less than 1.5, were considered hemodynamically relevant. The diagnostic performance and time requirement for both methods were compared. Interobserver and intraobserver reliability were also assessed. A total of 29 CA stenoses were included in the analysis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for detection of ischemia on a per-patient basis were comparable using the AMC perfusion maps compared to visual analysis. On a per-CA-territory basis, the attribution of an ischemia to the respective vessel was facilitated using the AMC perfusion maps. Interobserver and intraobserver reliability were better for the AMC perfusion maps (concordance correlation coefficient, 0.94 and 0.93, respectively) compared to visual analysis (concordance correlation coefficient, 0.73 and 0.79, respectively). In addition, in comparison to visual analysis, the AMC perfusion maps were able to significantly reduce analysis time from 7.7 (3.1) to 3.2 (1.9) minutes (P < 0.0001). The AMC perfusion maps yielded a diagnostic performance on a per-patient and on a per-CA-territory basis comparable with the visual analysis.
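    The reliability figures quoted above use Lin's concordance correlation coefficient, which can be computed directly from its definition, rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]   # population covariance
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

    Unlike Pearson's r, a systematic offset between the two raters lowers the CCC even when their scores are perfectly correlated.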

  8. Jitter reduction of a reaction wheel by management of angular momentum using magnetic torquers in nano- and micro-satellites

    Science.gov (United States)

    Inamori, Takaya; Wang, Jihe; Saisutjarit, Phongsatorn; Nakasuka, Shinichi

    2013-07-01

    Nowadays, nano- and micro-satellites, which are smaller than conventional large satellites, provide access to space to many satellite developers, and they are attracting interest as an application of space development because development is possible over a shorter time period at a lower cost. In most of these nano- and micro-satellite missions, the satellites generally must meet strict attitude requirements for obtaining scientific data under strict constraints on power consumption, space, and weight. In many satellite missions, the jitter of a reaction wheel degrades the performance of the mission detectors and attitude sensors; therefore, jitter should be controlled or isolated to reduce its effect on sensor devices. In conventional standard-sized satellites, tip-tilt mirrors (TTMs) and isolators are used for controlling or isolating the vibrations from reaction wheels; however, it is difficult to use these devices for nano- and micro-satellite missions under the strict power, space, and mass constraints. In this research, the jitter of reaction wheels is reduced by using accurate sensors, small reaction wheels, and slow reaction wheel rotation frequencies instead of TTMs and isolators. The objective of a reaction wheel in many satellite missions is the management of the satellite's angular momentum, which increases because of attitude disturbances. If the magnitude of the disturbance is reduced in orbit or on the ground, the magnitude of the angular momentum that the reaction wheels gain from attitude disturbances in orbit becomes smaller; therefore, satellites can stabilize their attitude using only smaller reaction wheels or slower rotation speeds, which cause relatively smaller vibrations. In nano- and micro-satellite missions, the dominant attitude disturbance is a magnetic torque, which can be cancelled by using magnetic actuators. With the magnetic compensation, the satellite reduces the angular momentum that the reaction wheels gain, and therefore, satellites do
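    The magnetic-compensation idea above can be sketched as follows: a magnetorquer dipole m produces torque m × B, and choosing m = (τ × B)/|B|² produces −τ⊥, cancelling the component of the disturbance torque τ perpendicular to the local geomagnetic field B (the component parallel to B cannot be produced magnetically). A sketch with made-up numbers:

```python
import numpy as np

def compensating_dipole(disturbance_torque, b_field):
    """Magnetorquer dipole m (A·m^2) with m = (tau x B)/|B|^2, whose torque
    m x B equals minus the component of `disturbance_torque` perpendicular
    to `b_field` (tesla). The parallel component is uncontrollable."""
    b2 = float(np.dot(b_field, b_field))
    return np.cross(disturbance_torque, b_field) / b2

# Made-up numbers: 10 uN.m disturbance, 30 uT field, torque perpendicular to B
tau = np.array([1.0e-5, 0.0, 0.0])
B = np.array([0.0, 0.0, 3.0e-5])
m = compensating_dipole(tau, B)      # the torque m x B then cancels tau
```

    Continuously cancelling the secular disturbance torque this way keeps the wheels from accumulating momentum, which is what permits smaller or slower (and hence lower-jitter) wheels.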

  9. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  10. Femtosecond timing-jitter between photo-cathode laser and ultra-short electron bunches by means of hybrid compression

    CERN Document Server

    Pompili, Riccardo; Bellaveglia, M; Biagioni, A; Castorina, G; Chiadroni, E; Cianchi, A; Croia, M; Di Giovenale, D; Ferrario, M; Filippi, F; Gallo, A; Gatti, G; Giorgianni, F; Giribono, A; Li, W; Lupi, S; Mostacci, A; Petrarca, M; Piersanti, L; Di Pirro, G; Romeo, S; Scifo, J; Shpakov, V; Vaccarezza, C; Villa, F

    2017-01-01

    The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC_LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations.

  11. Femtosecond timing-jitter between photo-cathode laser and ultra-short electron bunches by means of hybrid compression

    International Nuclear Information System (INIS)

    Pompili, R; Anania, M P; Bellaveglia, M; Biagioni, A; Castorina, G; Chiadroni, E; Croia, M; Giovenale, D Di; Ferrario, M; Gallo, A; Gatti, G; Cianchi, A; Filippi, F; Giorgianni, F; Giribono, A; Lupi, S; Mostacci, A; Petrarca, M; Piersanti, L; Li, W

    2016-01-01

    The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC-LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations. (paper)

  12. A low spur, low jitter 10-GHz phase-locked loop in 0.13-μm CMOS technology

    International Nuclear Information System (INIS)

    Mei Niansong; Sun Yu; Lu Bo; Pan Yaohua; Huang Yumei; Hong Zhiliang

    2011-01-01

    This paper presents a 10-GHz low-spur and low-jitter phase-locked loop (PLL). An improved low-phase-noise VCO and a dynamic phase frequency detector with a short delay reset time are employed to reduce the noise of the PLL. We also discuss the methodology to optimize the high-frequency prescaler's noise and the charge pump's current mismatch. The chip was fabricated in a SMIC 0.13-μm RF CMOS process with a 1.2-V power supply. The measured integrated RMS jitter is 757 fs (1 kHz to 10 MHz); the phase noise is -89 and -118.1 dBc/Hz at 10 kHz and 1 MHz frequency offset, respectively; and the reference frequency spur is below -77 dBc. The chip size is 0.32 mm^2 and the power consumption is 30.6 mW. (semiconductor integrated circuits)
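    Integrated RMS jitter figures such as the 757 fs quoted above come from integrating the single-sideband phase-noise profile L(f) over the offset band: sigma_t = sqrt(2 * integral of L(f) df) / (2*pi*f_c). A sketch of that standard conversion (simple trapezoidal integration of a piecewise-linear profile in linear units; the example profile below is made up, not the paper's):

```python
import math

def rms_jitter(freqs_hz, l_dbc_hz, f_carrier_hz):
    """RMS jitter from an SSB phase-noise profile sampled at offset points
    (converted from dBc/Hz to linear units, trapezoidal integration):
    sigma_t = sqrt(2 * area) / (2*pi*f_carrier)."""
    lin = [10.0 ** (l / 10.0) for l in l_dbc_hz]
    area = sum((lin[i] + lin[i + 1]) / 2.0 * (freqs_hz[i + 1] - freqs_hz[i])
               for i in range(len(freqs_hz) - 1))
    return math.sqrt(2.0 * area) / (2.0 * math.pi * f_carrier_hz)

# Flat -100 dBc/Hz from 1 kHz to 10 MHz around a 10 GHz carrier
jitter_s = rms_jitter([1e3, 1e7], [-100.0, -100.0], 1e10)
```

    Real profiles are given at log-spaced offsets, so production tools integrate segment-by-segment on a log axis; the trapezoid in linear frequency is only a sketch.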

  13. Transmission line transformer for reliable and low-jitter triggering of a railgap switch.

    Science.gov (United States)

    Verma, Rishi; Mishra, Ekansh; Sagar, Karuna; Meena, Manraj; Shyam, Anurag

    2014-09-01

    The performance of a railgap switch critically relies upon multichannel breakdown between the extended electrodes (rails) in order to ensure distributed current transfer along the electrode length and to minimize the switch inductance. The initiation of several simultaneous arc channels along the switch length depends on the gap triggering technique and on the rate at which the electric field changes within the gap. This paper presents the design, construction, and output characteristics of a coaxial-cable-based three-stage transmission line transformer (TLT) that is capable of initiating multichannel breakdown in a high-voltage, low-inductance railgap switch. In each stage, three identical lengths of URM67 coaxial cable have been used in parallel, wound in separate cassettes to enhance the isolation of the output of the transformer from the input. The cascaded output impedance of the TLT is ~50 Ω. Along with multichannel formation over the complete length of the electrode rails, significant reduction in jitter (≤2 ns) and conduction delay (≤60 ns) has been observed through the large-amplitude (~80 kV), high-dV/dt (~6 kV/ns) pulse produced by the indigenously developed TLT-based trigger generator. The superior performance of the TLT over a conventional pulse transformer for railgap triggering applications has been compared and demonstrated experimentally.
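    The quoted ~50 Ω cascaded impedance is consistent with the usual TLT arithmetic: the stages are driven in parallel at the input and add in series at the output, so Z_out = n × Z_stage, each stage here being three cables in parallel (assuming the cables are 50 Ω). A one-line sketch:

```python
def tlt_output_impedance(n_stages, cables_per_stage, z_cable=50.0):
    """Output impedance of a transmission line transformer: per-stage
    impedance z_cable / cables_per_stage, with the stages adding in
    series at the output. The ideal voltage gain is n_stages."""
    return n_stages * z_cable / cables_per_stage

# Three stages of three parallel 50-ohm cables: 3 * (50/3) = 50 ohm
z_out = tlt_output_impedance(3, 3)
```

    The same series-adding behavior gives the ideal threefold voltage step-up that helps reach the ~80 kV trigger amplitude.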

  14. Designing and commissioning of a setup for timing-jitter measurements using electro-optic temporal decoding

    International Nuclear Information System (INIS)

    Borissenko, Dennis

    2016-12-01

    Precise measurements of the arrival time jitter between the ionization laser, used to create the plasma, and the driver beam in the PWFA setup of the FLASHForward project are of high interest for the operation and optimization of the experiment. In this thesis, an electro-optic temporal decoding (EOTD) setup with a near-crossed-polarizer detection scheme is presented, which can measure the timing-jitter to an accuracy of around 30 fs. This result was obtained during several measurements conducted at the coherent transition radiation beamline CTR141 at FLASH, using a 100 μm thick GaP crystal and coherent diffraction/transition radiation generated from the FLASH1 electron bunches. Measurements were performed during long and short electron bunch operation at FLASH, showing that the best results are obtained with CDR from long electron bunches. Utilizing CTR led to a higher EO signal and "over-compensation" of the SHG background level during the measurement, which resulted in a double-peak structure of the observed THz pulses. To resolve the single-cycle nature of these THz pulses, the SHG background had to be adjusted properly. Furthermore, EOTD measurements during a short bunch operation run at FLASH exhibited strong oscillations in the EO signal, which were suspected to come either from internal lattice resonances of the EO crystal, internal reflections, or excitation of water vapor in the humid air in the laboratory. The oscillations spoiled the observed EOTD trace, leading to no sensible measurements of the arrival time jitter during this short bunch operation. To evaluate the capabilities of the setup for monitoring the timing jitter of short PWFA-accelerated electron bunches or very short driver bunches at FLASHForward, further investigations of the observed oscillations in the EOTD traces have to be performed during short bunch operation at FLASH with different crystals and under vacuum conditions, to understand the oscillations of the EO signal better.

  15. Designing and commissioning of a setup for timing-jitter measurements using electro-optic temporal decoding

    Energy Technology Data Exchange (ETDEWEB)

    Borissenko, Dennis

    2016-12-15

    Precise measurements of the arrival time jitter between the ionization laser, used to create the plasma, and the driver beam in the PWFA setup of the FLASHForward project are of high interest for the operation and optimization of the experiment. In this thesis, an electro-optic temporal decoding (EOTD) setup with near crossed polarizer detection scheme is presented, which can measure the timing jitter to an accuracy of around 30 fs. This result was obtained during several measurements conducted at the coherent transition radiation beamline CTR141 at FLASH, using a 100 μm thick GaP crystal and coherent diffraction/transition radiation, generated from the FLASH1 electron bunches. Measurements were performed during long and short electron bunch operation at FLASH, showing that best results are obtained with CDR from long electron bunches. Utilizing CTR led to a higher EO signal and "over-compensation" of the SHG background level during the measurement, which resulted in a double-peak structure of the observed THz pulses. To resolve the single-cycle nature of these THz pulses, the SHG background had to be adjusted properly. Furthermore, EOTD measurements during a short bunch operation run at FLASH exhibited strong oscillations in the EO signal, which were suspected to come either from internal lattice resonances of the EO crystal or internal reflections, or excitation of water vapor in the humid air in the laboratory. The oscillations spoiled the observed EOTD trace leading to no sensible measurements of the arrival time jitter during this short bunch operation. To evaluate the capabilities of the setup for monitoring the timing jitter of short PWFA accelerated electron bunches or very short driver bunches at FLASHForward, further investigations on the observed oscillations in the EOTD traces have to be performed during short bunch operation at FLASH with different crystals and under vacuum conditions, to understand the oscillations of the EO signal better.

  16. Jitter Studies for a 2.4 GeV Light Source Accelerator Using LiTrack

    International Nuclear Information System (INIS)

    Penn, Gregory E.

    2010-01-01

    Electron beam quality is an important factor in the performance of a free electron laser (FEL). Parameters of particular interest are the electron beam energy, slice emittance and energy spread, peak current, and energy chirp. Jitter in average energy is typically many times the slice energy spread. A seeded FEL is sensitive not only to these local properties but also to factors such as shot-to-shot consistency and the uniformity of the energy and current profiles across the bunch. The timing and bunch length jitter should be controlled to maximize the interval of time over which the electron beam can be reliably seeded by a laser to produce good output in the FEL. LiTrack, a one-dimensional tracking code which includes the effect of longitudinal wakefields, is used to study the sensitivity of the accelerator portion of a 2.4 GeV FEL to sources of variability such as the radio frequency (RF) cavities, chicanes, and the timing and efficiency of electron production at the photocathode. The main contributors to jitter in the resulting electron beam are identified and quantified for various figures of merit.
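    A sensitivity study of this kind can be sketched as a Monte Carlo over a linearized one-dimensional model: RF phase and amplitude errors perturb the beam energy, and a chicane's momentum compaction R56 converts the energy error into an arrival-time error. The sketch below is an illustrative toy, not LiTrack itself, and all beam and RF numbers in it are assumptions.

```python
import math
import random

def jitter_study(n_trials=20000, seed=1):
    """Toy 1-D jitter Monte Carlo (illustrative; not LiTrack).
    RF phase/amplitude errors change the final beam energy; a chicane with
    momentum compaction R56 turns relative energy error into timing error."""
    random.seed(seed)
    E0, V = 2.4e9, 2.5e9              # injection energy and RF voltage [eV] (assumed)
    phi0 = math.radians(-20.0)        # off-crest RF phase for chirping (assumed)
    R56 = -0.05                       # chicane momentum compaction [m] (assumed)
    sig_phi = math.radians(0.1)       # 0.1 deg rms RF phase jitter (assumed)
    sig_amp = 1e-3                    # 0.1% rms RF amplitude jitter (assumed)
    c = 299792458.0
    E_ref = E0 + V * math.cos(phi0)   # design energy of the ideal shot
    d_energy, d_time = [], []
    for _ in range(n_trials):
        phi = phi0 + random.gauss(0.0, sig_phi)
        amp = 1.0 + random.gauss(0.0, sig_amp)
        delta = (E0 + V * amp * math.cos(phi) - E_ref) / E_ref  # relative energy error
        d_energy.append(delta)
        d_time.append(R56 * delta / c)  # path-length change -> arrival-time error [s]
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    return rms(d_energy), rms(d_time)
```

Scanning the error amplitudes one source at a time (phase, amplitude, timing at the photocathode) identifies the dominant contributor, which is how a per-source jitter budget of the kind described above is typically quantified.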

  17. Optical beam transport to a remote location for low jitter pump-probe experiments with a free electron laser

    Directory of Open Access Journals (Sweden)

    P. Cinquegrana

    2014-04-01

    In this paper we propose a scheme that allows a strong reduction of the timing jitter between the pulses of a free electron laser (FEL) and external laser pulses delivered simultaneously at the FEL experimental stations for pump-probe-type experiments. The technique, applicable to all seeding-based FEL schemes, relies on the free-space optical transport of a portion of the seed laser pulse from its optical table to the experimental stations. The results presented here demonstrate that a carefully designed laser beam transport, which also incorporates transverse beam position stabilization, keeps the timing fluctuations added by as much as 150 m of free-space propagation and a number of beam-folding mirrors to less than 4 femtoseconds rms. By its nature our scheme removes the major common timing jitter sources, so the overall jitter in pump-probe measurements done in this way will be below 10 fs (with margin to be lowered below 5 fs), much better than the best previously reported results of 33 fs rms.

  18. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  19. An innovative scintillation process for correcting, cooling, and reducing the randomness of waveforms

    International Nuclear Information System (INIS)

    Shen, J.

    1991-01-01

    Research activities were concentrated on an innovative scintillation technique for high-energy collider detection. Heretofore, scintillation waveform data of high-energy physics events have been problematically random, presenting a bottleneck in data flow for the next generation of detectors at proton colliders such as the SSC or LHC. The prevailing problems to resolve were: (1) additional time walk and jitter resulting from the random hit positions of particles, (2) increased walk and jitter caused by dispersion in scintillation photon propagation, and (3) quantum fluctuations of luminescence. These became manageable once the different aspects of randomness had been clarified in greater detail; for this purpose, the three were termed pseudo-randomness, quasi-randomness, and real randomness, respectively. A unique scintillation counter incorporating long scintillators with light guides, a drift chamber, and fast discriminators plus integrators was employed to resolve problem (1), correcting time walk and reducing the additional jitter by establishing an analytical waveform description V(t,z) for a measured position z. Problem (2) was addressed by reducing jitter through compression of V(t,z) with a nonlinear medium, called cooling the scintillation. For problem (3), orienting molecules and polarizing the scintillation through the use of intense magnetic technology, called stabilizing the waveform, was proposed.

  20. Advanced health monitor for automated driving functions

    OpenAIRE

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  1. Advanced health monitor for automated driving functions

    NARCIS (Netherlands)

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  2. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on a Distribution Automation (DA) system, which enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements.

  3. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  4. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  5. Automated drawing generation system

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Kawahata, Junichi; Yoshida, Naoto; Ono, Satoru

    1991-01-01

    Since automated CAD drawing generation systems still required human intervention, improvements were focused on the interactive processing section (data input and correction operations), which necessitated a vast amount of work. As a result, human intervention was eliminated, achieving the original objective of a computerized system; this is the first step towards complete automation. The effects of developing and commercializing the system are as follows. (1) The interactive processing time required for generating drawings was improved: introducing the CAD system reduced the time needed to generate drawings. (2) Differences in skill between workers preparing drawings have been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  6. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-controlled sample workflow (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed for this purpose in cooperation with Roche Diagnostics (Barcelona, Spain). This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: a workload increase (64%) with a reduction in cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7% to 0.4% of tubes), and better total response time for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and with a significant headcount reduction (15% in our lab).

  7. A low-jitter RF PLL frequency synthesizer with high-speed mixed-signal down-scaling circuits

    International Nuclear Information System (INIS)

    Tang Lu; Wang Zhigong; Xue Hong; He Xiaohu; Xu Yong; Sun Ling

    2010-01-01

    A low-jitter RF phase-locked loop (PLL) frequency synthesizer with high-speed mixed-signal down-scaling circuits is proposed. Several techniques are introduced to reduce the design complexity and improve the performance of the mixed-signal down-scaling circuit in the PLL. An improved D-latch is proposed to increase the speed and the driving capability of the dual-modulus prescaler (DMP) in the down-scaling circuit. By integrating the D-latch with the 'OR' logic needed for dual-modulus operation, the delays associated with both the 'OR' and D-flip-flop (DFF) operations are reduced, and the complexity of the circuit is also decreased. The programmable frequency divider of the down-scaling circuit is realized with a new method based on deep-submicron CMOS standard cells and a more accurate wire-load model. The charge pump in the PLL is realized with a novel architecture that improves current matching so as to reduce the jitter of the system. The proposed RF PLL frequency synthesizer is realized in a TSMC 0.18-μm CMOS process. The measured phase noise of the synthesizer output at 100 kHz offset from the center frequency is only -101.52 dBc/Hz. The circuit exhibits a low RMS jitter of 3.3 ps. The power consumption is also as low as 36 mW at a 1.8 V supply. (semiconductor integrated circuits)
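    For reference, RMS jitter is obtained from a single-sideband phase-noise profile L(f) by integrating over the offset band: sigma_t = sqrt(2 * integral of 10^(L(f)/10) df) / (2*pi*f0). A minimal sketch follows; the profile below is invented for illustration (only the -101.5 dBc/Hz value at 100 kHz offset echoes this record, and the carrier frequency is assumed).

```python
import math

def rms_jitter_from_phase_noise(offsets_hz, L_dbc_hz, f0_hz):
    """RMS timing jitter from SSB phase noise L(f) in dBc/Hz:
    sigma_t = sqrt(2 * integral of 10**(L/10) df) / (2*pi*f0),
    using trapezoidal integration over the measured offsets."""
    lin = [10.0 ** (v / 10.0) for v in L_dbc_hz]
    area = 0.0
    for i in range(len(offsets_hz) - 1):
        df = offsets_hz[i + 1] - offsets_hz[i]
        area += 0.5 * (lin[i] + lin[i + 1]) * df   # rad^2 accumulated in this band
    return math.sqrt(2.0 * area) / (2.0 * math.pi * f0_hz)

# Invented profile for an assumed 2.4 GHz carrier; -101.5 dBc/Hz at 100 kHz offset:
offsets = [1e3, 1e4, 1e5, 1e6, 1e7]
noise = [-80.0, -92.0, -101.5, -115.0, -130.0]
sigma = rms_jitter_from_phase_noise(offsets, noise, 2.4e9)  # on the order of 1 ps
```

Feeding in an actual measured curve instead of the invented points yields the integrated RMS jitter figure that synthesizer datasheets quote.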

  8. Analysis of Salient Feature Jitter in the Cochlea for Objective Prediction of Temporally Localized Distortion in Synthesized Speech

    Directory of Open Access Journals (Sweden)

    Wenliang Lu

    2009-01-01

    Temporally localized distortions account for the highest variance in subjective evaluation of coded speech signals (Sen, 2001; Hall, 2001). The ability to discern and decompose perceptually relevant temporally localized coding noise from other types of distortions is of theoretical importance as well as a valuable tool for deploying and designing speech synthesis systems. The work described here uses a physiologically motivated cochlear model to provide a tractable analysis of salient feature trajectories as processed by the cochlea. Subsequent statistical analysis shows simple relationships between the jitter of these trajectories and temporal attributes of the Diagnostic Acceptability Measure (DAM).

  9. Reduction of the jitter of single-flux-quantum time-to-digital converters for time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    Sano, K.; Muramatsu, Y.; Yamanashi, Y.; Yoshikawa, N.; Zen, N.; Ohkubo, M.

    2014-01-01

    Highlights: • We proposed single-flux-quantum (SFQ) time-to-digital converters (TDCs) for TOF-MS. • SFQ TDCs can measure time intervals between multiple signals with high resolution. • SFQ TDCs can directly convert the time intervals into binary data. • We designed two types of SFQ TDCs to reduce the jitter. • The jitter is reduced to less than 100 ps. - Abstract: We have been developing a high-resolution superconducting time-of-flight mass spectrometry (TOF-MS) system, which utilizes a superconducting strip ion detector (SSID) and a single-flux-quantum (SFQ) time-to-digital converter (TDC). The SFQ TDC can measure time intervals between multiple input signals and directly convert them into binary data. In our previous study, a 24-bit SFQ TDC with a 3 × 24-bit First-In First-Out (FIFO) buffer was designed and implemented using the AIST Nb standard process 2 (STP2), whose time resolution and dynamic range are 100 ps and 1.6 ms, respectively. In this study we reduce the jitter of the TDC by using two different approaches: one uses an on-chip clock generator with an on-chip low-pass filter to reduce noise in the bias current, and the other uses a low-jitter external clock source at room temperature. We confirmed that the jitter is reduced to less than 100 ps in the latter approach.

  10. Reduction of the jitter of single-flux-quantum time-to-digital converters for time-of-flight mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Sano, K., E-mail: sano-kyosuke-cw@ynu.jp [Department Electrical and Computer Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama 240-8501 (Japan); Muramatsu, Y.; Yamanashi, Y.; Yoshikawa, N. [Department Electrical and Computer Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama 240-8501 (Japan); Zen, N.; Ohkubo, M. [Research Institute of Instrumentation Frontier, National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba 305-8568 (Japan)

    2014-09-15

    Highlights: • We proposed single-flux-quantum (SFQ) time-to-digital converters (TDCs) for TOF-MS. • SFQ TDC can measure time intervals between multiple signals with high-resolution. • SFQ TDC can directly convert the time intervals into binary data. • We designed two types of SFQ TDCs to reduce the jitter. • The jitter is reduced to less than 100 ps. - Abstract: We have been developing a high-resolution superconducting time-of-flight mass spectrometry (TOF-MS) system, which utilizes a superconducting strip ion detector (SSID) and a single-flux-quantum (SFQ) time-to-digital converter (TDC). The SFQ TDC can measure time intervals between multiple input signals and directly convert them into binary data. In our previous study, 24-bit SFQ TDC with a 3 × 24-bit First-In First-Out (FIFO) buffer was designed and implemented using the AIST Nb standard process 2 (STP2), whose time resolution and dynamic range are 100 ps and 1.6 ms, respectively. In this study we reduce the jitter of the TDC by using two different approaches: one uses an on-chip clock generator with an on-chip low-pass filter for reducing the noise in the bias current, and the other uses a low-jitter external clock source at room temperature. We confirmed that the jitter is reduced to less than 100 ps in the latter approach.
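    The quoted numbers are self-consistent: a 24-bit counter with a 100 ps LSB spans 2^24 × 100 ps ≈ 1.68 ms (the stated 1.6 ms dynamic range), and quantization alone contributes LSB/sqrt(12) ≈ 29 ps rms, comfortably below the sub-100 ps jitter achieved. A minimal idealized model of such a converter (a sketch only, not the SFQ implementation):

```python
import math
import random

def tdc_quantize(times_s, lsb_s=100e-12, bits=24):
    """Ideal TDC: convert each arrival time into an integer count of LSBs,
    wrapping at 2**bits (the converter's dynamic range)."""
    return [int(t / lsb_s) % (1 << bits) for t in times_s]

def quantization_rms(lsb_s=100e-12, n=100000, seed=7):
    """Empirical check of the LSB/sqrt(12) quantization-jitter rule."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        t = random.uniform(0.0, 1e-6)
        err = t - int(t / lsb_s) * lsb_s - lsb_s / 2.0  # error vs. bin centre
        total += err * err
    return math.sqrt(total / n)  # approaches lsb_s / sqrt(12), i.e. ~28.9 ps
```

Any residual clock jitter adds in quadrature to this quantization floor, which is why the external low-jitter clock source matters for the total timing error.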

  11. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  12. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  13. Gigahertz repetition rate, sub-femtosecond timing jitter optical pulse train directly generated from a mode-locked Yb:KYW laser.

    Science.gov (United States)

    Yang, Heewon; Kim, Hyoji; Shin, Junho; Kim, Chur; Choi, Sun Young; Kim, Guang-Hoon; Rotermund, Fabian; Kim, Jungwon

    2014-01-01

    We show that a 1.13 GHz repetition rate optical pulse train with 0.70 fs high-frequency timing jitter (integration bandwidth of 17.5 kHz-10 MHz, where the measurement instrument-limited noise floor contributes 0.41 fs in the 10 MHz bandwidth) can be directly generated from a free-running, single-mode diode-pumped Yb:KYW laser mode-locked by single-wall carbon nanotube-coated mirrors. To our knowledge, this is the lowest-timing-jitter optical pulse train with gigahertz repetition rate ever measured. If this pulse train were used for direct sampling of 565 MHz signals (the Nyquist frequency of the pulse train), the jitter level demonstrated would correspond to a projected effective number of bits (ENOB) of 17.8, much higher than the thermal noise limit of a 50 Ω load resistance (~14 bits).
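    The ENOB projection follows the standard aperture-jitter limit for sampling a full-scale sine at f_in: SNR = -20*log10(2*pi*f_in*sigma_t), ENOB = (SNR - 1.76)/6.02. A quick check; note this textbook formula gives roughly 18.3 bits for 0.70 fs at 565 MHz, so the quoted 17.8 presumably reflects a slightly different accounting (e.g. of the measurement noise floor).

```python
import math

def jitter_limited_enob(f_in_hz, sigma_t_s):
    """Jitter-limited SNR and effective number of bits for sampling a
    full-scale sine wave at f_in with rms aperture jitter sigma_t."""
    snr_db = -20.0 * math.log10(2.0 * math.pi * f_in_hz * sigma_t_s)
    return (snr_db - 1.76) / 6.02

enob = jitter_limited_enob(565e6, 0.70e-15)  # ~18.3 bits under these assumptions
```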

  14. Publisher Correction

    DEFF Research Database (Denmark)

    Stokholm, Jakob; Blaser, Martin J.; Thorsen, Jonathan

    2018-01-01

    The originally published version of this Article contained an incorrect version of Figure 3 that was introduced following peer review and inadvertently not corrected during the production process. Both versions contain the same set of abundance data, but the incorrect version has the children...

  15. Publisher Correction

    DEFF Research Database (Denmark)

    Flachsbart, Friederike; Dose, Janina; Gentschew, Liljana

    2018-01-01

    The original version of this Article contained an error in the spelling of the author Robert Häsler, which was incorrectly given as Robert Häesler. This has now been corrected in both the PDF and HTML versions of the Article....

  16. Correction to

    DEFF Research Database (Denmark)

    Roehle, Robert; Wieske, Viktoria; Schuetz, Georg M

    2018-01-01

    The original version of this article, published on 19 March 2018, unfortunately contained a mistake. The following correction has therefore been made in the original: The names of the authors Philipp A. Kaufmann, Ronny Ralf Buechel and Bernhard A. Herzog were presented incorrectly....

  17. Corrective Jaw Surgery

    Medline Plus

    Orthognathic surgery is performed to correct the misalignment of jaws ...

  18. A Conflict-Free Low-Jitter Guaranteed-Rate MAC Protocol for Base-Station Communications in Wireless Mesh Networks

    Science.gov (United States)

    Szymanski, T. H.

    A scheduling algorithm and MAC protocol which provides low-jitter guaranteed-rate (GR) communications between base stations (BS) in a Wireless Mesh Network (WMN) is proposed. The protocol can provision long-term multimedia services such as VoIP, IPTV, or Video-on-Demand. The time axis is partitioned into scheduling frames with F time-slots each. A directional antenna scheme is used to provide each directed link with a fixed transmission rate. A protocol such as IntServ is used to provision resources along an end-to-end path of BSs for GR sessions. The guaranteed rates between the BSs are then specified in a doubly stochastic traffic rate matrix, which is recursively decomposed to yield a low-jitter GR frame transmission schedule. In the resulting schedule, the end-to-end delay and jitter are small and bounded, and the cell loss rate due to primary scheduling conflicts is zero. For dual-channel WMNs, the MAC protocol can achieve 100% utilization, as well as near-minimal queueing delays and near-minimal delay jitter. The scheduling time complexity is O(NF log NF), where N is the number of BSs. Extensive simulation results are presented.
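    The decomposition step can be illustrated with a greedy Birkhoff-von Neumann procedure: while the integer rate matrix (whose rows and columns all sum to F) is nonzero, find a permutation inside its support by bipartite matching, subtract it with its largest feasible weight, and schedule that permutation of BS-to-BS transmissions for that many time-slots. This is a simplified stand-in for the recursive low-jitter decomposition described above; the matrix used in the usage note is invented.

```python
def birkhoff_decompose(M):
    """Greedy Birkhoff-von Neumann decomposition of an integer matrix whose
    rows and columns all sum to the frame length F.
    Returns a list of (weight, permutation) pairs; permutation[r] is the
    column served by row r during `weight` time-slots."""
    n = len(M)
    M = [row[:] for row in M]  # work on a copy
    schedule = []

    def find_perm():
        # Kuhn's augmenting-path bipartite matching on the support of M.
        match_col = [-1] * n  # match_col[c] = row currently matched to column c

        def try_row(r, seen):
            for c in range(n):
                if M[r][c] > 0 and c not in seen:
                    seen.add(c)
                    if match_col[c] == -1 or try_row(match_col[c], seen):
                        match_col[c] = r
                        return True
            return False

        for r in range(n):
            if not try_row(r, set()):
                return None
        perm = [0] * n
        for c in range(n):
            perm[match_col[c]] = c
        return perm

    while any(v for row in M for v in row):
        perm = find_perm()
        if perm is None:  # cannot happen when row/col sums are all equal (Birkhoff)
            raise ValueError("matrix is not doubly stochastic")
        w = min(M[r][perm[r]] for r in range(n))
        for r in range(n):
            M[r][perm[r]] -= w
        schedule.append((w, perm))
    return schedule
```

For example, `birkhoff_decompose([[2, 1, 1], [1, 2, 1], [1, 1, 2]])` (F = 4) returns permutations whose weights sum to 4 and which reconstruct the matrix exactly. The greedy version yields a conflict-free schedule; the recursive decomposition in the paper additionally controls how each permutation's slots are spread across the frame, which is what bounds the jitter.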

  19. Concentric needle single fiber electromyography: normative jitter values on voluntary activated Extensor Digitorum Communis Eletromiografia de fibra única com agulha concêntrica: valores normativos do jitter no estudo por contração voluntária do músculo Extensor Digitorum Communis

    Directory of Open Access Journals (Sweden)

    João Aris Kouyoumdjian

    2007-06-01

    Single fiber electromyography (SFEMG) is the most sensitive clinical neurophysiological test for neuromuscular junction disorders, particularly myasthenia gravis. Normal jitter values obtained with an SFEMG electrode have been published, but there are few publications for the concentric needle electrode (CNE). The aim of this study was to discuss the possibilities of analysing jitter in CNE recordings and to obtain normal jitter values for the voluntarily activated Extensor Digitorum Communis using a disposable CNE. Fifty normal subjects were studied, 16 male and 34 female, with a mean age of 37.1±10.3 years (range 19-55). The jitter values of action potential pairs of isolated muscle fibers were expressed as the mean consecutive difference (MCD) over 20 analysed potential pairs. The mean MCD (n=50) obtained was 24.2±2.8 µs (the range of mean values across subjects was 18-31); the upper 95% confidence limit is 29.8 µs. The mean jitter of all potential pairs (n=1000) was 24.07±7.30 µs (range 9-57); a practical upper limit for individual data is set at 46 µs. The mean interpotential interval (MIPI) was 779±177 µs (range of individual mean values 530-1412); there were no potentials with impulse blocking. The present study confirms that the CNE is suitable for jitter analysis, although certain precautions must be taken. Our findings of jitter values with CNE were similar to the few other reports in the literature.
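    The MCD statistic used throughout this record is simply the mean absolute difference between consecutive inter-potential intervals (IPIs). A minimal sketch, with invented IPI values centred near the study's 779 µs mean interval:

```python
def mcd_jitter(ipis_us):
    """Jitter as the Mean Consecutive Difference (MCD) of the
    inter-potential intervals, in the same units as the input."""
    diffs = [abs(b - a) for a, b in zip(ipis_us, ipis_us[1:])]
    return sum(diffs) / len(diffs)

# Invented IPIs (microseconds) around a 779 us mean:
ipis = [779, 802, 771, 790, 765, 784, 776]
jitter = mcd_jitter(ipis)  # ~20.8 us for this sequence
```

In practice, 20 such potential-pair recordings are averaged per subject; values above the study's practical 46 µs individual limit would be flagged as abnormal.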

  20. Plant automation

    International Nuclear Information System (INIS)

    Christensen, L.J.; Sackett, J.I.; Dayal, Y.; Wagner, W.K.

    1989-01-01

    This paper describes work at EBR-II in the development and demonstration of new control equipment and methods and associated schemes for plant prognosis, diagnosis, and automation. The development work has attracted the interest of other national laboratories, universities, and commercial companies. New initiatives include use of new control strategies, expert systems, advanced diagnostics, and operator displays. The unique opportunity offered by EBR-II is as a test bed where a total integrated approach to automatic reactor control can be directly tested under real power plant conditions

  1. AUTOMATION OF CONVEYOR BELT TRANSPORT

    Directory of Open Access Journals (Sweden)

    Nenad Marinović

    1990-12-01

    Belt conveyor transport, although one of the most economical mining transport systems, introduces many problems in maintaining continuity of operation. Every stoppage causes economic losses. Optimal operation requires correct belt tension, correct belt position and velocity, and faultless idler rolls, which together are the input conditions for automation. Detection and localization of faults are essential for safety (to eliminate fire hazards) and for efficient maintenance. Detection and location of idler roll faults remain an open problem that has not yet been solved successfully. (The paper is published in Croatian.)

  2. WIDAFELS flexible automation systems

    International Nuclear Information System (INIS)

    Shende, P.S.; Chander, K.P.; Ramadas, P.

    1990-01-01

    After discussing the various aspects of automation, some typical examples of various levels of automation are given. One of the examples is an automated production line for ceramic fuel pellets. (M.G.B.)

  3. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  4. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book covers methods of building an automation plan and designing automation facilities; automation of chip-producing processes, such as the basics of cutting, NC processing machines, and chip handling; automation units such as drilling, tapping, boring, milling, and slide units; applications of hydraulics, including their characteristics and basic hydraulic circuits; applications of pneumatics; and kinds of automation and their application to processes, assembly, transportation, automatic machines, and factory automation.

  5. Electroweak corrections

    International Nuclear Information System (INIS)

    Beenakker, W.J.P.

    1989-01-01

    The prospect of high-accuracy measurements investigating the weak interactions, expected to take place at the electron-positron storage ring LEP at CERN and the linear collider SLC at SLAC, offers the possibility of studying the weak quantum effects as well. In order to distinguish whether the measured weak quantum effects lie within the margins set by the standard model or bear traces of new physics, one has to go beyond the lowest order and include electroweak radiative corrections (EWRC) in theoretical calculations. These higher-order corrections can also offer the possibility of obtaining information about two particles present in the Glashow-Salam-Weinberg (GSW) model but not discovered up till now: the top quark and the Higgs boson. In ch. 2 the GSW standard model of electroweak interactions is described. In ch. 3 special techniques are described for the determination of integrals that suffer from numerical instabilities caused by large cancelling terms encountered in the calculation of EWRC effects, together with methods necessary to handle the extensive algebra typical of EWRC. In ch. 4 various aspects of EWRC effects are discussed, in particular their dependence on the unknown model parameters, the masses of the top quark and the Higgs boson. The processes discussed are the production of heavy fermions in electron-positron annihilation and the fermionic decay of the Z gauge boson. (H.W.). 106 refs.; 30 figs.; 6 tabs.; schemes

  6. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  7. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology.

    Science.gov (United States)

    Pak, Richard; McLaughlin, Anne Collins; Leidheiser, William; Rovira, Ericka

    2017-04-01

    A leading hypothesis to explain older adults' overdependence on automation is age-related decline in working memory; however, this had not been empirically examined. The purpose of the current experiment was to examine how working memory affected performance with different degrees of automation in older adults. The well-supported expectation is that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails. In contrast, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with automation correctness: when the automation was correct, older adults with higher working memory ability performed better than those with lower ability, but when the automation was incorrect, all older adults performed poorly regardless of working memory ability. Practitioner Summary: The design of automation intended for older adults should focus on making the correctness of the automation apparent to the older user and on helping them recover when it is malfunctioning.

  8. Automation 2017

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2017-01-01

    This book consists of papers presented at Automation 2017, an international conference held in Warsaw from March 15 to 17, 2017. It discusses research findings associated with the concepts behind INDUSTRY 4.0, with a focus on offering a better understanding of and promoting participation in the Fourth Industrial Revolution. Each chapter presents a detailed analysis of a specific technical problem, in most cases followed by a numerical analysis, simulation and description of the results of implementing the solution in a real-world context. The theoretical results, practical solutions and guidelines presented are valuable for both researchers working in the area of engineering sciences and practitioners looking for solutions to industrial problems.

  9. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    Full Text Available The automation of the marketing process seems nowadays to be the only solution to face the major changes brought by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to technical progress, marketing fragmentation and the demand for customized products and services on one side, and the need for constructive dialogue with customers, immediate and flexible responses, and measurable investments and results on the other side, the classical marketing approach has changed and continues to improve substantially.

  10. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    Science.gov (United States)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite covers a distance of about 14 km on the ground and the earth rotates through an angle of 30". Yaw steering is done to compensate for the earth-rotation effects, ensuring a first-level registration between the bands. But this alone does not achieve perfect co-registration, because of attitude fluctuations, satellite movement, terrain topography, PSM steering and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of the Liss-4 MX image of Resourcesat-2 at Level 1A. The algorithm rests on the principles of photogrammetric collinearity equations. The model fits the orbit trajectory and attitude with polynomials and then performs direct geo-referencing with a global DEM, in which every pixel in the middle band is mapped to a position on the surface of the earth for the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform a relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. Accurate calculation of the exterior orientation parameters with
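The low-frequency attitude-interpolation step lends itself to a small sketch: fit a smooth polynomial to sparse star-sensor/gyro samples, then evaluate it at every image line's acquisition time. The sample values, polynomial order and NumPy-based fitting below are illustrative assumptions, not the record's actual implementation:

```python
import numpy as np

def fit_attitude(times, angles, order=3):
    """Fit a low-order polynomial to sparsely sampled attitude angles
    (e.g. roll from star sensors) for evaluation at arbitrary times."""
    return np.poly1d(np.polyfit(times, angles, order))

# Illustrative numbers: 10 attitude samples over the ~2.1 s interval
# separating the extreme Liss-4 bands.
t_samples = np.linspace(0.0, 2.1, 10)
roll_samples = 0.01 * np.sin(2 * np.pi * t_samples / 2.1)  # synthetic roll (deg)

roll_model = fit_attitude(t_samples, roll_samples)

# Evaluate the smooth model at every image line's acquisition time.
t_lines = np.linspace(0.0, 2.1, 1000)
roll_per_line = roll_model(t_lines)
```

A real pipeline would fit all three attitude angles, and, as the record notes, residual jitter above the star-sensor sampling rate still has to be recovered from conjugate points between the bands.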

  11. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    Energy Technology Data Exchange (ETDEWEB)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Ay, Mohammadreza [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Medical Imaging Systems Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Rad, Hamidreza Saligheh [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)

    2014-07-29

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity-inhomogeneity correction framework to reliably delineate bone from soft tissue and air.
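As a rough illustration of the region-based level-set idea underlying such approaches, the sketch below evolves a level-set function on a synthetic two-intensity image using a simplified Chan-Vese update. The curvature regularisation and the clustering-based inhomogeneity correction of the record's actual method are omitted; everything here is an illustrative assumption:

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5):
    """One simplified Chan-Vese update: move the zero level set of phi
    toward the boundary between the two most homogeneous regions."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0      # mean inside
    c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean outside
    # Pixels matching the inside mean are pushed in, and vice versa.
    force = (img - c2) ** 2 - (img - c1) ** 2
    return phi + dt * force

# Synthetic image: a bright disc (radius 15) on a dark background.
y, x = np.mgrid[:64, :64]
img = (((x - 32) ** 2 + (y - 32) ** 2) < 15 ** 2).astype(float)

# Initialise phi as a signed distance to a circle of radius 10.
phi = 10.0 - np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2)
for _ in range(50):
    phi = chan_vese_step(phi, img)

segmented = phi > 0  # the zero level set converges onto the disc
```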

  12. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    International Nuclear Information System (INIS)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi; Ay, Mohammadreza; Rad, Hamidreza Saligheh

    2014-01-01

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity-inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  13. Illumination correction in psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    An approach to automatically correct illumination problems in dermatological images is presented. The illumination function is estimated by combining the thematic map indicating skin (produced by an automated classification scheme) with the dermatological image data. The user is only required t...
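The record's idea of estimating the illumination function from classified skin pixels can be caricatured with a deliberately crude model: fit a plane to the intensities of the skin pixels and divide it out. The plane model, names and synthetic data below are assumptions for illustration only; the actual method is certainly richer:

```python
import numpy as np

def correct_illumination(img, skin_mask):
    """Fit a plane to the intensities of skin pixels (least squares)
    and divide the whole image by the fitted illumination field."""
    ys, xs = np.nonzero(skin_mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, img[ys, xs], rcond=None)
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    illum = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    return img / np.maximum(illum, 1e-6)

# Synthetic example: uniform skin under a left-to-right illumination ramp.
ramp = np.tile(np.linspace(0.5, 1.5, 32), (32, 1))
img = 100.0 * ramp
flat = correct_illumination(img, np.ones_like(img, dtype=bool))
```

After division the ramp is removed and the corrected image is approximately constant.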

  14. Judson_Mansouri_Automated_Chemical_Curation_QSAREnvRes_Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly...

  15. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  16. Asymmetric dual-loop feedback to suppress spurious tones and reduce timing jitter in self-mode-locked quantum-dash lasers emitting at 1.55 μm

    Science.gov (United States)

    Asghar, Haroon; McInerney, John G.

    2017-09-01

    We demonstrate an asymmetric dual-loop feedback scheme to suppress the external-cavity side-modes induced in self-mode-locked quantum-dash lasers subject to conventional single- and dual-loop feedback. In this letter, we achieved optimal suppression of spurious tones by optimizing the second delay time. We observed that asymmetric dual-loop feedback with a large (~8x) disparity in cavity lengths eliminates all external-cavity side-modes and produces flat RF spectra close to the main peak, with low timing jitter compared to single-loop feedback. A significant reduction in RF linewidth and timing jitter was also observed as the second feedback delay time was increased. The experimental results based on this feedback configuration validate the predictions of recently published numerical simulations. This asymmetric dual-loop feedback scheme provides simple, efficient and cost-effective stabilization of side-band-free optoelectronic oscillators based on mode-locked lasers.

  17. Implementing BosonSampling with time-bin encoding: Analysis of loss, mode mismatch, and time jitter

    Science.gov (United States)

    Motes, Keith R.; Dowling, Jonathan P.; Gilchrist, Alexei; Rohde, Peter P.

    2015-11-01

    It was recently shown by Motes, Gilchrist, Dowling, and Rohde [Phys. Rev. Lett. 113, 120501 (2014), 10.1103/PhysRevLett.113.120501] that a time-bin encoded fiber-loop architecture can implement an arbitrary passive linear optics transformation. This was shown for an ideal scheme in which the architecture has no sources of error. In any realistic implementation, however, physical errors are present, which corrupt the output of the transformation. We investigate the dominant sources of error in this architecture—loss and mode mismatch—and consider how they affect the BosonSampling protocol, a key application for passive linear optics. For our loss analysis we consider two major components that contribute to loss—fiber and switches—and calculate how this affects the success probability and fidelity of the device. Interestingly, we find that errors due to loss are not uniform (unique to time-bin encoding), which asymmetrically biases the implemented unitary. Thus loss necessarily limits the class of unitaries that may be implemented, and future implementations must therefore prioritize minimizing loss rates if arbitrary unitaries are to be implemented. Our formalism for mode mismatch is generalized to account for various phenomena that may cause mode mismatch, but we focus on two—errors in fiber-loop lengths and time jitter of the photon source. These results provide a guideline for how well future experimental implementations might perform in light of these error mechanisms.

  18. Sub-fs electron bunch generation with sub-10-fs bunch arrival-time jitter via bunch slicing in a magnetic chicane

    Directory of Open Access Journals (Sweden)

    J. Zhu

    2016-05-01

    Full Text Available The generation of ultrashort electron bunches with ultrasmall bunch arrival-time jitter is of vital importance for laser-plasma wakefield acceleration with external injection. We study the production of 100-MeV electron bunches with sub-femtosecond (fs) bunch durations and bunch arrival-time jitters of less than 10 fs in an S-band photoinjector, using a weak magnetic chicane with a slit collimator. The beam dynamics inside the chicane is simulated using two codes with different self-force models. The first code separates the self-force into a three-dimensional (3D) quasistatic space-charge model and a one-dimensional (1D) coherent synchrotron radiation (CSR) model, while the other starts from first principles with a so-called 3D sub-bunch method. The simulations indicate that the CSR effect dominates the horizontal emittance growth and that the 1D CSR model underestimates the final bunch duration and emittance because of the very large transverse-to-longitudinal aspect ratio of the sub-fs bunch. Notably, the CSR effect is also strongly affected by the vertical bunch size. Due to the coupling between the horizontal and longitudinal phase spaces, the bunch duration at the entrance of the last dipole magnet of the chicane is still significantly longer than that at the exit of the chicane, which considerably mitigates the impact of space-charge and CSR effects on the beam quality. Exploiting this effect, a bunch charge of up to 4.8 pC in a sub-fs bunch could be simulated. In addition, we analytically and numerically investigate the impact of different jitter sources on the bunch arrival-time jitter downstream of the chicane, and define the tolerance budgets assuming realistic values of the stability of the linac for different bunch charges and compression schemes.

  19. Low-timing-jitter, stretched-pulse passively mode-locked fiber laser with tunable repetition rate and high operation stability

    International Nuclear Information System (INIS)

    Liu, Yuanshan; Zhang, Jian-Guo; Chen, Guofu; Zhao, Wei; Bai, Jing

    2010-01-01

    We design a low-timing-jitter, repetition-rate-tunable, stretched-pulse passively mode-locked fiber laser by using a nonlinear amplifying loop mirror (NALM), a semiconductor saturable absorber mirror (SESAM), and a tunable optical delay line in the laser configuration. Low-timing-jitter optical pulses are stably produced when a SESAM and a 0.16 m dispersion compensation fiber are employed in the laser cavity. By inserting a tunable optical delay line between the NALM and the SESAM, variable repetition-rate operation of a self-starting, passively mode-locked fiber laser is successfully demonstrated over a range from 49.65 to 50.47 MHz. The experimental results show that the newly designed fiber laser can maintain mode locking at a pumping power of 160 mW, stably generating periodic optical pulses with widths of less than 170 fs and timing jitter lower than 75 fs in the 1.55 µm wavelength region when the fundamental repetition rate of the laser is continuously tuned between 49.65 and 50.47 MHz. Moreover, this fiber laser offers turn-key operation with high repeatability of its fundamental repetition rate in practice.

  20. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  1. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on an understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation, are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate, because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  2. Evolution of a Benthic Imaging System From a Towed Camera to an Automated Habitat Characterization System

    Science.gov (United States)

    2008-09-01

    automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category. This automated processing is a central element in the development of HabCam as a tool for habitat characterization.

  3. Semianalytical study of the propagation of an ultrastrong femtosecond laser pulse in a plasma with ultrarelativistic electron jitter

    Energy Technology Data Exchange (ETDEWEB)

    Jovanović, Dušan, E-mail: dusan.jovanovic@ipb.ac.rs [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade, Zemun (Serbia); Fedele, Renato, E-mail: renato.fedele@na.infn.it [Dipartimento di Fisica, Università di Napoli “Federico II,” M.S. Angelo, Napoli (Italy); INFN Sezione di Napoli, Complesso Universitario di M.S. Angelo, Napoli (Italy); Belić, Milivoj, E-mail: milivoj.belic@qatar.tamu.edu [Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); De Nicola, Sergio, E-mail: sergio.denicola@spin.cnr.it [SPIN-CNR, Complesso Universitario di M.S. Angelo, Napoli (Italy)

    2015-04-15

    The interaction of a multi-petawatt, pancake-shaped laser pulse with an unmagnetized plasma is studied analytically and numerically in a regime with ultrarelativistic electron jitter velocities, in which the plasma electrons are almost completely expelled from the pulse region. The study is applied to a laser wakefield acceleration scheme with specifications that may be available in the next generation of Ti:Sa lasers and with the use of recently developed pulse compression techniques. A set of novel nonlinear equations is derived using a three-timescale description, with an intermediate timescale associated with the nonlinear phase of the electromagnetic wave and with the spatial bending of its wave front. They describe, on an equal footing, both the strong and the moderate laser intensity regimes, pertinent to the core and to the edges of the pulse. These have fundamentally different dispersive properties, since in the core the electrons are almost completely expelled by a very strong ponderomotive force and the electromagnetic wave packet is embedded in a vacuum channel, thus having (almost) linear properties. Conversely, at the pulse edges the laser amplitude is smaller, and the wave is weakly nonlinear and dispersive. New nonlinear terms in the wave equation, introduced by the nonlinear phase, describe without the violation of imposed scaling laws a smooth transition to a nondispersive electromagnetic wave at very large intensities and a simultaneous saturation of the (initially cubic) nonlocal nonlinearity. The temporal evolution of the laser pulse is studied both analytically and by numerically solving the model equations in a two-dimensional geometry, with the spot diameter presently used in some laser acceleration experiments. The most stable initial pulse length is estimated to be ≳1.5–2 μm. Moderate stretching of the pulse in the direction of propagation is observed, followed by the development of a vacuum channel and of a very large

  4. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  5. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  6. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  7. Fast Automated Decoupling at RHIC

    CERN Document Server

    Beebe-Wang, Joanne

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated decoupling application has been developed at RHIC for coupling correction during routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew-quadrupole families. The program provides options for automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (phase-locked loop), the high-frequency Schottky system, and the tune meter. It also performs tune and skew-quadrupole scans, finds the minimum tune separation, displays the real-time results and interfaces with the RHIC control system. We summarize the capabilities of the decoupling application...
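The scan-and-minimize step can be caricatured in a few lines: probe the two orthogonal skew-quadrupole families over a grid and keep the settings that minimize the measured tune separation. The quadratic tune-split model and every number below are invented for illustration:

```python
import numpy as np

def tune_separation(k1, k2, k1_opt=0.3, k2_opt=-0.2, dq_min=1e-3):
    """Toy stand-in for a tune measurement: the tune split grows with
    the distance from the (unknown) decoupling settings, with a
    residual minimum separation dq_min on the coupling resonance."""
    return np.hypot(dq_min, 0.01 * np.hypot(k1 - k1_opt, k2 - k2_opt))

# Grid scan over both skew-quadrupole family strengths.
k_range = np.linspace(-1.0, 1.0, 81)
grid = np.array([[tune_separation(a, b) for b in k_range] for a in k_range])
i, j = np.unravel_index(grid.argmin(), grid.shape)
best_k1, best_k2 = k_range[i], k_range[j]
```

A production application like the one described would of course iterate against real tune readings (PLL, Schottky or tune meter) rather than scan an analytic model.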

  8. Automated ISS Flight Utilities

    Science.gov (United States)

    Offermann, Jan Tuzlic

    2016-01-01

    EVADES output. As mentioned above, GEnEVADOSE makes extensive use of ROOT version 6, the data analysis framework developed at the European Organization for Nuclear Research (CERN), and the code is written to the C++11 standard (as are the other projects). My second project is the Automated Mission Reference Exposure Utility (AMREU). Unlike GEnEVADOSE, AMREU is a combination of three frameworks written in both Python and C++, also making use of ROOT (and PyROOT). Run as a combination of daily and weekly cron jobs, these macros query the SRAG database system to determine the active ISS missions, and query minute-by-minute radiation dose information from ISS-TEPC (Tissue Equivalent Proportional Counter), one of the radiation detectors onboard the ISS. Using this information, AMREU creates a corrected data set of daily radiation doses, addressing situations where TEPC may be offline or locked up: doses for days with less than 95% live time (the total amount of time the instrument acquires data) are corrected by averaging the past 7 days. As not all errors may be automatically detectable, AMREU also allows for manual corrections, checking an updated plaintext file each time it runs. With the corrected data, AMREU generates cumulative dose plots for each mission, and uses a Python script to generate a flight note file (.docx format) containing these plots, as well as information sections to be filled in and modified by the space weather environment officers with information specific to the week. AMREU is set up to run without requiring any user input, and it automatically archives old flight notes and information files for missions that are no longer active.
My other projects involve cleaning up a large data set from the Charged Particle Directional Spectrometer (CPDS), joining together many different data sets in order to clean up information in SRAG SQL databases, and developing other automated utilities for displaying information on active solar regions, that may be used by the
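The live-time correction rule described for the TEPC data (replace a day with less than 95% live time by the average of the previous seven days) might look roughly like this in outline; the function and variable names are invented, and the real utility works against the SRAG database rather than plain lists:

```python
from statistics import mean

def correct_daily_doses(doses, live_times, threshold=0.95, window=7):
    """Replace the dose on low-live-time days with the average of the
    previous `window` corrected daily doses."""
    corrected = []
    for dose, live in zip(doses, live_times):
        if live >= threshold or not corrected:
            corrected.append(dose)
        else:
            corrected.append(mean(corrected[-window:]))
    return corrected

# The third day (50% live time, implausible spike) is replaced by the
# running average of the preceding good days.
corrected = correct_daily_doses([0.30, 0.32, 9.99], [1.00, 0.99, 0.50])
```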

  9. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  10. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... Orthognathic surgery is performed to correct the misalignment ...

  11. Automated Testing of Event-Driven Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning

    may be tested by selecting an interesting input (i.e. a sequence of events), and deciding if a failure occurs when the selected input is applied to the event-driven application under test. Automated testing promises to reduce the workload for developers by automatically selecting interesting inputs...... and detect failures. However, it is non-trivial to conduct automated testing of event-driven applications because of, for example, infinite input spaces and the absence of specifications of correct application behavior. In this PhD dissertation, we identify a number of specific challenges when conducting...... automated testing of event-driven applications, and we present novel techniques for solving these challenges. First, we present an algorithm for stateless model-checking of event-driven applications with partial-order reduction, and we show how this algorithm may be used to systematically test web...

  12. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  13. Automation bias: empirical results assessing influencing factors.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision-support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers; advice was incorrect in 6 scenarios. They were asked to prescribe for each case, then shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations - switching from a correct to an incorrect prescription. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors, such as trust in the specific CDSS, decision confidence, and task difficulty, influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems for radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second of the four basic processes in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette and a fully automated pipetting station, both providing fast and accurate dispensing of the reagent or diluting of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  15. Detecting vocal fatigue in student singers using acoustic measures of mean fundamental frequency, jitter, shimmer, and harmonics-to-noise ratio

    Science.gov (United States)

    Sisakun, Siphan

    2000-12-01

    The purpose of this study is to explore the ability of four acoustic parameters - mean fundamental frequency, jitter, shimmer, and harmonics-to-noise ratio - to detect vocal fatigue in student singers. The participants are 15 voice students, who perform two distinct tasks: a data collection task and a vocal fatiguing task. The data collection task includes sustaining the vowel /a/, reading a standard passage, and self-rating on a vocal fatigue form. The vocal fatiguing task is vocal practice of musical scores for a total of 45 minutes. The four acoustic parameters are extracted using the software EZVoicePlus. The data analyses are performed to answer eight research questions: the first four relate to correlations between the self-rating scale and each of the four parameters, and the next four relate to differences in the parameters over time, using one-factor repeated-measures analysis of variance (ANOVA). The result yields a proposed acoustic profile of vocal fatigue in student singers, characterized by increased fundamental frequency; slightly decreased jitter; slightly decreased shimmer; and slightly increased harmonics-to-noise ratio. The proposed profile requires further investigation.
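
    A minimal sketch (not the EZVoicePlus implementation) of how three of the four measures are commonly computed from per-cycle pitch periods and peak amplitudes; the harmonics-to-noise ratio needs a harmonic/noise decomposition and is omitted, and the function names are illustrative:

```python
# Local jitter and shimmer: mean absolute cycle-to-cycle difference,
# expressed as a percentage of the mean value.
import statistics

def mean_f0(periods):
    """Mean fundamental frequency in Hz from pitch periods in seconds."""
    return 1.0 / statistics.mean(periods)

def jitter_percent(periods):
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * statistics.mean(diffs) / statistics.mean(periods)

def shimmer_percent(amplitudes):
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * statistics.mean(diffs) / statistics.mean(amplitudes)
```

    In the proposed profile, a fatigued voice would show a higher mean_f0 with slightly lower jitter_percent and shimmer_percent than the rested baseline.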

  16. RCrane: semi-automated RNA model building.

    Science.gov (United States)

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  17. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Automated cloning methods.; TOPICAL

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  19. About development of automation control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Wenger, K. G.; Ivushkin, K. A.; Makarov, V. N.

    2018-05-01

    The shortcomings of approaches to the development of modern control automation systems, and ways to improve them, are given: correct formation of the objects of study and optimization; joint synthesis of control objects and control systems; and an increase in the structural diversity of the elements of control systems. Diagrams of control systems with a purposefully variable structure of their elements are presented. Structures of control algorithms for an object with a purposefully variable structure are given.

  20. Photogrammetric approach to automated checking of DTMs

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2005-01-01

    Geometrically accurate digital terrain models (DTMs) are essential for orthoimage production and many other applications. Collecting reference data or visual inspection are reliable but time consuming and therefore expensive methods for finding errors in DTMs. In this paper, a photogrammetric approach to automated checking and improving of DTMs is evaluated. Corresponding points in two overlapping orthoimages are found by means of area based matching. Provided the image orientation is correct, discovered displacements correspond to DTM errors. Improvements of the method regarding its...
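
    The area-based matching step can be sketched as a normalized cross-correlation search over a small window (patch size, search radius and names are illustrative, not from the paper):

```python
# Locate a reference-orthoimage patch in the second orthoimage by
# maximizing normalized cross-correlation (NCC) over candidate shifts.
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length value lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def patch(img, r, c, size):
    return [img[i][j] for i in range(r, r + size) for j in range(c, c + size)]

def match(ref, tgt, r, c, size, radius):
    """Displacement (dr, dc) of the ref patch at (r, c) within tgt."""
    tpl = patch(ref, r, c, size)
    best, best_score = (0, 0), -2.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            score = ncc(tpl, patch(tgt, r + dr, c + dc, size))
            if score > best_score:
                best_score, best = score, (dr, dc)
    return best
```

    Given correct image orientation, the field of recovered (dr, dc) displacements maps to DTM height errors.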

  1. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Havlůj, F.; Hejzlar, J.; Vočka, R.

    2013-01-01

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces the user's task to defining fuel pin types, enrichments, assembly maps and operational parameters, all through a user-friendly GUI. The second part of the reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit, large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable HTML format. Using this set of tools for reload safety analysis simplifies

  2. FAST AUTOMATED DECOUPLING AT RHIC

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.J.

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated coupling correction application, iDQmini, has been developed for RHIC routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program iDQmini provides options for automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (phase lock loop), the high frequency Schottky system and the tune meter. It also supports tune and skew quadrupole scans, finds the minimum tune separation, displays real-time results and interfaces with the RHIC control system. We summarize the capabilities of the coupling correction application iDQmini and discuss the operational protections incorporated in the program
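
    The minimization idea can be sketched with a toy tune-split model (the quadratic model, parameter names and grid scan are stand-ins, not RHIC optics or the iDQmini algorithm):

```python
# Toy sketch of global decoupling: scan two skew-quadrupole family
# strengths and keep the setting that minimizes the tune separation.
def tune_split(k1, k2, c1=0.02, c2=-0.015, dq_min=0.001):
    # Residual split grows with distance from the compensating settings;
    # dq_min is the fully decoupled tune separation.
    return ((k1 + c1) ** 2 + (k2 + c2) ** 2) ** 0.5 + dq_min

def decouple(measure, k_range, steps=41):
    """Grid-scan both families; return (split, k1, k2) at the minimum."""
    lo, hi = k_range
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min((measure(k1, k2), k1, k2) for k1 in grid for k2 in grid)

split, k1, k2 = decouple(tune_split, (-0.05, 0.05))
```

    In operation, the model evaluation would be replaced by measured tune separations from the PLL or Schottky systems, and the two families stepped iteratively rather than scanned exhaustively.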

  3. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
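
    The core of water reference deconvolution is a pointwise time-domain operation: the metabolite FID is divided by the measured water FID (which carries the instrumental distortion) and multiplied by an ideal lineshape. A hedged sketch with made-up signal names, not the paper's implementation:

```python
# Replace the measured (distorted) water lineshape by an ideal one.
# All inputs are complex time-domain signals of equal length.
import cmath

def water_reference_deconvolve(metab_fid, water_fid, ideal_fid, eps=1e-12):
    out = []
    for s, w, r in zip(metab_fid, water_fid, ideal_fid):
        if abs(w) < eps:        # avoid dividing by a vanished water signal
            out.append(0j)
        else:
            out.append(s * r / w)
    return out
```

    The abstract's caveat shows up here: the division only corrects distortions that the water reference and the metabolite signal actually share, so residual lineshape errors survive.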

  4. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  5. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  6. Order Division Automated System.

    Science.gov (United States)

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  7. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they are not always successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  8. Automation and robotics

    Science.gov (United States)

    Montemerlo, Melvin

    1988-01-01

    The Autonomous Systems focus on the automation of control systems for the Space Station and mission operations. Telerobotics focuses on automation for in-space servicing, assembly, and repair. The Autonomous Systems and Telerobotics each have a planned sequence of integrated demonstrations showing the evolutionary advance of the state-of-the-art. Progress is briefly described for each area of concern.

  9. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  10. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  11. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes and archiving of results is another major advantage of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever increasing workload. This article discusses the various issues involved in the process.

  12. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  13. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)
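
    One ingredient of the pipeline above, interpolating the tracker measurements onto each CT projection's acquisition time so that every projection gets its own rigid transform, can be sketched as follows (the sample format is an assumption, and linear interpolation of rotation angles is only a small-motion approximation):

```python
# Interpolate 6-DOF tracker samples (time, [tx, ty, tz, rx, ry, rz])
# to an arbitrary timestamp; clamp outside the sampled interval.
from bisect import bisect_left

def pose_at(samples, t):
    """samples: list of (time, pose) sorted by time; returns pose at t."""
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i == 0:                      # before the first sample: clamp
        return list(samples[0][1])
    if i == len(samples):           # after the last sample: clamp
        return list(samples[-1][1])
    (t0, p0), (t1, p1) = samples[i - 1], samples[i]
    w = (t - t0) / (t1 - t0)
    return [a + w * (b - a) for a, b in zip(p0, p1)]
```

    The abstract notes that synchronization of motion data and CT projections affected accuracy; in this sketch that corresponds to getting the projection timestamps passed as t right.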

  14. Automated borehole gravity meter system

    International Nuclear Information System (INIS)

    Lautzenhiser, Th.V.; Wirtz, J.D.

    1984-01-01

    An automated borehole gravity meter system for measuring gravity within a wellbore. The gravity meter includes leveling devices for leveling the borehole gravity meter, and displacement devices for applying forces to a gravity sensing device within the gravity meter to bring the gravity sensing device to a predetermined or null position. Electronic sensing and control devices are provided for (i) activating the displacement devices, (ii) sensing the forces applied to the gravity sensing device, (iii) electronically converting the values of the forces into a representation of the gravity at the location in the wellbore, and (iv) outputting such representation. The system further includes electronic control devices with the capability of correcting the representation of gravity for tidal effects, as well as calculating and outputting the formation bulk density and/or porosity

  15. NWS Corrections to Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  16. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... more surgeries depending on the extent of the repair needed. Click here to find out more. Corrective ...

  17. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... Jaw Surgery Download the ebook for further information. Corrective jaw, or orthognathic, surgery is performed by ... your treatment. Correction of Common Dentofacial Deformities. The information provided here is not intended as a substitute ...

  18. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of the systematic review or each of its tasks. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  19. Method and system for correcting an aberration of a beam of charged particles

    International Nuclear Information System (INIS)

    1975-01-01

    A beam of charged particles is deflected in a closed path, such as a square, over a cross-wire grid at a constant velocity by an X-Y deflection system. A small high-frequency jitter is added on both axes of deflection to cause oscillation of the beam at 45° to the X and Y axes. From the time that the leading edge of the oscillating beam passes over the wire until the trailing edge of the beam passes over the wire, an envelope of the oscillations produced by the jitter is obtained. A second envelope is obtained from when the leading edge of the beam exits from being over the wire until the trailing edge of the beam ceases to be over the wire. Thus, a pair of envelopes is produced as the beam passes over each wire of the grid. The pulses exceeding ten percent of the peak voltage in the eight envelopes produced by the beam completing a cycle in its closed path around the grid are counted and compared with those counted during the previous cycle of the beam moving in its closed path over the grid. As the number of pulses decreases, the quality of the focus of the beam increases, so correction signals are applied to the focus coil according to whether the number of pulses is increasing or decreasing
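
    The counting-and-compare logic described in the record can be sketched as follows (the function names, data layout and step rule are assumptions for illustration, not the patent's circuitry):

```python
# Count pulses exceeding 10% of the peak voltage across one cycle's
# envelopes; keep stepping the focus-coil current the same way while
# the count falls, and reverse direction when it rises.
def count_pulses(envelopes, threshold_frac=0.10):
    peak = max(v for env in envelopes for v in env)
    return sum(1 for env in envelopes for v in env if v > threshold_frac * peak)

def focus_step(prev_count, count, prev_step):
    """A sharper beam yields fewer pulses, so a falling count means the
    last correction step helped."""
    return prev_step if count < prev_count else -prev_step
```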

  1. Automation of radioimmunoassay

    International Nuclear Information System (INIS)

    Yamaguchi, Chisato; Yamada, Hideo; Iio, Masahiro

    1974-01-01

    Automation systems under development for measuring Australian antigen by radioimmunoassay were discussed. Samples were processed as follows: blood serum was dispensed by an automated sampler into test tubes and then incubated under controlled time and temperature; the first counting was omitted; labelled antibody was dispensed into the serum after washing; samples were incubated and then centrifuged; radioactivities in the precipitate were counted by an auto-well counter; and measurements were tabulated by an automated typewriter. Not only a well-type counter but also a position counter was studied. (Kanao, N.)

  2. Automated electron microprobe

    International Nuclear Information System (INIS)

    Thompson, K.A.; Walker, L.R.

    1986-01-01

    The Plant Laboratory at the Oak Ridge Y-12 Plant has recently obtained a Cameca MBX electron microprobe with a Tracor Northern TN5500 automation system. This allows full stage and spectrometer automation and digital beam control. The capabilities of the system include qualitative and quantitative elemental microanalysis for all elements above and including boron in atomic number, high- and low-magnification imaging and processing, elemental mapping and enhancement, and particle size, shape, and composition analyses. Very low magnification, quantitative elemental mapping using stage control (which is of particular interest) has been accomplished along with automated size, shape, and composition analysis over a large relative area

  3. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure.The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management

  4. Managing laboratory automation.

    Science.gov (United States)

    Saboe, T J

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed.

  5. Automated PCB Inspection System

    Directory of Open Access Journals (Sweden)

    Syed Usama BUKHARI

    2017-05-01

    Full Text Available Development of an automated PCB inspection system as per the needs of industry is a challenging task. In this paper a case study is presented to exhibit a proposed system for the migration from a manual PCB inspection system to an automated PCB inspection system, with minimal intervention in the existing production flow, for a leading automotive manufacturing company. A detailed design of the system based on computer vision, followed by testing and analysis, was proposed in order to aid the manufacturer in the process of automation.

  6. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  7. Hydrometeorological Automated Data System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Office of Hydrologic Development of the National Weather Service operates HADS, the Hydrometeorological Automated Data System. This data set contains the last 48...

  8. Automated External Defibrillator

    Science.gov (United States)

    ... leads to a 10 percent reduction in survival. Training To Use an Automated External Defibrillator Learning how to use an AED and taking a CPR (cardiopulmonary resuscitation) course are helpful. However, if trained ...

  9. Planning for Office Automation.

    Science.gov (United States)

    Mick, Colin K.

    1983-01-01

    Outlines a practical approach to planning for office automation termed the "Focused Process Approach" (the "what" phase, "how" phase, "doing" phase) which is a synthesis of the problem-solving and participatory planning approaches. Thirteen references are provided. (EJS)

  10. Fixed automated spray technology.

    Science.gov (United States)

    2011-04-19

    This research project evaluated the construction and performance of Boschung's Fixed Automated Spray Technology (FAST) system. The FAST system automatically sprays de-icing material on the bridge when icing conditions are about to occur. The FA...

  11. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicles Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  12. Automation Interface Design Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Our research makes its contributions at two levels. At one level, we addressed the problems of interaction between humans and computers/automation in a particular...

  13. I-94 Automation FAQs

    Data.gov (United States)

    Department of Homeland Security — In order to increase efficiency, reduce operating costs and streamline the admissions process, U.S. Customs and Border Protection has automated Form I-94 at air and...

  14. Automation synthesis modules review

    International Nuclear Information System (INIS)

    Boschi, S.; Lodi, F.; Malizia, C.; Cicoria, G.; Marengo, M.

    2013-01-01

    The introduction of 68Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived 68Ge/68Ga generator has been at the basis of the development of 68Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues and careful radioprotection of the operators have pushed for extensive automation of the production process. The development of automated systems for 68Ga radiochemistry, different engineering and software strategies and post-processing of the eluate are discussed, along with the impact of regulations on automation. - Highlights: ► Generator availability and robust chemistry drove the wide diffusion of 68Ga radiopharmaceuticals. ► Different technological approaches for 68Ga radiopharmaceuticals will be discussed. ► Post-processing of the generator eluate and the evolution to cassette-based systems were the major issues in automation. ► The impact of regulations on the technological development will also be considered

  15. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  16. Highway Electrification And Automation

    OpenAIRE

    Shladover, Steven E.

    1992-01-01

    This report addresses how the California Department of Transportation and the California PATH Program have made efforts to evaluate the feasibility and applicability of highway electrification and automation technologies. In addition to describing how the work was conducted, the report also describes the findings on highway electrification and highway automation, with experimental results, design study results, and a region-wide application impacts study for Los Angeles.

  17. Automated lattice data generation

    Directory of Open Access Journals (Sweden)

    Ayyar Venkitesh

    2018-01-01

    Full Text Available The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done “by hand”. In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.

  18. Automated lattice data generation

    Science.gov (United States)

    Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.

    2018-03-01

    The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
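The core idea these two records describe — run each measurement task only after the generation tasks it depends on, without manual babysitting — can be caricatured in a few lines. This is a toy sketch of a dependency-ordered task runner, not Taxi's actual API; all names below are invented for illustration.

```python
# Toy dependency-ordered task runner in the spirit of a minimal workflow
# manager. NOT Taxi's real interface; names here are hypothetical.

def run_workflow(tasks, deps):
    """tasks: {name: callable}; deps: {name: [prerequisite names]}.
    Runs each task exactly once, after all of its prerequisites."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for d in deps.get(name, []):  # recurse into prerequisites first
            run(d)
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "generate_config": lambda: log.append("gauge config generated"),
    "measure_plaquette": lambda: log.append("plaquette measured"),
}
deps = {"measure_plaquette": ["generate_config"]}
order = run_workflow(tasks, deps)
print(order)
```

Even if the measurement task is listed first, the dependency walk guarantees the configuration is generated before anything is measured on it.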

  19. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  20. Marketing automation supporting sales

    OpenAIRE

    Sandell, Niko

    2016-01-01

    The past couple of decades have been a time of major changes in marketing. Digitalization has become a permanent part of marketing and at the same time enabled efficient collection of data. Personalization and customization of content are playing a crucial role in marketing when new customers are acquired. This has also created a need for automation to facilitate the distribution of targeted content. As a result of successful marketing automation, more information about the customers is gathered ...

  1. Instant Sikuli test automation

    CERN Document Server

    Lau, Ben

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A concise guide written in an easy-to-follow style using the Starter guide approach. This book is aimed at automation and testing professionals who want to use Sikuli to automate GUI testing. Some Python programming experience is assumed.

  2. Managing laboratory automation

    OpenAIRE

    Saboe, Thomas J.

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Fina...

  3. Shielded cells transfer automation

    International Nuclear Information System (INIS)

    Fisher, J.J.

    1984-01-01

    Nuclear waste from shielded cells is removed, packaged, and transferred manually in many nuclear facilities. Radiation exposure is absorbed by operators during these operations and limited only through procedural controls. Technological advances in automation using robotics have allowed a production waste removal operation to be automated to reduce radiation exposure. The robotic system bags waste containers out of a glove box and transfers them to a shielded container. Operators control the system from outside the system work area via television cameras. 9 figures

  4. Automated Status Notification System

    Science.gov (United States)

    2005-01-01

    NASA Lewis Research Center's Automated Status Notification System (ASNS) was born out of need. To prevent "hacker attacks," Lewis' telephone system needed to monitor communications activities 24 hr a day, 7 days a week. With decreasing staff resources, this continuous monitoring had to be automated. By utilizing existing communications hardware, a UNIX workstation, and NAWK (a pattern scanning and processing language), we implemented a continuous monitoring system.
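The monitoring idea in this record — continuously scan activity records for suspicious patterns and raise alerts — was originally implemented with NAWK; a rough Python equivalent of that pattern-scan-and-alert logic looks like the following. The patterns and log format are invented for illustration, not taken from ASNS.

```python
import re

# Hypothetical pattern-scan monitor in the spirit of the NAWK-based ASNS:
# flag phone-system log lines matching suspicious patterns.
SUSPICIOUS = [
    re.compile(r"LOGIN FAIL"),        # failed maintenance-port logins
    re.compile(r"OUTBOUND \d{11,}"),  # unusually long dialed number
]

def scan(lines):
    """Return (line number, line) for every line matching any pattern."""
    alerts = []
    for n, line in enumerate(lines, 1):
        if any(p.search(line) for p in SUSPICIOUS):
            alerts.append((n, line))
    return alerts

log = [
    "OUTBOUND 5551234",
    "LOGIN FAIL user=maint",
    "OUTBOUND 001234567890123",
]
alerts = scan(log)
print(alerts)
```

In a continuous-monitoring deployment the same `scan` logic would be applied to a tailed log stream rather than a fixed list.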

  5. Automatic Contextual Text Correction Using The Linguistic Habits Graph Lhg

    Directory of Open Access Journals (Sweden)

    Marcin Gadamer

    2009-01-01

    Full Text Available Automatic text correction is an essential problem of today's text processors and editors. This paper introduces a novel algorithm for automation of contextual text correction using a Linguistic Habit Graph (LHG), also introduced in this paper. A specialist internet crawler has been constructed for searching through web sites in order to build a Linguistic Habit Graph from text corpora gathered from Polish web sites. The correction results achieved with this algorithm using the LHG were compared with commercial programs which also enable text correction: Microsoft Word 2007, Open Office Writer 3.0 and the search engine Google. The achieved results of text correction were much better than corrections made by these commercial tools.
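The correction principle the abstract describes — prefer the candidate word whose neighbors are most habitual in the graph — can be caricatured with a bigram-count graph. The corpus and candidates below are toy English data; the real LHG is built from a crawled Polish web corpus and is far richer.

```python
from collections import defaultdict

# Toy "linguistic habit graph": edge weights are bigram counts from a
# (tiny, invented) corpus. Real LHG construction involves web crawling
# and corpus cleanup.
bigram = defaultdict(int)
corpus = "the cat sat on the mat the cat ate the fish".split()
for a, b in zip(corpus, corpus[1:]):
    bigram[(a, b)] += 1

def best_candidate(prev_word, next_word, candidates):
    """Pick the correction candidate best supported by its context."""
    def score(w):
        return bigram[(prev_word, w)] + bigram[(w, next_word)]
    return max(candidates, key=score)

# Correcting the misspelled token in "the cst sat":
fixed = best_candidate("the", "sat", ["cast", "cat", "cot"])
print(fixed)
```

Here "cat" wins because both edges ("the" → "cat" and "cat" → "sat") exist in the graph, while the other candidates have no contextual support.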

  6. RCrane: semi-automated RNA model building

    International Nuclear Information System (INIS)

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RCrane is a new tool for the partially automated building of RNA crystallographic models into electron-density maps of low or intermediate resolution. This tool helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems

  7. Automated Groundwater Screening

    International Nuclear Information System (INIS)

    Taylor, Glenn A.; Collard, Leonard B.

    2005-01-01

    The Automated Intruder Analysis has been extended to include an Automated Ground Water Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening, a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor, thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application
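The extension for nuclides with significant daughters amounts to folding daughter contributions into the parent's screening factor before comparing against the limit. A minimal sketch of that idea follows; the dose factors, branching fractions, and limit below are invented placeholders, not NCRP values.

```python
# Hedged sketch: extend a parent's screening factor with contributions
# from its significant daughters. All numbers are illustrative only.
def screening_factor(parent_df, daughters):
    """parent_df: parent dose factor;
    daughters: list of (branching_fraction, daughter_dose_factor)."""
    return parent_df + sum(b * df for b, df in daughters)

limit = 1.0
# Parent alone (0.6) would pass the limit; including its daughter it fails,
# which is why ignoring daughters understates the screening factor.
sf = screening_factor(0.6, [(1.0, 0.7)])
print(sf, sf > limit)
```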

  8. Automated MAD and MIR structure solution

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    A fully automated procedure for solving MIR and MAD structures has been developed using a scoring scheme to convert the structure-solution process into an optimization problem. Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations
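The key move described here — converting structure solution into an optimization problem by scoring trial heavy-atom solutions on several quality criteria and keeping the best — has a simple generic shape. The criteria names and weights below are invented for illustration; SOLVE's actual scoring functions are more elaborate.

```python
# Hedged sketch of ranking trial solutions by a weighted multi-criteria
# score, as in automated heavy-atom solution evaluation. Criteria and
# weights are hypothetical.
WEIGHTS = {"map_contrast": 0.5, "figure_of_merit": 0.3, "patterson_fit": 0.2}

def score(solution):
    return sum(WEIGHTS[k] * solution[k] for k in WEIGHTS)

trials = [
    {"name": "A", "map_contrast": 0.4, "figure_of_merit": 0.5, "patterson_fit": 0.6},
    {"name": "B", "map_contrast": 0.7, "figure_of_merit": 0.6, "patterson_fit": 0.5},
]
best = max(trials, key=score)
print(best["name"])
```

Once trial solutions are reduced to a single comparable number, the subjective "is this heavy-atom solution good?" judgment becomes a search over candidates, which is what makes full automation possible.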

  9. Corrections to primordial nucleosynthesis

    International Nuclear Information System (INIS)

    Dicus, D.A.; Kolb, E.W.; Gleeson, A.M.; Sudarshan, E.C.G.; Teplitz, V.L.; Turner, M.S.

    1982-01-01

    The changes in primordial nucleosynthesis resulting from small corrections to rates for weak processes that connect neutrons and protons are discussed. The weak rates are corrected by improved treatment of Coulomb and radiative corrections, and by inclusion of plasma effects. The calculations lead to a systematic decrease in the predicted 4He abundance of about ΔY = 0.0025. The relative changes in other primordial abundances are also 1 to 2%

  10. A precise technique for manufacturing correction coil

    International Nuclear Information System (INIS)

    Schieber, L.

    1992-01-01

    An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC

  11. Automated system for review of radiotherapy treatment sheets

    International Nuclear Information System (INIS)

    Collado Chamorro, P.; Sanz Freire, C. J.; Vazquez Galinanes, A.; Diaz Pascual, V.; Gomez amez, J.; Martinez Sanchez, S.; Ossola Lentati, G. A.

    2011-01-01

    Many modern radiotherapy departments are beginning to implement treatment sheets in electronic form. Our department has developed an automated review system that checks the following parameters: correct completion of the treatment, number of sessions, and cumulative dose administered. Treatments delivered in a unit other than the one allocated, and overwriting of table parameters, are likewise verified.

  12. Publisher Correction: Predicting unpredictability

    Science.gov (United States)

    Davis, Steven J.

    2018-06-01

    In this News & Views article originally published, the wrong graph was used for panel b of Fig. 1, and the numbers on the y axes of panels a and c were incorrect; the original and corrected Fig. 1 is shown below. This has now been corrected in all versions of the News & Views.

  13. Automation of Taxiing

    Directory of Open Access Journals (Sweden)

    Jaroslav Bursík

    2017-01-01

    Full Text Available The article focuses on the possibility of automation of taxiing, which is the part of a flight, which, under adverse weather conditions, greatly reduces the operational usability of an airport, and is the only part of a flight that has not been affected by automation, yet. Taxiing is currently handled manually by the pilot, who controls the airplane based on information from visual perception. The article primarily deals with possible ways of obtaining navigational information, and its automatic transfer to the controls. Analyzed and assessed were currently available technologies such as computer vision, Light Detection and Ranging and Global Navigation Satellite System, which are useful for navigation, and their general implementation into an airplane was designed. Obstacles to the implementation were identified, too. The result is a proposed combination of systems along with their installation into airplane’s systems so that it is possible to use the automated taxiing.

  14. A computational framework for automation of point defect calculations

    International Nuclear Information System (INIS)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    2017-01-01

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
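Of the three correction schemes the record lists, the image-charge term has a well-known leading-order closed form (the Makov–Payne monopole term), ΔE = q²α_M / (2εL) for a charged defect in a periodic supercell. The sketch below evaluates that formula with illustrative numbers; it is not this package's actual implementation, and the example values are invented.

```python
# Leading-order Makov-Payne image-charge correction for a charged defect
# in a periodic supercell: dE = q^2 * alpha_M / (2 * eps * L), in hartree
# when q is in units of e and L in bohr. Example values are illustrative.
HARTREE_TO_EV = 27.211386

def image_charge_correction(q, alpha_M, eps, L_bohr):
    """q: defect charge (e); alpha_M: Madelung constant of the cell shape;
    eps: static dielectric constant; L_bohr: cell length (bohr).
    Returns the correction in eV."""
    return (q**2 * alpha_M / (2.0 * eps * L_bohr)) * HARTREE_TO_EV

# e.g. a q = +2 defect, simple cubic cell (alpha_M ~ 2.8373),
# eps = 10, L = 20 bohr:
val = image_charge_correction(2, 2.8373, 10.0, 20.0)
print(round(val, 3))
```

Note the 1/L scaling: doubling the supercell edge halves this correction, which is why finite-size effects are significant for charged defects in affordable cell sizes.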

  15. Control and automation systems

    International Nuclear Information System (INIS)

    Schmidt, R.; Zillich, H.

    1986-01-01

    A survey is given of the development of control and automation systems for energy uses. General remarks about control and automation schemes are followed by a description of modern process control systems along with process control processes as such. After discussing the particular process control requirements of nuclear power plants the paper deals with the reliability and availability of process control systems and refers to computerized simulation processes. The subsequent paragraphs are dedicated to descriptions of the operating floor, ergonomic conditions, existing systems, flue gas desulfurization systems, the electromagnetic influences on digital circuits as well as of light wave uses. (HAG) [de

  16. Automated nuclear materials accounting

    International Nuclear Information System (INIS)

    Pacak, P.; Moravec, J.

    1982-01-01

    An automated state system of accounting for nuclear materials data was established in Czechoslovakia in 1979. A file was compiled of 12 programs in the PL/1 language. The file is divided into four groups according to logical associations, namely programs for data input and checking, programs for handling the basic data file, programs for report outputs in the form of worksheets and magnetic tape records, and programs for book inventory listing, document inventory handling and materials balance listing. A similar automated system of nuclear fuel inventory for a light water reactor was introduced for internal purposes in the Institute of Nuclear Research (UJV). (H.S.)

  17. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  18. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user-input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP: effective isotropic radiated power). These calibrations were then used to create
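The tabular calibration the record describes reduces to two steps: sweep the control values while recording the measured response, then invert the table by nearest match to find the controls that produce a desired RF behavior. The sketch below stands in a fake imperfect-device model for the DAC + VNA hardware; the gain imbalance and phase offset are invented numbers.

```python
import math

# Sketch of the calibration-table idea: sweep (I, Q) controls, record the
# measured (gain, phase), then invert by nearest neighbor. The "device"
# model below is hypothetical, standing in for real DAC + VNA measurements.
def device_response(i, q):
    # Imperfect modulator: gain imbalance plus a fixed phase offset.
    z = complex(1.05 * i, 0.95 * q) * complex(math.cos(0.1), math.sin(0.1))
    return abs(z), math.atan2(z.imag, z.real)  # gain, phase (rad)

# Build the calibration table: control (i, q) -> measured (gain, phase).
table = {}
steps = [k / 10.0 for k in range(-10, 11)]
for i in steps:
    for q in steps:
        table[(i, q)] = device_response(i, q)

def controls_for(target_gain, target_phase):
    """Invert the table: controls whose measured response is closest."""
    def err(item):
        gain, phase = item[1]
        return (gain - target_gain) ** 2 + (phase - target_phase) ** 2
    return min(table.items(), key=err)[0]

# Ask for unit gain at 45 degrees:
print(controls_for(1.0, math.pi / 4))
```

For the phase-shifter use case mentioned in the record, the sweep would cover only controls near the unit circle rather than the full I/Q plane, shrinking the table substantially.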

  19. Altering user' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation ('tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. 'tipping point') is constant or can be manipulated. The results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  20. LIBRARY AUTOMATION IN NIGERAN UNIVERSITIES

    African Journals Online (AJOL)

    facilitate services and access to information in libraries is widely acceptable. ... Moreover, Ugah (2001) reports that the automation process at the Abubakar ... blueprint in 1987 and a turn-key system of automation was suggested for the library.

  1. Future Trends in Process Automation

    OpenAIRE

    Jämsä-Jounela, Sirkka-Liisa

    2007-01-01

    The importance of automation in the process industries has increased dramatically in recent years. In the highly industrialized countries, process automation serves to enhance product quality, master the whole range of products, improve process safety and plant availability, efficiently utilize resources and lower emissions. In the rapidly developing countries, mass production is the main motivation for applying process automation. The greatest demand for process automation is in the chemical...

  2. Adaptive Automation Design and Implementation

    Science.gov (United States)

    2015-09-17

    with an automated system to a real-world adaptive automation system implementation. There have been plenty of adaptive automation ... of systems without increasing manpower requirements by allocating routine tasks to automated aids, improving safety through the use of automated ... between intermediate levels of automation, explicitly defining which human task a given level automates. Each model aids the creation and classification

  3. Correction of Neonatal Hypovolemia

    Directory of Open Access Journals (Sweden)

    V. V. Moskalev

    2007-01-01

    Full Text Available Objective: to evaluate the efficiency of hydroxyethyl starch solution (6% refortane, Berlin-Chemie versus fresh frozen plasma used to correct neonatal hypovolemia.Materials and methods. In 12 neonatal infants with hypocoagulation, hypovolemia was corrected with fresh frozen plasma (10 ml/kg body weight. In 13 neonates, it was corrected with 6% refortane infusion in a dose of 10 ml/kg. Doppler echocardiography was used to study central hemodynamic parameters and Doppler study was employed to examine regional blood flow in the anterior cerebral and renal arteries.Results. Infusion of 6% refortane and fresh frozen plasma at a rate of 10 ml/hour during an hour was found to normalize the parameters of central hemodynamics and regional blood flow.Conclusion. Comparative analysis of the findings suggests that 6% refortane is the drug of choice in correcting neonatal hypovolemia. Fresh frozen plasma should be infused in hemostatic disorders.

  4. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... surgery. It is important to understand that your treatment, which will probably include orthodontics before and after ... to realistically estimate the time required for your treatment. Correction of Common Dentofacial Deformities ​ ​ The information provided ...

  5. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... misalignment of jaws and teeth. Surgery can improve chewing, speaking and breathing. While the patient's appearance may ... indicate the need for corrective jaw surgery: Difficulty chewing, or biting food Difficulty swallowing Chronic jaw or ...

  6. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... It can also invite bacteria that lead to gum disease. ... Corrective Jaw ...

  7. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... is performed by an oral and maxillofacial surgeon (OMS) to correct a wide range of minor and ... when sleeping, including snoring) Your dentist, orthodontist and OMS will work together to determine whether you are ...

  8. Automated HAZOP revisited

    DEFF Research Database (Denmark)

    Taylor, J. R.

    2017-01-01

    Hazard and operability analysis (HAZOP) has developed from a tentative approach to hazard identification for process plants in the early 1970s to an almost universally accepted approach today, and a central technique of safety engineering. Techniques for automated HAZOP analysis were developed...

  9. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  10. Automated Vehicle Monitoring System

    OpenAIRE

    Wibowo, Agustinus Deddy Arief; Heriansyah, Rudi

    2014-01-01

    An automated vehicle monitoring system is proposed in this paper. The surveillance system is based on image processing techniques such as background subtraction, colour balancing, chain code based shape detection, and blob. The proposed system will detect any human's head as appeared at the side mirrors. The detected head will be tracked and recorded for further action.

  11. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  12. Automated Accounting. Instructor Guide.

    Science.gov (United States)

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  13. Automated conflict resolution issues

    Science.gov (United States)

    Wike, Jeffrey S.

    1991-01-01

    A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.

  14. Automated gamma counters

    International Nuclear Information System (INIS)

    Regener, M.

    1977-01-01

    This is a report on the most recent developments in the full automation of gamma counting in RIA, in particular by Messrs. Kontron. The development targets were flexibility in sample capacity and shape of test tubes, the possibility of using different radioisotopes for labelling due to an optimisation of the detector system and the use of microprocessors to substitute software for hardware. (ORU) [de

  15. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2014-12-01

    Full Text Available Myths in the automation of software testing are a recurring topic of discussion in the software validation industry. The first thought that occurs to a knowledgeable reader is probably: why this old topic again? What is new to discuss? Yet everyone agrees that automated testing today is not what it was ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications today has a complex architecture and hybrid frameworks that facilitate testing applications developed on various platforms and technologies. Automation has undoubtedly advanced, but so have the myths associated with it. The change in people's perspective on and knowledge of automation has altered the terrain. This article reflects the author's views and experience on how the original myths have been transformed into new versions and how those versions are derived; it also offers his thoughts on the new generation of myths.

  17. Automation of activation analysis

    International Nuclear Information System (INIS)

    Ivanov, I.N.; Ivanets, V.N.; Filippov, V.V.

    1985-01-01

    The basic data on the methods and equipment of activation analysis are presented. Recommendations are given on the selection of activation analysis techniques, especially techniques employing short-lived isotopes. The possibilities for increasing data throughput by using modern computers to automate the analysis and the data-processing procedure are shown

  18. Protokoller til Home Automation

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    a computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the status of the home can be viewed from a computer or perhaps even from a mobile phone. While the applications mentioned are classics of home automation, additional functionality has appeared...

  19. Automation of radioimmunoassays

    International Nuclear Information System (INIS)

    Goldie, D.J.; West, P.M.; Ismail, A.A.A.

    1979-01-01

    A short account is given of recent developments in automation of the RIA technique. Difficulties encountered in the incubation, separation and quantitation steps are summarized. Published references are given to a number of systems, both discrete and continuous flow, and details are given of a system developed by the present authors. (U.K.)

  20. Microcontroller for automation application

    Science.gov (United States)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  1. Driver Psychology during Automated Platooning

    NARCIS (Netherlands)

    Heikoop, D.D.

    2017-01-01

    With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their

  2. Automated Inadvertent Intruder Application

    International Nuclear Information System (INIS)

    Koffman, Larry D.; Lee, Patricia L.; Cook, James R.; Wilhite, Elmer L.

    2008-01-01

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell, and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and...
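
    The "straightforward algebraic calculation" of pathway dose reduces to inventory-times-dose-conversion-factor products summed over radionuclides. A minimal sketch; all inventories and dose conversion factors below are made-up placeholders, not SRS data:

    ```python
    def scenario_dose(inventory_ci, dcf_mrem_per_ci):
        """Total scenario dose: sum over radionuclides of inventory * DCF."""
        return sum(inventory_ci[n] * dcf_mrem_per_ci[n] for n in inventory_ci)

    # Illustrative placeholder values only.
    inventory = {"Cs-137": 2.0, "Sr-90": 0.5}   # curies
    dcf_agric = {"Cs-137": 1.2, "Sr-90": 3.4}   # mrem per curie
    print(scenario_dose(inventory, dcf_agric))  # 2.0*1.2 + 0.5*3.4 = 4.1
    ```

    An application like the one described would repeat this sum per scenario and pathway from a controlled parameter source, which is exactly what makes it easy to verify against the legacy spreadsheet.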

  3. ICT: isotope correction toolbox.

    Science.gov (United States)

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique for analyzing and studying the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes, especially where mass spectrometric methods make use of derivatization. Correcting for these additive interferences, in particular for complex isotopic systems, is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present isotope correction toolbox (ICT), a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. ICT is written in the multi-platform programming language Perl and can therefore be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from https://github.com/jungreuc/isotope_correction_toolbox/ Supplementary data are available at Bioinformatics online.
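
    For the simple (non-tandem) case, the natural-abundance interference described above is a linear system: the measured mass isotopomer distribution is the true one smeared by a lower-triangular correction matrix, here built from an assumed binomial 13C model. This sketch only illustrates the idea; ICT itself is written in Perl and handles the much harder tandem MS case:

    ```python
    import numpy as np
    from math import comb

    def correction_matrix(n_atoms, p13=0.0107):
        """M[i, j]: probability that a molecule with j labeled carbons is
        observed at mass shift i, given natural 13C abundance p13 in the
        n_atoms - j unlabeled positions (binomial model)."""
        M = np.zeros((n_atoms + 1, n_atoms + 1))
        for j in range(n_atoms + 1):
            for k in range(n_atoms - j + 1):
                M[j + k, j] = (comb(n_atoms - j, k)
                               * p13**k * (1 - p13)**(n_atoms - j - k))
        return M

    def correct(measured, n_atoms):
        """Undo the natural-abundance smearing by solving M @ true = measured."""
        M = correction_matrix(n_atoms)
        true, *_ = np.linalg.lstsq(M, np.asarray(measured, float), rcond=None)
        return true
    ```

    Because the matrix is lower-triangular with non-zero diagonal, the system is well-posed and a least-squares solve recovers the underlying labeling pattern exactly in the noise-free case.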

  4. Automating spectral measurements

    Science.gov (United States)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.

  5. Automation of Tabular Application Formation

    Directory of Open Access Journals (Sweden)

    S. V. Zykin

    2013-01-01

    Full Text Available The paper considers automation of forming the interface between a table and a relational database. The task is formalized, and existing approaches to forming data representations are surveyed using widely used CASE tools as an example. An intermediate data representation, the "join table", is defined; it is used to guarantee the correctness of the constructed data representation and is necessary for the direct and inverse data transformations. On the basis of the lossless-join property and the realized dependencies, the notion of the context of an application, together with its restrictions, is introduced. This material is then used to construct the inverse data transformation from the tabular representation into the relational one. On the basis of the properties of relationships in a database schema, a partial order on the relations is established and the restriction to acyclic database schemas is introduced. These results are used in the analysis of the principles of forming the inverse data transformation, and the main details of such a transformation algorithm are considered.
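
    A "join table" in the sense above can be illustrated with a natural join over shared attributes. A toy sketch, assuming relations are non-empty lists of dicts with uniform keys (not the paper's formalism):

    ```python
    def natural_join(r, s):
        """Natural join of two relations given as non-empty lists of dicts."""
        shared = set(r[0]) & set(s[0])
        return [{**t, **u}
                for t in r for u in s
                if all(t[a] == u[a] for a in shared)]

    def project(rel, attrs):
        """Projection with duplicate elimination, used to invert the join."""
        seen, out = set(), []
        for t in rel:
            key = tuple(t[a] for a in attrs)
            if key not in seen:
                seen.add(key)
                out.append({a: t[a] for a in attrs})
        return out
    ```

    The lossless-join property the paper relies on says that projecting the join table back onto the original schemas recovers the source relations, which is what makes the inverse transformation well-defined.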

  6. Geological Corrections in Gravimetry

    Science.gov (United States)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and CRUST 1.0 models in the years 2000 and 2013, respectively. Especially the later model provides quite a new view of the relevant geometries, of the topographic and crustal densities, and of the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a-priori geometry and density grids, which can be of either rectangular or spherical/ellipsoidal type with cells shaped as rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information out to an optional distance from the calculation point, up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.
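
    The simplest geological correction, the gravitational effect of an infinite horizontal slab (the Bouguer correction), illustrates the kind of quantity such software computes; the prism and tesseroid machinery of an actual package is far more involved:

    ```python
    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def bouguer_slab_mgal(density_kg_m3, thickness_m):
        """Gravitational effect of an infinite horizontal slab: 2*pi*G*rho*h,
        converted from m/s^2 to mGal (1 mGal = 1e-5 m/s^2)."""
        return 2.0 * math.pi * G * density_kg_m3 * thickness_m * 1e5
    ```

    A 100 m slab of standard crustal density 2670 kg/m3 gives about 11.2 mGal; stripping a known sedimentary layer amounts to subtracting such effects computed with the layer's actual geometry and density.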

  7. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel, fully automated method for retrospective correction of intensity inhomogeneity, an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides the most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts, even when these are highly dynamic
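
    The basic retrospective idea, estimating a smooth multiplicative field and dividing it out in the log domain, can be sketched as follows. This crude box-filter version is only an illustration of the general approach, not the authors' four-step force-based method:

    ```python
    import numpy as np

    def box_blur(img, k=9):
        """Crude k x k box-filter smoothing with edge padding."""
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def correct_inhomogeneity(img, k=9, eps=1e-6):
        """Estimate a smooth bias field in the log domain and remove it,
        preserving the image's mean log-intensity."""
        logi = np.log(img + eps)
        field = box_blur(logi, k)
        return np.exp(logi - field + field.mean())
    ```

    The smoothing-kernel size plays the same role as the regularization kernel in the abstract: a larger kernel allows only slowly varying fields to be removed.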

  8. Human-centred automation programme: review of experiment related studies

    International Nuclear Information System (INIS)

    Grimstad, Tone; Andresen, Gisle; Skjerve, Ann Britt Miberg

    2000-04-01

    Twenty-three empirical studies concerning automation and performance have been reviewed. The purposes of the review are to support experimental studies in the Human-Centred Automation (HCA) programme and to develop a general theory on HCA. Each study was reviewed with regard to twelve study characteristics: domain, type of study, purpose, definition of automation, variables, theoretical basis, models of operator performance, methods applied, experimental design, outcome, stated scope of results, strengths and limitations. Seven of the studies involved domain experts, the rest used students as participants. The majority of the articles originated from the aviation domain: only the study conducted in HAMMLAB considered process control in power plants. In the experimental studies, the independent variable was level of automation, or reliability of automation, while the most common dependent variables were workload, situation awareness, complacency, trust, and criteria of performance, e.g., number of correct responses or response time. Although the studies highlight important aspects of human-automation interaction, it is still unclear how system performance is affected. Nevertheless, the fact that many factors seem to be involved is taken as support for the system-oriented approach of the HCA programme. In conclusion, the review provides valuable input both to the design of experiments and to the development of a general theory. (Author). refs

  9. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing...
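
    The expected-model-change selection criterion can be sketched for plain logistic regression (the paper uses regularized risk functionals and separate losses for noisy and true labels, which this sketch omits). For logistic loss the gradient for label y is (p - y)x, so the expected gradient magnitude under the model's own belief p reduces to 2p(1-p)||x||:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def query_index(w, X):
        """Pick the pattern whose relabeling is expected to change the
        model most: score = p*||(p-1)x|| + (1-p)*||p*x|| = 2p(1-p)||x||."""
        p = sigmoid(X @ w)
        scores = 2.0 * p * (1.0 - p) * np.linalg.norm(X, axis=1)
        return int(np.argmax(scores))
    ```

    Patterns near the decision boundary (p close to 0.5) with large norm score highest, which matches the intuition that correcting their labels moves the model the most.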

  10. Generalised Batho correction factor

    International Nuclear Information System (INIS)

    Siddon, R.L.

    1984-01-01

    There are various approximate algorithms available to calculate the radiation dose in the presence of a heterogeneous medium. The Webb and Fox product over layers formulation of the generalised Batho correction factor requires determination of the number of layers and the layer densities for each ray path. It has been shown that the Webb and Fox expression is inefficient for the heterogeneous medium which is expressed as regions of inhomogeneity rather than layers. The inefficiency of the layer formulation is identified as the repeated problem of determining for each ray path which inhomogeneity region corresponds to a particular layer. It has been shown that the formulation of the Batho correction factor as a product over inhomogeneity regions avoids that topological problem entirely. The formulation in terms of a product over regions simplifies the computer code and reduces the time required to calculate the Batho correction factor for the general heterogeneous medium. (U.K.)
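
    The product-over-regions formulation can be sketched with the power-law (generalised Batho) form, in which each region boundary along the ray contributes a tissue-air-ratio factor raised to the density contrast across it. The tar() model and the geometry below are illustrative placeholders, not clinical data, and the exact exponent convention varies between formulations:

    ```python
    import math

    def batho_cf(boundaries, tar):
        """Generalised Batho correction as a product over regions.

        boundaries: list of (distance_from_point_to_top_of_region, density),
        ordered from the region containing the calculation point out to the
        surface.  The density "above" the outermost region is taken as 1.0
        (water-equivalent reference).  One common power-law form:
            CF = prod_i TAR(d_i) ** (rho_i - rho_above_i)
        """
        cf = 1.0
        for i, (d, rho) in enumerate(boundaries):
            rho_above = boundaries[i + 1][1] if i + 1 < len(boundaries) else 1.0
            cf *= tar(d) ** (rho - rho_above)
        return cf

    # Toy tissue-air ratio model (illustrative only).
    tar = lambda d: 1.2 * math.exp(-0.04 * d)
    ```

    In a homogeneous water medium all density contrasts vanish, so every exponent is zero and the correction factor is exactly 1; a low-density lung region above the point yields a factor above 1, reflecting reduced attenuation. Iterating over regions rather than layers is precisely what removes the topological bookkeeping the abstract describes.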

  11. THE SECONDARY EXTINCTION CORRECTION

    Energy Technology Data Exchange (ETDEWEB)

    Zachariasen, W. H.

    1963-03-15

    It is shown that Darwin's formula for the secondary extinction correction, which has been universally accepted and extensively used, contains an appreciable error in the x-ray diffraction case. The correct formula is derived. As a first-order correction for secondary extinction, Darwin showed that one should use an effective absorption coefficient mu + gQ, where an unpolarized incident beam is presumed. The new derivation shows that the effective absorption coefficient is mu + 2gQ(1 + cos⁴2θ)/(1 + cos²2θ)², which gives mu + gQ at θ = 0° and θ = 90°, but mu + 2gQ at θ = 45°. Darwin's theory remains valid when applied to neutron diffraction. (auth)
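
    The corrected effective absorption coefficient is easy to evaluate and check at the quoted angles:

    ```python
    import math

    def mu_eff(mu, gQ, theta_deg):
        """Zachariasen's corrected effective absorption coefficient for
        secondary extinction (x-ray case, unpolarized incident beam)."""
        c = math.cos(math.radians(2.0 * theta_deg))
        return mu + 2.0 * gQ * (1.0 + c**4) / (1.0 + c**2) ** 2
    ```

    At theta = 0 and theta = 90 degrees the angular factor is 1/2, reproducing Darwin's mu + gQ, while at theta = 45 degrees (cos 2θ = 0) it is 1, giving mu + 2gQ: twice Darwin's correction.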

  12. Rapid automated nuclear chemistry

    International Nuclear Information System (INIS)

    Meyer, R.A.

    1979-01-01

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC

  13. Automated optical assembly

    Science.gov (United States)

    Bala, John L.

    1995-08-01

    Automation and polymer science represent fundamental new technologies which can be directed toward realizing the goal of establishing a domestic, world-class, commercial optics business. Use of innovative optical designs using precision polymer optics will enable the US to play a vital role in the next generation of commercial optical products. The increased cost savings inherent in the utilization of optical-grade polymers outweigh almost every advantage of using glass for high volume situations. Optical designers must gain experience with combined refractive/diffractive designs and broaden their knowledge base regarding polymer technology beyond a cursory intellectual exercise. Implementation of a fully automated assembly system, combined with utilization of polymer optics, constitutes the type of integrated manufacturing process which will enable the US to successfully compete with the low-cost labor employed in the Far East, as well as to produce an equivalent product.

  14. Automated breeder fuel fabrication

    International Nuclear Information System (INIS)

    Goldmann, L.H.; Frederickson, J.R.

    1983-01-01

    The objective of the Secure Automated Fabrication (SAF) Project is to develop remotely operated equipment for the processing and manufacturing of breeder reactor fuel pins. The SAF line will be installed in the Fuels and Materials Examination Facility (FMEF). The FMEF is presently under construction at the Department of Energy's (DOE) Hanford site near Richland, Washington, and is operated by the Westinghouse Hanford Company (WHC). The fabrication and support systems of the SAF line are designed for computer-controlled operation from a centralized control room. Remote and automated fuel fabrication operations will result in: reduced radiation exposure to workers; enhanced safeguards; improved product quality; near real-time accountability, and increased productivity. The present schedule calls for installation of SAF line equipment in the FMEF beginning in 1984, with qualifying runs starting in 1986 and production commencing in 1987. 5 figures

  15. Automated multiple failure FMEA

    International Nuclear Information System (INIS)

    Price, C.J.; Taylor, N.S.

    2002-01-01

    Failure mode and effects analysis (FMEA) is typically performed by a team of engineers working together. In general, they will only consider single point failures in a system. Consideration of all possible combinations of failures is impractical for all but the simplest example systems. Even if the task of producing the FMEA report for the full multiple failure scenario were automated, it would still be impractical for the engineers to read, understand and act on all of the results. This paper shows how approximate failure rates for components can be used to select the most likely combinations of failures for automated investigation using simulation. The important information can be automatically identified from the resulting report, making it practical for engineers to study and act on the results. The strategy described in the paper has been applied to a range of electrical subsystems, and the results have confirmed that the strategy described here works well for realistically complex systems
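
    The selection strategy described, ranking failure combinations by approximate likelihood so that only the most probable ones are simulated, can be sketched as follows (independent failures assumed; rates below are placeholders):

    ```python
    from itertools import combinations

    def likely_failure_sets(rates, max_order=3, top_k=5):
        """Rank failure combinations by the product of component failure
        rates (independence assumed) and keep only the most probable."""
        names = list(rates)
        cands = []
        for order in range(1, max_order + 1):
            for combo in combinations(names, order):
                p = 1.0
                for n in combo:
                    p *= rates[n]
                cands.append((p, combo))
        cands.sort(reverse=True)
        return cands[:top_k]
    ```

    Each selected combination would then be injected into the simulation; pruning by probability is what keeps the multiple-failure FMEA tractable for realistically complex systems.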

  16. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots, over 100 PB of storage space on disk or tape. Monitoring of status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  17. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    Accountability is an intuitively stronger property than verifiability, as the latter rests only on the possibility of detecting the failure of a goal, while the former requires that the system can detect the misbehaving parties who caused that failure. A plethora of accountability and verifiability definitions have been proposed in the literature. Those definitions are either very specific to the protocols in question, hence not applicable in other scenarios, or too general and widely applicable but requiring complicated and hard-to-follow manual proofs. In this paper, we advance formal definitions of verifiability and accountability that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we formally point out the relation between verifiability and accountability. We validate our definitions...

  18. Bryant J. correction formula

    International Nuclear Information System (INIS)

    Tejera R, A.; Cortes P, A.; Becerril V, A.

    1990-03-01

    For the practical application of the method proposed by J. Bryant, the authors introduced a series of small corrections, related to the background, the dead time of the detectors and channels, the resolving time of the coincidences, the accidental coincidences, the decay scheme, the gamma efficiency of the beta detector, and the beta efficiency of the gamma detector. The derivation of the correction formula is presented in this report, covering 25 combinations of the probability of the first state at the moment of one disintegration and of the second state at the moment of the following disintegration. (Author)
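
    Two of the corrections listed, detector dead time and accidental coincidences, have standard first-order forms; a sketch using the usual non-paralyzable dead-time expression and the 2*tau*N1*N2 chance-coincidence rate (the report's exact formulae are not reproduced here):

    ```python
    def dead_time_correct(rate_meas, tau):
        """Non-paralyzable dead-time correction: n_true = n / (1 - n*tau)."""
        return rate_meas / (1.0 - rate_meas * tau)

    def accidental_coincidences(rate_1, rate_2, tau_res):
        """First-order chance-coincidence rate for resolving time tau_res."""
        return 2.0 * tau_res * rate_1 * rate_2
    ```

    In 4pi beta-gamma coincidence counting these corrections are applied to the singles and coincidence channels before the activity is computed, which is why even "small" corrections of this kind matter for standardization work.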

  19. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit... An advantage of the model correction factor method is that, in its simpler form, not using gradient information on the original limit state function, or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  20. Automation and Mankind

    Science.gov (United States)

    1960-08-07

    ...limited by the capabilities of the human organism in the matter of control of its processes. In our time, the speeds of technological processes are in many cases limited by the conditions of control. The speed of human reaction is limited and therefore, at present, only processes of a relatively... It can be foreseen that automation will completely free man from work under conditions of high temperatures, pressures, and pollution...

  1. Automated Cooperative Trajectories

    Science.gov (United States)

    Hanson, Curt; Pahle, Joseph; Brown, Nelson

    2015-01-01

    This presentation is an overview of the Automated Cooperative Trajectories project. An introduction to the phenomena of wake vortices is given, along with a summary of past research into the possibility of extracting energy from the wake by flying close parallel trajectories. Challenges and barriers to adoption of civilian automatic wake surfing technology are identified. A hardware-in-the-loop simulation is described that will support future research. Finally, a roadmap for future research and technology transition is proposed.

  2. Automating ASW fusion

    OpenAIRE

    Pabelico, James C.

    2011-01-01

    Approved for public release; distribution is unlimited. This thesis examines ASW eFusion, an anti-submarine warfare (ASW) tactical decision aid (TDA) that utilizes Kalman filtering to improve battlespace awareness by simplifying and automating the track management process involved in anti-submarine warfare (ASW) watchstanding operations. While this program can currently help the ASW commander manage uncertainty and make better tactical decisions, the program has several limitations. Comman...
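
    The Kalman filtering at the heart of the track-management automation can be illustrated with a scalar constant-position filter; a minimal sketch, not the ASW eFusion implementation:

    ```python
    def kalman_1d(z_list, q=1e-3, r=1.0):
        """Scalar constant-position Kalman filter: smooths noisy
        position measurements z_list with process noise q and
        measurement noise r."""
        x, p = z_list[0], 1.0
        out = [x]
        for z in z_list[1:]:
            p += q                  # predict: uncertainty grows
            k = p / (p + r)         # Kalman gain
            x += k * (z - x)        # update with the new measurement
            p *= (1.0 - k)          # uncertainty shrinks after update
            out.append(x)
        return out
    ```

    In a track manager, one such filter (in practice multi-dimensional, with velocity states) runs per contact, and automating their creation, association, and deletion is what relieves the watchstander of manual track bookkeeping.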

  3. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    2008-03-01

    Full Text Available Functional MRI resting state and connectivity studies of brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from lung and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open source, physiological signal processing program, called PhysioNoise, in the python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.
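
    The core of cardiac waveform processing, locating peaks in the physiological signal, can be sketched as follows (PhysioNoise's actual algorithms and visualization are considerably more elaborate):

    ```python
    import numpy as np

    def find_peaks_simple(signal, min_height=0.0):
        """Indices of strict local maxima above min_height, a crude
        stand-in for cardiac R-peak detection."""
        s = np.asarray(signal, dtype=float)
        idx = np.where((s[1:-1] > s[:-2])
                       & (s[1:-1] > s[2:])
                       & (s[1:-1] > min_height))[0] + 1
        return idx
    ```

    Peak times like these are what physiological noise correction schemes (e.g., RETROICOR-style regressors) consume to model cardiac-phase fluctuations in the fMRI time series.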

  4. Autonomy, Automation, and Systems

    Science.gov (United States)

    Turner, Philip R.

    1987-02-01

    Aerospace industry interest in autonomy and automation, given fresh impetus by the national goal of establishing a Space Station, is becoming a major item of research and technology development. The promise of new technology arising from research in Artificial Intelligence (AI) has focused much attention on its potential in autonomy and automation. These technologies can improve performance in autonomous control functions that involve planning, scheduling, and fault diagnosis of complex systems. There are, however, many aspects of system and subsystem design in an autonomous system that impact AI applications, but do not directly involve AI technology. Development of a system control architecture, establishment of an operating system within the design, providing command and sensory data collection features appropriate to automated operation, and the use of design analysis tools to support system engineering are specific examples of major design issues. Aspects such as these must also receive attention and technology development support if we are to implement complex autonomous systems within the realistic limitations of mass, power, cost, and available flight-qualified technology that are all-important to a flight project.

  5. Longwall automation 2

    Energy Technology Data Exchange (ETDEWEB)

    David Hainsworth; David Reid; Con Caris; J.C. Ralston; C.O. Hargrave; Ron McPhee; I.N. Hutchinson; A. Strange; C. Wesner [CSIRO (Australia)

    2008-05-15

    This report covers a nominal two-year extension to the Major Longwall Automation Project (C10100). Production standard implementation of Longwall Automation Steering Committee (LASC) automation systems has been achieved at Beltana and Broadmeadow mines. The systems are now used on a 24/7 basis and have provided production benefits to the mines. The LASC Information System (LIS) has been updated and has been implemented successfully in the IT environment of major coal mining houses. This enables 3D visualisation of the longwall environment and equipment to be accessed online. A simulator has been specified and a prototype system is now ready for implementation. The Shearer Position Measurement System (SPMS) has been upgraded to a modular commercial production standard hardware solution. A compact hardware solution for visual face monitoring has been developed, an approved enclosure for a thermal infrared camera has been produced, and software for providing horizon control through faulted conditions has been delivered. The incorporation of the LASC Cut Model information into OEM horizon control algorithms has been bench and underground tested. A prototype system for shield convergence monitoring has been produced and studies to identify techniques for coal flow optimisation and void monitoring have been carried out. Liaison with equipment manufacturers has been maintained and technology delivery mechanisms for LASC hardware and software have been established.

  6. Automation in biological crystallization.

    Science.gov (United States)

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  7. A Toolchain to Produce Correct-by-Construction OCaml Programs

    OpenAIRE

    Filliâtre , Jean-Christophe; Gondelman , Léon; Paskevich , Andrei; Pereira , Mário; Melo De Sousa , Simão

    2018-01-01

    This paper presents a methodology to get correct-by-construction OCaml programs using the Why3 tool. First, a formal behavioral specification is given in the form of an OCaml module signature extended with type invariants and function contracts, in the spirit of JML. Second, an implementation is written in the programming language of Why3 and then verified with respect to the specification. Finally, an OCaml program is obtained by an automated translation. Our methodology is illustrated with ...

  8. AUTOMATED INADVERTENT INTRUDER APPLICATION

    International Nuclear Information System (INIS)

    Koffman, L.; Lee, P.; Cook, J.; Wilhite, E.

    2007-01-01

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure, referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell, and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and
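    The "straightforward algebraic calculation" structure described above (concentration times pathway dose conversion factor, summed over pathways and radionuclides) can be sketched as follows. The nuclides, concentrations, and DCF values are invented placeholders, not SRS data:

```python
def scenario_dose(concentrations, dcf_by_pathway):
    """Total dose (mrem/yr) for one intruder scenario.

    concentrations: Ci/m^3 by radionuclide
    dcf_by_pathway: {pathway: {nuclide: mrem/yr per Ci/m^3}}
    """
    return sum(
        concentrations.get(nuclide, 0.0) * dcf
        for pathway in dcf_by_pathway.values()
        for nuclide, dcf in pathway.items()
    )

# Made-up example inputs for an "agriculture"-style scenario.
conc = {"Cs-137": 2.0e-6, "Sr-90": 5.0e-7}
agriculture = {
    "vegetable_ingestion": {"Cs-137": 4.0e8, "Sr-90": 9.0e8},
    "external_exposure":   {"Cs-137": 1.2e9},
}
print(round(scenario_dose(conc, agriculture), 6))   # -> 3650.0
```

    Moving such a sum from ad hoc spreadsheet formulas into one audited function is exactly the kind of change that eliminates cell-by-cell copying errors.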

  9. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained by the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those by Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in the UAM (11 %). However, a 20 to 30 percent increase in %ERROR was observed for the NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels for the UAM. Finally, a comparison between images obtained by 180 deg and 360 deg scans and reconstructed with the RPC method showed that the degree of distortion of the contour of the simulated ventricles in the 180 deg scan was 15 % higher than that in the 360 deg scan. (Namekawa, K.)

  10. Text Induced Spelling Correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word
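    As a toy illustration of corpus-derived spelling correction (TISC's actual anagram-hashing algorithm is different and far more scalable), one can build a frequency lexicon from raw text without supervision and return the most frequent in-lexicon word within edit distance one:

```python
from collections import Counter
import string

# Tiny stand-in corpus; a real system would use a very large raw-text corpus.
corpus = "the cat sat on the mat the cat ate the rat".split()
lexicon = Counter(corpus)

def edits1(word):
    """All strings at edit distance 1 from word."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + replaces + inserts + transposes)

def correct(word):
    """Return word unchanged if known, else the best in-lexicon neighbour."""
    if word in lexicon:
        return word
    candidates = [w for w in edits1(word) if w in lexicon]
    return max(candidates, key=lexicon.get, default=word)

print(correct("caat"))   # -> cat
```

    Non-word errors ("caat") are mapped to lexicon entries; words absent from the lexicon with no close neighbour are left untouched.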

  11. Ballistic deficit correction

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.; Curien, D.

    1991-01-01

    The EUROGAM data-acquisition has to handle a large number of events/s. Typical in-beam experiments using heavy-ion fusion reactions assume the production of about 50 000 compound nuclei per second deexciting via particle and γ-ray emissions. The very powerful γ-ray detection of EUROGAM is expected to produce high-fold event rates as large as 10 4 events/s. Such high count rates introduce, in a common dead time mode, large dead times for the whole system associated with the processing of the pulse, its digitization and its readout (from the preamplifier pulse up to the readout of the information). In order to minimize the dead time the shaping time constant τ, usually about 3 μs for large volume Ge detectors has to be reduced. Smaller shaping times, however, will adversely affect the energy resolution due to ballistic deficit. One possible solution is to operate the linear amplifier, with a somewhat smaller shaping time constant (in the present case we choose τ = 1.5 μs), in combination with a ballistic deficit compensator. The ballistic deficit can be corrected in different ways using a Gated Integrator, a hardware correction or even a software correction. In this paper we present a comparative study of the software and hardware corrections as well as gated integration

  12. Correctness of concurrent processes

    NARCIS (Netherlands)

    E.R. Olderog (Ernst-Rüdiger)

    1989-01-01

    textabstractA new notion of correctness for concurrent processes is introduced and investigated. It is a relationship P sat S between process terms P built up from operators of CCS [Mi 80], CSP [Ho 85] and COSY [LTS 79] and logical formulas S specifying sets of finite communication sequences as in

  13. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training.
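    Hamming's construction can be shown concretely with the (7,4) code, where three parity checks both detect and locate any single-bit error:

```python
def encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Fix at most one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # syndrome read as binary = error position
    if pos:
        c[pos - 1] ^= 1          # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                     # inject a single-bit fault
print(correct(word))             # -> [1, 0, 1, 1]
```

    The syndrome bits, read as a binary number, give the position of the flipped bit directly; this is the "efficient means" the snippet alludes to.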

  14. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68 Ge/ 68 Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: Transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)
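    The transmission-measurement principle reduces to a ratio: for each line of response, the attenuation correction factor (ACF) is the blank-scan count divided by the transmission-scan count, and the emission data are multiplied by it. A sketch with synthetic numbers, not real scanner data:

```python
import numpy as np

mu_water = 0.096                          # linear attenuation at 511 keV, 1/cm
path_cm = np.array([10.0, 20.0, 40.0])    # tissue path length per line of response

blank = np.full(3, 1.0e6)                 # counts with nothing in the gantry
transmission = blank * np.exp(-mu_water * path_cm)   # counts through the patient

acf = blank / transmission                # attenuation correction factors
emission = np.array([500.0, 300.0, 100.0])
corrected = emission * acf                # corrected emission data

print(np.round(acf, 1))
```

    The longest path here gives an ACF near 50, matching the order of magnitude the abstract quotes for whole-body measurements.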

  15. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... their surgery, orthognathic surgery is performed to correct functional problems. Jaw Surgery can have a dramatic effect on many aspects of life. Following are some of the conditions that may ... front, or side Facial injury Birth defects Receding lower jaw and ...

  16. Error Correcting Codes

    Indian Academy of Sciences (India)

    successful consumer products of all time - the Compact Disc (CD) digital audio .... We can make ... only 2t additional parity check symbols are required, to be able to correct t .... display information (containing music related data and a table.

  17. 10. Correctness of Programs

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 3, Issue 4. Algorithms - Correctness of Programs. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  18. Space environments and their effects on space automation and robotics

    Science.gov (United States)

    Garrett, Henry B.

    1990-01-01

    Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.

  19. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade-induced automation in manufacturing firms using unique data combining a retrospective survey that we have assembled with register data for 2005-2010. In particular, we establish a causal effect whereby firms that have specialized in product types for which Chinese exports to the world market have risen sharply invest more in automated capital compared to firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with high increases in scale and scope of automation have faster productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation.

  20. Automation in organizations: Eternal conflict

    Science.gov (United States)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  1. Automation System Products and Research

    OpenAIRE

    Rintala, Mikko; Sormunen, Jussi; Kuisma, Petri; Rahkala, Matti

    2014-01-01

    Automation systems are used in most buildings nowadays. In the past they were mainly used in industry to control and monitor critical systems. During the past few decades the automation systems have become more common and are used today from big industrial solutions to homes of private customers. With the growing need for ecologic and cost-efficient management systems, home and building automation systems are becoming a standard way of controlling lighting, ventilation, heating etc. Auto...

  2. Guidelines for Automation Project Execution

    OpenAIRE

    Takkinen, Heidi

    2011-01-01

    The purpose of this Master’s thesis was to create instructions for executing an automation project. Sarlin Oy Ab needed directions on how to execute an automation project. Sarlin is starting up a new business area offering total project solutions for customers. Sarlin focuses on small and minor automation projects on domestic markets. The thesis represents issues related to project execution starting from the theory of the project to its kick-off and termination. Site work is one importan...

  3. 78 FR 53466 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-08-29

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... National Customs Automation Program (NCAP) tests concerning document imaging, known as the Document Image... the National Customs Automation Program (NCAP) tests concerning document imaging, known as the...

  4. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system, substation feeder, and customer functions, potential benefits, automation costs, planning and engineering considerations, automation trends, databases, system operation, computer modeling of system, and distribution management systems

  5. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner.

  6. Automating CPM-GOMS

    Science.gov (United States)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2002-01-01

    CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling.

  7. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who have the need to be less dependent of spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients.Conclusions: The introduction of excimer laser in refractive surgery has given the new opportunities of remodelling the cornea. The laser energy can be delivered on the stromal surface like in PRK or deeper on the corneal stroma by means of lamellar surgery. In LASIK flap is created with microkeratome in LASEK with ethanol and in epi-LASIK the ultra thin flap is created mechanically.

  8. PS Booster Orbit Correction

    CERN Document Server

    Chanel, M; Rumolo, G; Tomás, R; CERN. Geneva. AB Department

    2008-01-01

    At the end of the 2007 run, orbit measurements were carried out in the 4 rings of the PS Booster (PSB) for different working points and beam energies. The aim of these measurements was to provide the necessary input data for a PSB realignment campaign during the 2007/2008 shutdown. Currently, only very few corrector magnets can be operated reliably in the PSB; therefore the orbit correction has to be achieved by displacing (horizontally and vertically) and/or tilting some of the defocusing quadrupoles (QDs). In this report we first describe the orbit measurements, followed by a detailed explanation of the orbit correction strategy. Results and conclusions are presented in the last section.
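    Although the PSB correction described above is applied through quadrupole displacements and tilts rather than dedicated correctors, the underlying algebra is the usual least-squares orbit correction with a response matrix. A generic sketch with an invented response matrix (not PSB optics):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bpm, n_corr = 16, 6
R = rng.normal(size=(n_bpm, n_corr))      # stand-in response matrix:
                                          # orbit shift per unit corrector kick

# Simulate a measured orbit produced by unknown kicks plus BPM noise.
true_kicks = rng.normal(size=n_corr)
orbit = R @ true_kicks + 0.01 * rng.normal(size=n_bpm)

# Least-squares kicks that minimize the residual orbit at the BPMs.
kicks, *_ = np.linalg.lstsq(R, -orbit, rcond=None)
residual = orbit + R @ kicks

print(np.round(float(np.abs(residual).max()), 3))
```

    In the PSB case the columns of R would describe the orbit response to displacing or tilting each defocusing quadrupole, but the fit is the same.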

  9. Automating dipole subtraction

    International Nuclear Information System (INIS)

    Hasegawa, K.; Moch, S.; Uwer, P.

    2008-07-01

    We report on automating the Catani-Seymour dipole subtraction which is a general procedure to treat infrared divergences in real emission processes at next-to-leading order in QCD. The automatization rests on three essential steps: the creation of the dipole terms, the calculation of the color linked squared Born matrix elements, and the evaluation of different helicity amplitudes. The routines have been tested for a number of complex processes, such as the real emission process gg → t t̄ ggg. (orig.)

  10. Automating dipole subtraction

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, K.; Moch, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Uwer, P. [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Theoretische Teilchenphysik

    2008-07-15

    We report on automating the Catani-Seymour dipole subtraction which is a general procedure to treat infrared divergences in real emission processes at next-to-leading order in QCD. The automatization rests on three essential steps: the creation of the dipole terms, the calculation of the color linked squared Born matrix elements, and the evaluation of different helicity amplitudes. The routines have been tested for a number of complex processes, such as the real emission process gg → t t̄ ggg. (orig.)

  11. Fossil power plant automation

    International Nuclear Information System (INIS)

    Divakaruni, S.M.; Touchton, G.

    1991-01-01

    This paper elaborates on issues facing the utilities industry and seeks to address how new computer-based control and automation technologies resulting from recent microprocessor evolution, can improve fossil plant operations and maintenance. This in turn can assist utilities to emerge stronger from the challenges ahead. Many presentations at the first ISA/EPRI co-sponsored conference are targeted towards improving the use of computer and control systems in the fossil and nuclear power plants and we believe this to be the right forum to share our ideas

  12. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  13. Automated tuning of the advanced photon source booster synchrotron

    International Nuclear Information System (INIS)

    Biedron, S.G.; Milton, S.V.

    1997-01-01

    The acceleration cycle of the Advanced Photon Source (APS) booster synchrotron is completed within 223 ms and is repeated at 2 Hz. Unless properly corrected, transverse and longitudinal injection errors can lead to inefficient booster performance. In order to simplify daily operation, automated tuning methods have been developed. Through the use of beam position monitor (BPM) readings, transfer line corrector magnets, magnet ramp timing, and empirically determined response functions, the injection process is optimized by correcting the first turn trajectory to the measured closed orbit. These tuning algorithms and their implementation are described here along with an evaluation of their performance

  14. Automated planning of breast radiotherapy using cone beam CT imaging

    International Nuclear Information System (INIS)

    Amit, Guy; Purdie, Thomas G.

    2015-01-01

    Purpose: Develop and clinically validate a methodology for using cone beam computed tomography (CBCT) imaging in an automated treatment planning framework for breast IMRT. Methods: A technique for intensity correction of CBCT images was developed and evaluated. The technique is based on histogram matching of CBCT image sets, using information from “similar” planning CT image sets from a database of paired CBCT and CT image sets (n = 38). Automated treatment plans were generated for a testing subset (n = 15) on the planning CT and the corrected CBCT. The plans generated on the corrected CBCT were compared to the CT-based plans in terms of beam parameters, dosimetric indices, and dose distributions. Results: The corrected CBCT images showed considerable similarity to their corresponding planning CTs (average mutual information 1.0±0.1, average sum of absolute differences 185 ± 38). The automated CBCT-based plans were clinically acceptable, as well as equivalent to the CT-based plans with average gantry angle difference of 0.99°±1.1°, target volume overlap index (Dice) of 0.89±0.04 although with slightly higher maximum target doses (4482±90 vs 4560±84, P < 0.05). Gamma index analysis (3%, 3 mm) showed that the CBCT-based plans had the same dose distribution as plans calculated with the same beams on the registered planning CTs (average gamma index 0.12±0.04, gamma <1 in 99.4%±0.3%). Conclusions: The proposed method demonstrates the potential for a clinically feasible and efficient online adaptive breast IMRT planning method based on CBCT imaging, integrating automation
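    The histogram-matching idea at the heart of the intensity correction can be sketched with NumPy: map CBCT intensities so their empirical CDF matches that of a reference planning CT. The arrays below are synthetic, and this is a generic quantile mapping, not the authors' exact implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source so its value distribution matches reference."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size      # empirical CDF of source
    r_cdf = np.cumsum(r_counts) / reference.size   # empirical CDF of reference
    # For each source quantile, take the reference value at the same quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
cbct = rng.normal(0, 50, size=(64, 64))    # shifted/compressed intensity scale
ct = rng.normal(40, 300, size=(64, 64))    # reference intensity distribution

matched = match_histogram(cbct, ct)
print(round(float(matched.mean())), round(float(matched.std())))
```

    In the paper's workflow the reference histogram comes from a "similar" planning CT selected from a database of paired image sets; here a single reference array stands in for that step.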

  15. Automation bias in electronic prescribing.

    Science.gov (United States)

    Lyell, David; Magrabi, Farah; Raban, Magdalena Z; Pont, L G; Baysari, Melissa T; Day, Richard O; Coiera, Enrico

    2017-03-16

    Clinical decision support (CDS) in e-prescribing can improve safety by alerting users to potential errors, but introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but AB has not yet been tested in e-prescribing. This study tests for the presence of AB in e-prescribing and the impact of task complexity and interruptions on AB. One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70), and 39.2% (p < .0001, n = 120) for low, low + interrupt and high complexity scenarios respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82), and 26.7% (p < .0001, n = 120). Participants also made commission errors: 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82), and 51.7% (p < .0001, n = 120). Task complexity and interruptions had no impact on AB. This study found evidence of AB omission and commission errors in e-prescribing. Verification of CDS alerts is key to avoiding AB errors. However, interventions focused on this have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS.

  16. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...
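    One of the approaches mentioned, adaptive random testing, can be illustrated in its fixed-size-candidate-set form: each new test input is the candidate farthest from all previously executed tests, which spreads tests more evenly over the input domain than pure random sampling. A simplified sketch over a one-dimensional numeric input domain (the domain bounds and candidate-set size are arbitrary choices):

```python
import random

def art_inputs(n_tests, n_candidates=10, lo=0.0, hi=100.0, seed=0):
    """Fixed-size-candidate-set adaptive random testing over [lo, hi]."""
    rng = random.Random(seed)
    executed = [rng.uniform(lo, hi)]          # first test is purely random
    while len(executed) < n_tests:
        candidates = [rng.uniform(lo, hi) for _ in range(n_candidates)]
        # pick the candidate whose nearest executed test is farthest away
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed

tests = art_inputs(8)
print(sorted(round(x, 1) for x in tests))
```

    For multi-dimensional or structured inputs the same loop applies once a distance function between test cases is defined, which is where much of the practical difficulty lies.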

  17. Automating quantum experiment control

    Science.gov (United States)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  18. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT ...

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of the Soil Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion (KINEROS2) hydrologic models. The application of these two models allows AGWA to conduct hydrologic modeling and watershed assessments at multiple temporal and spatial scales. AGWA’s current outputs are runoff (volumes and peaks) and sediment yield, plus nitrogen and phosphorus with the SWAT model. AGWA uses commonly available GIS data layers to fully parameterize, execute, and visualize results from both models. Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM) based on the individual model requirements. The watershed model elements are then intersected with soils and land cover data layers to derive the requisite model input parameters. The chosen model is then executed, and the results are imported back into AGWA for visualization. This allows managers to identify potential problem areas where additional monitoring can be undertaken or mitigation activities can be focused. AGWA also has tools to apply an array of best management practices. There are currently two versions of AGWA available; AGWA 1.5 for

  19. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has process-tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver- related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  20. Automated screening for retinopathy

    Directory of Open Access Journals (Sweden)

    A. S. Rodin

    2014-07-01

    Retinal pathology is a common cause of irreversible loss of central vision among the senior population. Detection of the earliest signs of retinal disease can be facilitated by viewing retinal images available from telemedicine networks. To facilitate the screening of retinal images, software applications based on image recognition technology are currently at various stages of development. Purpose: To develop and implement computerized image recognition software that can be used as a decision support technology for retinal image screening for various types of retinopathies. Methods: The software application for retinal image recognition was developed using the C++ language. It was tested on a dataset of 70 images with various types of pathological features (age-related macular degeneration, chorioretinitis, central serous chorioretinopathy and diabetic retinopathy). Results: It was shown that the system can achieve a sensitivity of 73 % and a specificity of 72 %. Conclusion: Automated detection of macular lesions using the proposed software can significantly reduce the manual grading workload. In addition, automated detection of retinal lesions can be implemented as a clinical decision support system for telemedicine screening. It is anticipated that further development of this technology can become part of a diagnostic image analysis system for electronic health records.

  1. Automation from pictures

    International Nuclear Information System (INIS)

    Kozubal, A.J.

    1992-01-01

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. (author)

  2. Automated digital magnetofluidics

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, J; Garcia, A A; Marquez, M [Harrington Department of Bioengineering Arizona State University, Tempe AZ 85287-9709 (United States)], E-mail: tony.garcia@asu.edu

    2008-08-15

    Drops can be moved in complex patterns on superhydrophobic surfaces using a reconfigured computer-controlled x-y metrology stage with a high degree of accuracy, flexibility, and reconfigurability. The stage employs a DMC-4030 controller which has a RISC-based, clock multiplying processor with DSP functions, accepting encoder inputs up to 22 MHz, provides servo update rates as high as 32 kHz, and processes commands at rates as fast as 40 milliseconds. A 6.35 mm diameter cylindrical NdFeB magnet is translated by the stage causing water drops to move by the action of induced magnetization of coated iron microspheres that remain in the drop and are attracted to the rare earth magnet through digital magnetofluidics. Water drops are easily moved in complex patterns in automated digital magnetofluidics at an average speed of 2.8 cm/s over a superhydrophobic polyethylene surface created by solvent casting. With additional components, some potential uses for this automated microfluidic system include characterization of superhydrophobic surfaces, water quality analysis, and medical diagnostics.

  3. Automated digital magnetofluidics

    Science.gov (United States)

    Schneider, J.; Garcia, A. A.; Marquez, M.

    2008-08-01

    Drops can be moved in complex patterns on superhydrophobic surfaces using a reconfigured computer-controlled x-y metrology stage with a high degree of accuracy, flexibility, and reconfigurability. The stage employs a DMC-4030 controller which has a RISC-based, clock multiplying processor with DSP functions, accepting encoder inputs up to 22 MHz, provides servo update rates as high as 32 kHz, and processes commands at rates as fast as 40 milliseconds. A 6.35 mm diameter cylindrical NdFeB magnet is translated by the stage causing water drops to move by the action of induced magnetization of coated iron microspheres that remain in the drop and are attracted to the rare earth magnet through digital magnetofluidics. Water drops are easily moved in complex patterns in automated digital magnetofluidics at an average speed of 2.8 cm/s over a superhydrophobic polyethylene surface created by solvent casting. With additional components, some potential uses for this automated microfluidic system include characterization of superhydrophobic surfaces, water quality analysis, and medical diagnostics.

  4. How jets get the jitters

    International Nuclear Information System (INIS)

    Zarmi, Y.

    1977-01-01

    Models in which the temporal evolution of hadronic jets and the rapidity ordering of particles within jets are correlated are discussed. Observable effects of such models on the average particle transverse momentum (its energy and longitudinal-momentum dependence) are pointed out. In particular, models in which, within jets, slow particles are produced first and fast particles come out last should exhibit the well known seagull effect, with the average transverse momentum rising, for fixed x, proportionately to the square root of the mean particle multiplicity. If, by analogy, the transverse momentum distributions of partons also exhibit such features, then we have a source of scaling violation in deep inelastic reactions that shows up at high energies rather than at low energies, and a source of an energy- and Q²-dependent average transverse momentum in lepton pair production. (author)
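
The scaling stated in the abstract can be written compactly (the notation here is ours, not the paper's):

```latex
\left.\langle p_T \rangle\right|_{x\ \mathrm{fixed}} \;\propto\; \sqrt{\langle n \rangle}
```

where \(\langle p_T \rangle\) is the average particle transverse momentum and \(\langle n \rangle\) the mean particle multiplicity at a given collision energy.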

  5. Measurement Techniques for Clock Jitter

    Science.gov (United States)

    Lansdowne, Chatwin; Schlesinger, Adam

    2012-01-01

    NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.

  6. Work Planning Automation at Mechanical Subdivision

    OpenAIRE

    Dzindzelėta, Vytautas

    2005-01-01

    Work planning automation at a mechanical subdivision: installation possibilities and future outlook. The aim is to study how work planning changed before and after the automation process and to analyse the automation process methodology.

  7. AUTO: An Automation Simulator.

    Science.gov (United States)

    Gold, Bennett Alan

    In order to devise an aid for the teaching of formal languages and automata theory, a system was developed which allows a student to design, test, and change automata in an interactive manner. This process permits the user to observe the step-by-step operation of a defined automaton and to correct or alter its operation. Thus, the need for lengthy…

  8. Automation for mineral resource development

    Energy Technology Data Exchange (ETDEWEB)

    Norrie, A.W.; Turner, D.R. (eds.)

    1986-01-01

    A total of 55 papers were presented at the symposium under the following headings: automation and the future of mining; modelling and control of mining processes; transportation for mining; automation and the future of metallurgical processes; modelling and control of metallurgical processes; and general aspects. Fifteen papers have been abstracted separately.

  9. Opening up Library Automation Software

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  10. Resins production: batch plant automation

    International Nuclear Information System (INIS)

    Banti, M.; Mauri, G.

    1996-01-01

    Companies that pursue plant automation without external resources have at their disposal flexible, custom, easy-to-use DCSs that are open towards PLCs. This article explains why Hoechst followed this route in automating its new resin-production plants.

  11. Automated Methods of Corrosion Measurements

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    1997-01-01

    . Mechanical control, recording, and data processing must therefore be automated to a high level of precision and reliability. These general techniques and the apparatus involved have been described extensively. The automated methods of such high-resolution microscopy coordinated with computerized...

  12. Migration monitoring with automated technology

    Science.gov (United States)

    Rhonda L. Millikin

    2005-01-01

    Automated technology can supplement ground-based methods of migration monitoring by providing: (1) unbiased and automated sampling; (2) independent validation of current methods; (3) a larger sample area for landscape-level analysis of habitat selection for stopover, and (4) an opportunity to study flight behavior. In particular, radar-acoustic sensor fusion can...

  13. Automated methods of corrosion measurement

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Bech-Nielsen, Gregers; Reeve, John Ch

    1997-01-01

    to revise assumptions regarding the basis of the method, which sometimes leads to the discovery of as-yet unnoticed phenomena. The present selection of automated methods for corrosion measurements is not motivated simply by the fact that a certain measurement can be performed automatically. Automation...... is applied to nearly all types of measurements today....

  14. Classification of Automated Search Traffic

    Science.gov (United States)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third-party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
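
To make the idea concrete, a minimal sketch of session-level bot detection in the spirit of the record above — the two features (query rate and click behaviour) and the thresholds are illustrative stand-ins, not the paper's actual feature set or classifiers:

```python
# Illustrative sketch: flag query sessions as automated using two
# simple features -- query rate and click-through behaviour.

def extract_features(session):
    """session: list of (timestamp_seconds, query, clicked) tuples."""
    n = len(session)
    span = max(session[-1][0] - session[0][0], 1e-9)
    rate = n / span                                   # queries per second
    click_ratio = sum(1 for _, _, c in session if c) / n
    return rate, click_ratio

def is_automated(session, max_rate=1.0, min_click_ratio=0.05):
    """Flag sessions that query faster than a human plausibly types
    and almost never click results (thresholds are illustrative)."""
    rate, click_ratio = extract_features(session)
    return rate > max_rate or click_ratio < min_click_ratio

# A slow, clicking human session vs. a rapid-fire non-clicking bot.
human = [(0.0, "weather", True), (30.0, "news", True), (90.0, "maps", False)]
bot = [(i * 0.1, "q%d" % i, False) for i in range(50)]
print(is_automated(human), is_automated(bot))  # -> False True
```

A real system would feed many such features into trained binary and multiclass classifiers rather than fixed thresholds.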

  15. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
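
The objective/constraint structure of ATA can be illustrated on a toy scale. Real ATA solves a mixed-integer program; here a brute-force search over a tiny hypothetical item bank plays the same role (item names, content areas, and information values are invented for illustration):

```python
# Toy automated test assembly: pick a fixed-length form from an item
# bank, maximizing total item information subject to content
# constraints -- the same objective/constraint shape as the MIP.
from itertools import combinations

# (item_id, content_area, information) -- illustrative bank
bank = [
    ("i1", "algebra", 0.9), ("i2", "algebra", 0.7),
    ("i3", "geometry", 0.8), ("i4", "geometry", 0.4),
    ("i5", "algebra", 0.6),
]

def assemble(bank, length, min_per_area):
    best, best_info = None, -1.0
    for form in combinations(bank, length):
        counts = {}
        for _, area, _ in form:
            counts[area] = counts.get(area, 0) + 1
        if any(counts.get(a, 0) < k for a, k in min_per_area.items()):
            continue  # violates a content specification
        info = sum(w for _, _, w in form)
        if info > best_info:
            best, best_info = form, info
    return best, best_info

form, info = assemble(bank, 3, {"algebra": 1, "geometry": 1})
print(sorted(i for i, _, _ in form), round(info, 2))
```

At realistic bank sizes the combinatorics explode, which is why the methodology relies on mixed-integer programming solvers instead of enumeration.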

  16. Automated evaluation of ultrasonic indications

    International Nuclear Information System (INIS)

    Hansch, M.K.T.; Stegemann, D.

    1994-01-01

    Future requirements of reliability and reproducibility in quality assurance demand computer evaluation of defect indications. The ultrasonic method with its large field of applications and a high potential for automation provides all preconditions for fully automated inspection. The survey proposes several desirable hardware improvements, data acquisition requirements and software configurations. (orig.) [de

  17. Automated Methods Of Corrosion Measurements

    DEFF Research Database (Denmark)

    Bech-Nielsen, Gregers; Andersen, Jens Enevold Thaulov; Reeve, John Ch

    1997-01-01

    The chapter describes the following automated measurements: Corrosion Measurements by Titration, Imaging Corrosion by Scanning Probe Microscopy, Critical Pitting Temperature and Application of the Electrochemical Hydrogen Permeation Cell.

  18. Robotics/Automated Systems Technicians.

    Science.gov (United States)

    Doty, Charles R.

    Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…

  19. Translation: Aids, Robots, and Automation.

    Science.gov (United States)

    Andreyewsky, Alexander

    1981-01-01

    Examines electronic aids to translation both as ways to automate it and as an approach to solve problems resulting from shortage of qualified translators. Describes the limitations of robotic MT (Machine Translation) systems, viewing MAT (Machine-Aided Translation) as the only practical solution and the best vehicle for further automation. (MES)

  20. Brain Image Motion Correction

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus

    2015-01-01

    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...

  1. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.

  2. Automated one-loop calculations with GOSAM

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata

    2011-11-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  3. Automated one-loop calculations with GOSAM

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]; Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany)]; Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany)]; Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology]; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica]; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center]; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]

    2011-11-15

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  4. Automated Formal Verification for PLC Control Systems

    CERN Multimedia

    Fernández Adiego, Borja

    2014-01-01

    Programmable Logic Controllers (PLCs) are widely used devices used in industrial control systems. Ensuring that the PLC software is compliant with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of the safety-critical software. However, these techniques are still not widely applied in industry due to the complexity of building formal models, which represent the system and the formalization of requirement specifications. We propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g. CTL, LTL) on PLC programs. This methodology is based on an Intermediate Model (IM), meant to transform PLC programs written in any of the languages described in the IEC 61131-3 standard (ST, IL, etc.) to different modeling languages of verification tools. This approach has been applied to CERN PLC programs validating the methodology.

  5. Automated landmark-guided deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
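
The "local small volume matching" step above can be sketched in miniature: around a landmark in one image, exhaustively search a small window in the other image for the patch that matches best. This 2D toy uses sum-of-squared-differences on nested lists purely for illustration; the paper matches gradients on 3D CT/CBCT volumes on a GPU:

```python
# Minimal 2D sketch of local small-volume landmark matching.

def patch(img, y, x, r):
    """(2r+1) x (2r+1) neighbourhood centred on (y, x)."""
    return [row[x - r:x + r + 1] for row in img[y - r:y + r + 1]]

def ssd(p, q):
    """Sum of squared differences between two equal-size patches."""
    return sum((a - b) ** 2 for pr, qr in zip(p, q) for a, b in zip(pr, qr))

def match_landmark(ref, mov, y, x, r=1, search=2):
    """Find where the patch around (y, x) in ref best matches in mov."""
    target = patch(ref, y, x, r)
    best, best_cost = (y, x), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = ssd(target, patch(mov, y + dy, x + dx, r))
            if cost < best_cost:
                best, best_cost = (y + dy, x + dx), cost
    return best

# A bright 3x3 blob at (4, 4) in ref appears shifted to (5, 6) in mov.
ref = [[0] * 12 for _ in range(12)]
mov = [[0] * 12 for _ in range(12)]
for dy in range(-1, 2):
    for dx in range(-1, 2):
        ref[4 + dy][4 + dx] = 9
        mov[5 + dy][6 + dx] = 9
print(match_landmark(ref, mov, 4, 4))  # -> (5, 6)
```

The matched positions then serve as stabilizing control points for the subsequent Demons registration, as the abstract describes.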

  6. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. Many accurate methods of camera calibration are currently available; however, most are time consuming and labor intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally remove the barriers to the use of calibration in practice.
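
The active-fiducial trick in the record above — subtract the LED-off frame from the LED-on frame so only the fiducials remain, then threshold — can be sketched as follows (tiny grayscale "images" as nested lists; pixel values and the threshold are illustrative):

```python
# Sketch of locating active LED fiducials by frame differencing.

def find_fiducials(led_on, led_off, threshold=50):
    """Return (row, col) pixels that brighten only when the LEDs are lit."""
    locations = []
    for y, (row_on, row_off) in enumerate(zip(led_on, led_off)):
        for x, (a, b) in enumerate(zip(row_on, row_off)):
            if a - b > threshold:   # bright only in the LED-on frame
                locations.append((y, x))
    return locations

# Background texture is identical in both frames; two LEDs light up.
led_off = [[10, 20, 10], [30, 10, 20], [10, 10, 10]]
led_on  = [[10, 20, 200], [30, 10, 20], [180, 10, 10]]
print(find_fiducials(led_on, led_off))  # -> [(0, 2), (2, 0)]
```

Because the static checkerboard texture cancels in the difference, only the LED locations survive thresholding, replacing the manual corner clicks.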

  7. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. (paper)

  8. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high consequent national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  9. Printing quality control automation

    Science.gov (United States)

    Trapeznikova, O. V.

    2018-04-01

    One of the most important problems in standardizing the offset printing process is controlling print quality, and automating that control. To solve the problem, software has been developed that takes into account the specifics of the printing system components and their behavior during the printing process. To characterize the distribution of the ink layer on the printed substrate, the so-called deviation of the ink-layer thickness on the sheet from a nominal surface is suggested. The geometric construction of surface projections of the color gamut bodies allows the color reproduction gamut of printing systems to be visualized across brightness ranges and specific color sectors, providing a qualitative comparison of systems by their reproduction of individual colors over varying ranges of brightness.

  10. Automated electronic filter design

    CERN Document Server

    Banerjee, Amal

    2017-01-01

    This book describes a novel, efficient and powerful scheme for designing and evaluating the performance characteristics of any electronic filter designed with predefined specifications. The author explains techniques that enable readers to eliminate complicated manual, and thus error-prone and time-consuming, steps of traditional design techniques. The presentation includes demonstration of efficient automation, using an ANSI C language program, which accepts any filter design specification (e.g. Chebyshev low-pass filter, cut-off frequency, pass-band ripple etc.) as input and generates as output a SPICE (Simulation Program with Integrated Circuit Emphasis) format netlist. Readers then can use this netlist to run simulations with any version of the popular SPICE simulator, increasing accuracy of the final results, without violating any of the key principles of the traditional design scheme.

  11. Berkeley automated supernova search

    Energy Technology Data Exchange (ETDEWEB)

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.; Crawford, F.S.; Burns, M.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982.
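The compare-against-reference step can be sketched as a simple image subtraction with a sigma threshold. This is illustrative only: the search's actual pipeline, registration, calibration, and detection threshold are not described in the abstract, and the images below are synthetic.

```python
# Hedged sketch: subtract a stored reference image from a new galaxy image
# and flag pixels that brighten by more than a few standard deviations.
import numpy as np

def find_candidates(new_image, reference, n_sigma=5.0):
    """Return coordinates of pixels that brightened past n_sigma deviations."""
    diff = new_image.astype(float) - reference.astype(float)
    sigma = diff.std()
    return np.argwhere(diff > n_sigma * sigma)

rng = np.random.default_rng(0)
reference = rng.normal(100.0, 3.0, size=(64, 64))   # stored galaxy image
new = reference + rng.normal(0.0, 3.0, size=(64, 64))
new[20, 30] += 200.0                                # injected transient
print(find_candidates(new, reference))              # reports the transient pixel
```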

  12. Automated asteroseismic peak detections

    DEFF Research Database (Denmark)

    de Montellano, Andres Garcia Saravia Ortiz; Hekker, S.; Themessl, N.

    2018-01-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However......, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible...... of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler....

  13. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical composi- tions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions...... for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...... of the successive notes and intervals, various sets of musical parameters may be invoked. In this chapter, a method is presented that allows for these heterogeneous patterns to be discovered. Motivic repetition with local ornamentation is detected by reconstructing, on top of “surface-level” monodic voices, longer...

  14. Berkeley automated supernova search

    International Nuclear Information System (INIS)

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982

  15. (No) Security in Automation!?

    CERN Document Server

    Lüders, S

    2008-01-01

    Modern Information Technologies like Ethernet, TCP/IP, web server or FTP are nowadays increasingly used in distributed control and automation systems. Thus, information from the factory floor is now directly available at the management level (From Shop-Floor to Top-Floor) and can be manipulated from there. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via Ethernet and attackers are becoming interested in control systems. Unfortunately, control systems lack the standard security features that usual office PCs have. This contribution will elaborate on these problems, discuss the vulnerabilities of modern control systems and present international initiatives for mitigation.

  16. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between the actual end effector trajectory and the desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.
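The position-error-correction idea can be sketched on a toy one-axis plant: feed the accumulated error between the desired and measured positions back into the command so that a constant drift is cancelled. The plant model, gains, and integral-style correction below are invented for illustration; the patent does not disclose this specific control law.

```python
# Toy sketch: integral-style correction of a drifting single-axis actuator.
def make_corrector(k_i=0.5):
    accumulated = 0.0
    def corrected_command(desired, measured):
        nonlocal accumulated
        accumulated += desired - measured      # integrate the position error
        return desired + k_i * accumulated
    return corrected_command

def plant_step(actual, command):
    """Toy actuator: moves toward the command but drifts by -0.2 each step."""
    return actual + 0.8 * (command - actual) - 0.2

correct = make_corrector()
desired, actual = 10.0, 0.0
for _ in range(50):
    actual = plant_step(actual, correct(desired, actual))
print(round(actual, 2))  # settles on the 10.0 target despite the drift
```

Without the correction term, the same plant settles short of the target; the integrated error supplies exactly the offset the drift removes.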

  17. [Automated anesthesia record systems].

    Science.gov (United States)

    Heinrichs, W; Mönk, S; Eberle, B

    1997-07-01

    The introduction of electronic anaesthesia documentation systems was attempted as early as in 1979, although their efficient application has become reality only in the past few years. The advantages of the electronic protocol are apparent: Continuous high quality documentation, comparability of data due to the availability of a data bank, reduction in the workload of the anaesthetist and availability of additional data. Disadvantages of the electronic protocol have also been discussed in the literature. By going through the process of entering data on the course of the anaesthetic procedure on the protocol sheet, the information is mentally absorbed and evaluated by the anaesthetist. This information may, however, be lost when the data are recorded fully automatically-without active involvement on the part of the anaesthetist. Recent publications state that by using intelligent alarms and/or integrated displays manual record keeping is no longer necessary for anaesthesia vigilance. The technical design of automated anaesthesia records depends on an integration of network technology into the hospital. It will be appropriate to connect the systems to the internet, but safety requirements have to be followed strictly. Concerning the database, client server architecture as well as language standards like SQL should be used. Object oriented databases will be available in the near future. Another future goal of automated anaesthesia record systems will be using knowledge based technologies within these systems. Drug interactions, disease related anaesthetic techniques and other information sources can be integrated. At this time, almost none of the commercially available systems has matured to a point where their purchase can be recommended without reservation. There is still a lack of standards for the subsequent exchange of data and a solution to a number of ergonomic problems still remains to be found. 
Nevertheless, electronic anaesthesia protocols will be required in

  18. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  19. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1986-01-01

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 × 10⁹ bits per 14 × 17 inch film, the equivalent of 2,200 computer floppy discs. Parts handling systems and robotics, already applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to automate the exposure step fully. Finally, computer-aided interpretation appears on the horizon. A unit which laser-scans a 14 × 17 inch film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer-aided interpretation). The system, called FDRS (Film Digital Radiography System), is moving toward 50 micron (approximately 16 lines/mm) resolution. This is believed to meet the majority of image-content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system (author)
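As a back-of-envelope check on the quoted scan figures: the abstract gives a 50 micron pitch and a 14 × 17 inch film, but no digitization bit depth, so the 12 bits per pixel below is an assumption (and the abstract's 8 × 10⁹-bit figure describes the film's intrinsic image content, not a digitized file size).

```python
# Worked arithmetic: pixel count and data volume for a 50-micron scan
# of a 14 x 17 inch film, assuming an illustrative 12-bit depth.
INCH_MM = 25.4
pixel_mm = 0.050                          # 50 micron scan pitch
w_px = round(14 * INCH_MM / pixel_mm)     # pixels along the 14-inch edge
h_px = round(17 * INCH_MM / pixel_mm)     # pixels along the 17-inch edge
pixels = w_px * h_px
bits_12 = pixels * 12
print(f"{w_px} x {h_px} = {pixels:,} pixels, ~{bits_12 / 1e9:.1f} Gbit at 12 bit/px")
```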

  20. Comparação entre a correção cilíndrica total e o equivalente esférico na realização da perimetria computadorizada Comparison between the full cylindrical correction and the spherical equivalent in the execution of automated perimetry

    Directory of Open Access Journals (Sweden)

    Paulo Leonardi Filho

    2004-08-01

    PURPOSE: To verify whether there is a statistically significant difference between automated perimetry examinations performed with the full cylindrical correction and with the spherical equivalent in patients with cylindrical ametropia of 1.50 diopters or more. METHODS: Twenty patients (35 eyes) underwent visual field testing with Humphrey (SITA 24-2) perimetry, using the full cylindrical correction in one examination and the spherical equivalent in the other, in random sequence. Mean Deviation, Pattern Standard Deviation, fixation losses, false-positive and false-negative results and test duration were compared. RESULTS: No statistically significant difference was found in Mean Deviation, Pattern Standard Deviation, false-positive and false-negative results or test duration. Fixation loss was significantly higher in the group using the full cylindrical correction. CONCLUSION: Within the parameters adopted in this study, visual field examinations performed with the spherical equivalent show no difference in retinal sensitivity compared with those performed with the full cylindrical correction.

  1. RCRA corrective action and closure

    International Nuclear Information System (INIS)

    1995-02-01

    This information brief explains how RCRA corrective action and closure processes affect one another. It examines the similarities and differences between corrective action and closure, regulators' interests in RCRA facilities undergoing closure, and how the need to perform corrective action affects the closure of DOE's permitted facilities and interim status facilities

  2. Decision Making In A High-Tech World: Automation Bias and Countermeasures

    Science.gov (United States)

    Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    resultant errors. To what extent these effects generalize to performance situations is not yet empirically established. The two studies to be presented represent concurrent efforts, with student and professional pilot samples, to determine the effects of accountability pressures on automation bias and on the verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves as accountable for their strategies of interaction with the automation were significantly more likely to verify its correctness, and committed significantly fewer automation-related errors than those who did not report this perception.

  3. Rethinking political correctness.

    Science.gov (United States)

    Ely, Robin J; Meyerson, Debra E; Davidson, Martin N

    2006-09-01

    Legal and cultural changes over the past 40 years ushered unprecedented numbers of women and people of color into companies' professional ranks. Laws now protect these traditionally underrepresented groups from blatant forms of discrimination in hiring and promotion. Meanwhile, political correctness has reset the standards for civility and respect in people's day-to-day interactions. Despite this obvious progress, the authors' research has shown that political correctness is a double-edged sword. While it has helped many employees feel unlimited by their race, gender, or religion, the PC rule book can hinder people's ability to develop effective relationships across race, gender, and religious lines. Companies need to equip workers with skills--not rules--for building these relationships. The authors offer the following five principles for healthy resolution of the tensions that commonly arise over difference: Pause to short-circuit the emotion and reflect; connect with others, affirming the importance of relationships; question yourself to identify blind spots and discover what makes you defensive; get genuine support that helps you gain a broader perspective; and shift your mind-set from one that says, "You need to change," to one that asks, "What can I change?" When people treat their cultural differences--and related conflicts and tensions--as opportunities to gain a more accurate view of themselves, one another, and the situation, trust builds and relationships become stronger. Leaders should put aside the PC rule book and instead model and encourage risk taking in the service of building the organization's relational capacity. The benefits will reverberate through every dimension of the company's work.

  4. All-fiber interferometer-based repetition-rate stabilization of mode-locked lasers to 10⁻¹⁴-level frequency instability and 1-fs-level jitter over 1 s.

    Science.gov (United States)

    Kwon, Dohyeon; Kim, Jungwon

    2017-12-15

    We report on all-fiber Michelson interferometer-based repetition-rate stabilization of femtosecond mode-locked lasers down to 1.3×10⁻¹⁴ frequency instability and 1.4 fs integrated jitter in a 1 s time scale. The use of a compactly packaged 10 km long single-mode fiber (SMF-28) link as a timing reference allows the scaling of phase noise at a 10 GHz carrier down to -80 dBc/Hz at 1 Hz Fourier frequency. We also tested a 500 m long low-thermal-sensitivity fiber as a reference and found that, compared to standard SMF-28 fiber, it can mitigate the phase noise divergence by ∼10 dB/dec in the 0.1-1 Hz Fourier frequency range. These results suggest that the use of a longer low-thermal-sensitivity fiber may achieve sub-femtosecond integrated timing jitter with sub-10⁻¹⁴-level frequency instability in repetition rate by a simple and robust all-fiber-photonic method.
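The link between a phase-noise spectrum and an integrated timing jitter figure like the 1.4 fs above follows the standard single-sideband relation: convert L(f) in dBc/Hz to a phase spectral density, integrate, and divide the RMS phase by 2πf_carrier. The spectrum below is synthetic, anchored only to the quoted -80 dBc/Hz at 1 Hz; the slope and integration band are assumptions, not the paper's measured data.

```python
# Hedged sketch: RMS timing jitter from a single-sideband phase-noise
# spectrum L(f) (dBc/Hz) at a given carrier frequency.
import numpy as np

def rms_jitter(freqs_hz, L_dbc_hz, carrier_hz):
    """sigma_t = sqrt(2 * integral of 10^(L/10) df) / (2*pi*f_carrier)."""
    s_phi = 2.0 * 10.0 ** (np.asarray(L_dbc_hz) / 10.0)        # rad^2/Hz
    phase_var = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(freqs_hz))
    return float(np.sqrt(phase_var)) / (2 * np.pi * carrier_hz)

f = np.logspace(0, 6, 500)            # offset frequencies: 1 Hz .. 1 MHz
L = -80.0 - 10.0 * np.log10(f)        # -80 dBc/Hz at 1 Hz, falling 10 dB/dec
print(f"{rms_jitter(f, L, 10e9) * 1e15:.1f} fs")
```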

  5. [Automated analyzer of enzyme immunoassay].

    Science.gov (United States)

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection methods, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points, such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I describe recent advances in automated analyzers, reviewing their labeled antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.

  6. Programmable Automated Welding System (PAWS)

    Science.gov (United States)

    Kline, Martin D.

    1994-01-01

    An ambitious project to develop an advanced, automated welding system is being funded as part of the Navy Joining Center with Babcock & Wilcox as the prime integrator. This program, the Programmable Automated Welding System (PAWS), involves the integration of both planning and real-time control activities. Planning functions include the development of a graphical decision support system within a standard, portable environment. Real-time control functions include the development of a modular, intelligent, real-time control system and the integration of a number of welding process sensors. This paper presents each of these components of the PAWS and discusses how they can be utilized to automate the welding operation.

  7. An Automation Survival Guide for Media Centers.

    Science.gov (United States)

    Whaley, Roger E.

    1989-01-01

    Reviews factors that should affect the decision to automate a school media center and offers suggestions for the automation process. Topics discussed include getting the library collection ready for automation, deciding what automated functions are needed, evaluating software vendors, selecting software, and budgeting. (CLB)

  8. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviews the subjects following one of two scripted scenarios: in one, the actor shows minimal engagement with the subject; the second includes active listening by the doctor and attentiveness to the subject. We analyze the cross-correlation in the total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active-listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active-listening scenario shows more high-frequency motion, termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
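The synchrony measure can be sketched as a lagged cross-correlation between the two participants' motion-energy time series: the lag with the strongest correlation indicates who follows whom, and by how much. The signals below are synthetic stand-ins; the paper derives kinetic energy from video.

```python
# Hedged sketch: find the lag of maximum correlation between two
# (standardized) motion-energy time series.
import numpy as np

def lagged_xcorr(a, b, max_lag):
    """Correlation of a against b shifted by each lag in [-max_lag, max_lag]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]
        elif lag > 0:
            x, y = a[lag:], b[:-lag]
        else:
            x, y = a, b
        out[lag] = float(np.mean(x * y))
    return out

t = np.arange(200)
doctor = np.sin(0.2 * t)
patient = np.sin(0.2 * (t - 5))          # follows the doctor by 5 frames
corr = lagged_xcorr(doctor, patient, 10)
best = max(corr, key=corr.get)
print(best)                              # lag with strongest synchrony
```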

  9. Home automation with Intel Galileo

    CERN Document Server

    Dundar, Onur

    2015-01-01

    This book is for anyone who wants to learn Intel Galileo for home automation and cross-platform software development. No knowledge of programming with Intel Galileo is assumed, but knowledge of the C programming language is essential.

  10. Strategic Transit Automation Research Plan

    Science.gov (United States)

    2018-01-01

    Transit bus automation could deliver many potential benefits, but transit agencies need additional research and policy guidance to make informed deployment decisions. Although funding and policy constraints may play a role, there is also a reasonable...

  11. The Evaluation of Automated Systems

    National Research Council Canada - National Science Library

    McDougall, Jeffrey

    2004-01-01

    .... The Army has recognized this change and is adapting to operate in this new environment. It has developed a number of automated tools to assist leaders in the command and control of their organizations...

  12. National Automated Conformity Inspection Process -

    Data.gov (United States)

    Department of Transportation — The National Automated Conformity Inspection Process (NACIP) Application is intended to expedite the workflow process as it pertains to the FAA Form 810-10 Request...

  13. Automation of the testing procedure

    International Nuclear Information System (INIS)

    Haas, H.; Fleischer, M.; Bachner, E.

    1979-01-01

    For the judgement of technologies applied and the testing of specific components of the HTR primary circuit, complex test procedures and data evaluations are required. Extensive automation of these test procedures is indispensable. (orig.) [de

  14. Automation of coal mining equipment

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, Ryuji

    1986-12-25

    Major machines used at the working face include the shearer and the self-advancing frame. The shearer has evolved from a radio-controlled model into a microcomputer-operated machine, with various functions automated. In addition, a system is being developed for comprehensively examining operating conditions and natural conditions at the working face, for further automation. The self-advancing frame has been modified from a sequence-controlled model to a microcomputer-aided electrohydraulic control system. To proceed further with automation and introduce robotics, detectors, control units and valves must be made smaller and more reliable. In the future, the system will be controlled from above ground: the machines at the working face will be remotely controlled at the gate, with the relevant data transmitted above ground. Thus, an automated working face will be realized. (2 figs, 1 photo)

  15. Synthesis of Automated Vehicle Legislation

    Science.gov (United States)

    2017-10-01

    This report provides a synthesis of issues addressed by state legislation regarding automated vehicles (AV); AV technologies are rapidly evolving and many states have developed legislation to govern AV testing and deployment and to assure safety on p...

  16. Fully automated parallel oligonucleotide synthesizer

    Czech Academy of Sciences Publication Activity Database

    Lebl, M.; Burger, Ch.; Ellman, B.; Heiner, D.; Ibrahim, G.; Jones, A.; Nibbe, M.; Thompson, J.; Mudra, Petr; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel

    2001-01-01

    Roč. 66, č. 8 (2001), s. 1299-1314 ISSN 0010-0765 Institutional research plan: CEZ:AV0Z4055905 Keywords : automated oligonucleotide synthesizer Subject RIV: CC - Organic Chemistry Impact factor: 0.778, year: 2001

  17. Automation and Human Resource Management.

    Science.gov (United States)

    Taft, Michael

    1988-01-01

    Discussion of the automation of personnel administration in libraries covers (1) new developments in human resource management systems; (2) system requirements; (3) software evaluation; (4) vendor evaluation; (5) selection of a system; (6) training and support; and (7) benefits. (MES)

  18. Reduction of density-modification bias by β correction

    International Nuclear Information System (INIS)

    Skubák, Pavol; Pannu, Navraj S.

    2011-01-01

    A cross-validation-based method for bias reduction in ‘classical’ iterative density modification of experimental X-ray crystallography maps provides significantly more accurate phase-quality estimates and leads to improved automated model building. Density modification often suffers from an overestimation of phase quality, as seen by escalated figures of merit. A new cross-validation-based method to address this estimation bias by applying a bias-correction parameter ‘β’ to maximum-likelihood phase-combination functions is proposed. In tests on over 100 single-wavelength anomalous diffraction data sets, the method is shown to produce much more reliable figures of merit and improved electron-density maps. Furthermore, significantly better results are obtained in automated model building iterated with phased refinement using the more accurate phase probability parameters from density modification

  19. Anesthesiology, automation, and artificial intelligence.

    Science.gov (United States)

    Alexander, John C; Joshi, Girish P

    2018-01-01

    There have been many attempts to incorporate automation into the practice of anesthesiology, though none have been successful. Fundamentally, these failures are due to the underlying complexity of anesthesia practice and the inability of rule-based feedback loops to fully master it. Recent innovations in artificial intelligence, especially machine learning, may usher in a new era of automation across many industries, including anesthesiology. It would be wise to consider the implications of such potential changes before they have been fully realized.

  20. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machines, as an engineering tool, have recently been introduced into automation projects at Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines in automation projects. The paper designs different project scenarios using virtual machines and analyzes the installability, performance and stability of virtual machines from the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  1. Evolution of Home Automation Technology

    OpenAIRE

    Mohd. Rihan; M. Salim Beg

    2009-01-01

    In modern society, home and office automation has become increasingly important, providing ways to interconnect various home appliances. This interconnection results in faster transfer of information within homes/offices, leading to better home management and improved user experience. Home Automation, in essence, is a technology that integrates various electrical systems of a home to provide enhanced comfort and security. Users are granted convenient and complete control over all the electrical home appl...

  2. Automated measuring systems. Automatisierte Messsysteme

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    Microprocessors have become a regular component of automated measuring systems. Experts offer their experience and basic information in 24 lectures and 10 poster presentations. The focus is on the following: automated measuring, computer and microprocessor use, sensor technique, actuator technique, communication, interfaces, man-system interaction, disturbance tolerance and availability, as well as applications. A discussion meeting is dedicated to the topics of sensor digital signals, sensor interfaces and the sensor bus.

  3. Aprendizaje automático

    OpenAIRE

    Moreno, Antonio

    1994-01-01

    This book introduces the basic concepts of one of the most actively studied branches of artificial intelligence: machine learning. Topics covered include inductive learning, analogical reasoning, explanation-based learning, neural networks, genetic algorithms, case-based reasoning, and theoretical approaches to machine learning.

  4. Safeguards through secure automated fabrication

    International Nuclear Information System (INIS)

    DeMerschman, A.W.; Carlson, R.L.

    1982-01-01

    Westinghouse Hanford Company, a prime contractor for the U.S. Department of Energy, is constructing the Secure Automated Fabrication (SAF) line for fabrication of mixed oxide breeder fuel pins. Fuel processing by automation, which provides a separation of personnel from fuel handling, will provide a means whereby advanced safeguards concepts will be introduced. Remote operations and the inter-tie between the process computer and the safeguards computer are discussed

  5. Automated sample analysis and remediation

    International Nuclear Information System (INIS)

    Hollen, R.; Settle, F.

    1995-01-01

    The Contaminant Analysis Automation Project is developing an automated chemical analysis system to address the current needs of the US Department of Energy (DOE). These needs focus on the remediation of large amounts of radioactive and chemically hazardous wastes stored, buried and still being processed at numerous DOE sites. This paper outlines the advantages of the system under development, and details the hardware and software design. A prototype system for characterizing polychlorinated biphenyls in soils is also described

  6. Manned spacecraft automation and robotics

    Science.gov (United States)

    Erickson, Jon D.

    1987-01-01

    The Space Station holds promise of being a showcase user and driver of advanced automation and robotics technology. The author addresses the advances in automation and robotics from the Space Shuttle - with its high-reliability redundancy management and fault tolerance design and its remote manipulator system - to the projected knowledge-based systems for monitoring, control, fault diagnosis, planning, and scheduling, and the telerobotic systems of the future Space Station.

  7. Home Automation and Security System

    OpenAIRE

    Surinder Kaur,; Rashmi Singh; Neha Khairwal; Pratyk Jain

    2016-01-01

    Easy Home, or home automation, plays a very important role in the modern era because of its flexibility: it can be used in different places with high precision, saving money and time by reducing human effort. The prime focus of this technology is to control household equipment such as lights, fans, doors, AC, etc. automatically. This research paper gives detailed information on a Home Automation and Security System using Arduino and GSM, and on how home appliances can be controlled using an Android application....

  8. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  9. BARD: Better Automated Redistricting

    Directory of Open Access Journals (Sweden)

    Micah Altman

    2011-08-01

    Full Text Available BARD is the first (and, at the time of writing, only) open source software package for general redistricting and redistricting analysis. BARD provides methods to create, display, compare, edit, automatically refine, evaluate, and profile political districting plans. BARD aims to provide a framework for scientific analysis of redistricting plans and to facilitate wider public participation in the creation of new plans. BARD facilitates map creation and refinement through command-line, graphical user interface, and automatic methods. Since redistricting is a computationally complex partitioning problem not amenable to an exact optimization solution, BARD implements a variety of selectable metaheuristics that can be used to refine existing or randomly-generated redistricting plans based on user-determined criteria. Furthermore, BARD supports automated generation of redistricting plans and profiling of plans by assigning different weights to various criteria, such as district compactness or equality of population. This functionality permits exploration of trade-offs among criteria. The intent of a redistricting authority may be explored by examining these trade-offs and inferring which reasonably observable plans were not adopted. Redistricting is a computationally-intensive problem for even modest-sized states. Performance is thus an important consideration in BARD's design and implementation. The program implements performance enhancements such as evaluation caching, explicit memory management, and distributed computing across snow clusters.

  10. Automated uranium titration system

    International Nuclear Information System (INIS)

    Takahashi, M.; Kato, Y.

    1983-01-01

    An automated titration system based on the Davies-Gray method has been developed for accurate determination of uranium. The system consists of a potentiometric titrator with precise burettes, a sample changer, an electronic balance and a desk-top computer with a printer. Fifty-five titration vessels are loaded in the sample changer. The first three contain the standard solution for standardizing potassium dichromate titrant, and the next two and the last two contain the control samples for data quality assurance. The other forty-eight measurements are carried out for sixteen unknown samples. Sample solution containing about 100 mg uranium is taken in a titration vessel. At the pretreatment position, uranium (VI) is reduced to uranium (IV) by iron (II). After the valency adjustment, the vessel is transferred to the titration position. The rate of titrant addition is automatically controlled to be slower near the end-point. The last figure (0.01 mL) of the equivalent titrant volume for uranium is calculated from the potential change. The results obtained with this system on 100 mg uranium gave a precision of 0.2% (RSD,n=3) and an accuracy of better than 0.1%. Fifty-five titrations are accomplished in 10 hours. (author)
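
    The endpoint determination described above relies on the potential change near equivalence. A minimal first-derivative sketch (with hypothetical data, not the instrument's actual algorithm) locates the equivalence volume where the slope ΔE/ΔV is steepest:

```python
# Locate a potentiometric titration endpoint as the volume of maximum
# slope dE/dV (first-derivative method). Data below are hypothetical.

def endpoint_volume(volumes, potentials):
    """Return the titrant volume at the steepest potential change."""
    best_slope, best_v = 0.0, None
    for i in range(1, len(volumes)):
        dv = volumes[i] - volumes[i - 1]
        slope = abs(potentials[i] - potentials[i - 1]) / dv
        if slope > best_slope:
            best_slope = slope
            # midpoint of the interval with the steepest slope
            best_v = 0.5 * (volumes[i] + volumes[i - 1])
    return best_v

# Synthetic titration curve: sharp potential jump between 10.2 and 10.3 mL
vols = [10.0, 10.1, 10.2, 10.3, 10.4, 10.5]   # mL
pots = [300, 310, 330, 520, 545, 550]          # mV
print(endpoint_volume(vols, pots))  # → 10.25
```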

  11. Automated asteroseismic peak detections

    Science.gov (United States)

    García Saravia Ortiz de Montellano, Andrés; Hekker, S.; Themeßl, N.

    2018-05-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection that is time-consuming and has a degree of subjectivity. Here, we present a peak-detection algorithm especially suited for the detection of solar-like oscillations. It reliably characterizes the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterize the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.
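
    As a rough illustration of the detection step (a simplified stand-in, not the authors' actual algorithm), a bin can be flagged as a candidate oscillation peak when it is a local maximum that exceeds a signal-to-noise threshold over the local background:

```python
# Flag candidate peaks in a power density spectrum: a bin qualifies if it is
# a local maximum whose power exceeds `snr` times the median of its
# neighbourhood. Window size and threshold are illustrative choices.
from statistics import median

def detect_peaks(power, window=5, snr=8.0):
    """Return indices of local maxima standing out from the background."""
    peaks = []
    for i in range(1, len(power) - 1):
        lo, hi = max(0, i - window), min(len(power), i + window + 1)
        background = median(power[lo:hi])
        is_local_max = power[i] > power[i - 1] and power[i] > power[i + 1]
        if is_local_max and power[i] > snr * background:
            peaks.append(i)
    return peaks

spectrum = [1.0] * 50
spectrum[20] = 12.0   # injected peak
spectrum[35] = 2.0    # too weak to pass the threshold
print(detect_peaks(spectrum))  # → [20]
```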

  12. Particle Accelerator Focus Automation

    Science.gov (United States)

    Lopes, José; Rocha, Jorge; Redondo, Luís; Cruz, João

    2017-08-01

    The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear, of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams of some μA currents up to 2 MeV/q energies. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start up and shut down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.
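
    The scanning method described above can be sketched as a simple maximization loop. The lens-voltage range, step size, and beam-current response below are hypothetical, and the real system reads the beam-stopper current from hardware:

```python
# Sketch of the focusing scan: step the Einzel-lens bias voltage across a
# range, read the beam current at each step, and keep the voltage that
# maximizes it. The beam-current reading is mocked by a peaked function.

def beam_current(lens_voltage, optimum=14_000.0, width=2_000.0):
    """Mock beam-stopper current (arbitrary units), peaked at `optimum` V."""
    return 10.0 / (1.0 + ((lens_voltage - optimum) / width) ** 2)

def focus_scan(v_start, v_stop, v_step, read_current=beam_current):
    """Scan the lens voltage and return (best_voltage, best_current)."""
    best_v, best_i = v_start, float("-inf")
    v = v_start
    while v <= v_stop:
        i = read_current(v)
        if i > best_i:
            best_v, best_i = v, i
        v += v_step
    return best_v, best_i

best_v, best_i = focus_scan(0.0, 20_000.0, 500.0)
print(best_v)  # → 14000.0
```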

  13. Particle Accelerator Focus Automation

    Directory of Open Access Journals (Sweden)

    Lopes José

    2017-08-01

    Full Text Available The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear, of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams of some μA currents up to 2 MeV/q energies. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start up and shut down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.

  14. Automated Supernova Discovery (Abstract)

    Science.gov (United States)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of Supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data is taken every night without clouds, we must deal with varying atmospheric and high background illumination from the moon. Software is configured to identify a PSN, reshoot for verification with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24, with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs with magnitude 17.5 or less which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  15. Augmented Automated Material Accounting Statistics System (AMASS)

    International Nuclear Information System (INIS)

    Lumb, R.F.; Messinger, M.; Tingey, F.H.

    1983-01-01

    This paper describes an extension of the AMASS methodology which was previously presented at the 1981 INMM annual meeting. The main thrust of the current effort is to develop procedures and a computer program for estimating the variance of an Inventory Difference when many sources of variability, other than measurement error, are admitted in the model. Procedures also are included for the estimation of the variances associated with measurement error estimates and their effect on the estimated limit of error of the inventory difference (LEID). The algorithm for the LEID measurement component uncertainty involves the propagated component measurement variance estimates as well as their associated degrees of freedom. The methodology and supporting computer software is referred to as the augmented Automated Material Accounting Statistics System (AMASS). Specifically, AMASS accommodates five source effects. These are: (1) measurement errors (2) known but unmeasured effects (3) measurement adjustment effects (4) unmeasured process hold-up effects (5) residual process variation A major result of this effort is a procedure for determining the effect of bias correction on LEID, properly taking into account all the covariances that exist. This paper briefly describes the basic models that are assumed; some of the estimation procedures consistent with the model; data requirements, emphasizing availability and other practical considerations; discusses implications for bias corrections; and concludes by briefly describing the supporting computer program
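
    The propagation of component variance estimates together with their degrees of freedom into the LEID uncertainty can be illustrated, assuming independent components, with the standard Welch-Satterthwaite combination. This is an assumption for illustration only; the actual AMASS algorithm may differ:

```python
# Combine independent variance components s_k^2 (each with nu_k degrees of
# freedom) into a total variance and a Welch-Satterthwaite effective dof.
# Component values below are hypothetical.

def combine_components(variances, dofs):
    """Return (total variance, effective degrees of freedom)."""
    total_var = sum(variances)
    eff_dof = total_var ** 2 / sum(v ** 2 / n for v, n in zip(variances, dofs))
    return total_var, eff_dof

# Hypothetical components: measurement error, hold-up, residual process
total, dof = combine_components([4.0, 1.0, 0.5], [10, 5, 8])
print(round(total, 2), round(dof, 1))  # → 5.5 16.5
```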

  16. Correction to Toporek (2014).

    Science.gov (United States)

    2015-01-01

    Reports an error in "Pedagogy of the privileged: Review of Deconstructing Privilege: Teaching and Learning as Allies in the Classroom" by Rebecca L. Toporek (Cultural Diversity and Ethnic Minority Psychology, 2014[Oct], Vol 20[4], 621-622). This article was originally published online incorrectly as a Brief Report. The article authored by Rebecca L. Toporek has been published correctly as a Book Review in the October 2014 print publication (Vol. 20, No. 4, pp. 621-622. http://dx.doi.org/10.1037/a0036529). (The following abstract of the original article appeared in record 2014-42484-006.) Reviews the book, Deconstructing Privilege: Teaching and Learning as Allies in the Classroom edited by Kim A. Case (2013). While the purpose of this book is to provide a collection of resources for those teaching about privilege directly, much of this volume may be useful for expanding the context within which educators teach all aspects of psychology. Understanding the history and systems of psychology, clinical practice, research methods, assessment, and all the core areas of psychology could be enhanced by consideration of the structural framework through which psychology has developed and is maintained. The book presents a useful guide for educators, and in particular, those who teach about systems of oppression and privilege directly. For psychologists, this guide provides scholarship and concrete strategies for facilitating students' awareness of multiple dimensions of privilege across content areas. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  17. Radiation protection: A correction

    International Nuclear Information System (INIS)

    1972-01-01

    An error in translation inadvertently distorted the sense of a paragraph in the article entitled 'Ecological Aspects of Radiation Protection', by Dr. P. Recht, which appeared in the Bulletin, Volume 14, No. 2 earlier this year. In the English text the error appears on Page 28, second paragraph, which reads, as published: 'An instance familiar to radiation protection specialists, which has since come to be regarded as a classic illustration of this approach, is the accidental release at the Windscale nuclear centre in the north of England.' In the French original of this text no reference was made, or intended, to the accidental release which took place in 1957; the reference was to the study of the critical population group exposed to routine releases from the centre, as the footnote made clear. A more correct translation of the relevant sentence reads: 'A classic example of this approach, well-known to radiation protection specialists, is that of releases from the Windscale nuclear centre, in the north of England.' A second error appeared in the footnote already referred to. In all languages, the critical population group studied in respect of the Windscale releases is named as that of Cornwall; the reference should be, of course, to that part of the population of Wales who eat laver bread. (author)

  18. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  19. Cross plane scattering correction

    International Nuclear Information System (INIS)

    Shao, L.; Karp, J.S.

    1990-01-01

    Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take the cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence both on axial and transaxial directions is reflected in the exponential fitting parameters and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to the authors' cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors also propose a simple approximation technique for deconvolution

  20. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela eIorio

    2013-12-01

    Full Text Available White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of fluid-attenuated inversion recovery images. Thirty patients with Mild Cognitive Impairment with variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of the WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t tests and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not statistically differ (t-value= -1.79, DF=29, p= 0.839 for rater 1; t-value= 1.113, DF=29, p= 0.2749 for rater 2), were highly correlated (R²= 0.921, F(1,29)= 155.54, p
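
    The Dice Similarity Coefficient used in the evaluation above is defined as DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the two segmentations. A minimal sketch for binary masks (flattened to 1-D lists for illustration):

```python
# Dice Similarity Coefficient between two binary segmentation masks:
# DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.

def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

manual = [0, 1, 1, 1, 0, 0, 1, 0]   # rater's segmentation
semi   = [0, 1, 1, 0, 0, 0, 1, 1]   # semi-automated segmentation
print(dice(manual, semi))  # → 0.75
```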

  1. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-08-14

    ... National Customs Automation Program (NCAP) test concerning the simplified entry functionality in the... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of...

  2. Food systems in correctional settings

    DEFF Research Database (Denmark)

    Smoyer, Amy; Kjær Minke, Linda

    Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.

  3. SAIL: automating interlibrary loan.

    Science.gov (United States)

    Lacroix, E M

    1994-01-01

    The National Library of Medicine (NLM) initiated the System for Automated Interlibrary Loan (SAIL) pilot project to study the feasibility of using imaging technology linked to the DOCLINE system to deliver copies of journal articles. During the project, NLM converted a small number of print journal issues to electronic form, linking the captured articles to the MEDLINE citation unique identifier. DOCLINE requests for these journals that could not be filled by network libraries were routed to SAIL. Nearly 23,000 articles from sixty-four journals recently selected for indexing in Index Medicus were scanned to convert them to electronic images. During fiscal year 1992, 4,586 scanned articles were used to fill 10,444 interlibrary loan (ILL) requests, and more than half of these were used only once. Eighty percent of all the articles were not requested at all. The total cost per article delivered was $10.76, substantially more than it costs to process a photocopy request. Because conversion costs were the major component of the total SAIL cost, and most of the articles captured for the project were not requested, this model was not cost-effective. Data on SAIL journal article use was compared with all ILL requests filled by NLM for the same period. Eighty-eight percent of all articles requested from NLM were requested only once. The results of the SAIL project demonstrated that converting journal articles to electronic images and storing them in anticipation of repeated requests would not meet NLM's objective to improve interlibrary loan. PMID:8004020

  4. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers are items of electric power system equipment whose reliability greatly influences the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the reserve unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breakers' reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-break circuit breakers systematically increase. The main direction of this problem's solution is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repair, advanced software developments in computer technologies, and a specific automated information system (AIS). The new AIS with the AISV logo was developed at the “Reliability of power equipment” department of AzRDSI of Energy. The main features of AISV are: to provide security and database accuracy; to carry out systematic control of breakers' conformity with operating conditions; to estimate individual reliability values and the characteristics of their change for a given combination of characteristics; and to provide the personnel responsible for technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for its realization.

  5. Automation of solar plants

    Energy Technology Data Exchange (ETDEWEB)

    Yebra, L.J.; Romero, M.; Martinez, D.; Valverde, A. [CIEMAT - Plataforma Solar de Almeria, Tabernas (Spain); Berenguel, M. [Almeria Univ. (Spain). Departamento de Lenguajes y Computacion

    2004-07-01

    This work overviews some of the main activities and research lines that are being carried out within the scope of the specific collaboration agreement between the Plataforma Solar de Almeria-CIEMAT (PSA-CIEMAT) and the Automatic Control, Electronics and Robotics research group of the Universidad de Almeria (TEP197) titled ''Development of control systems and tools for thermosolar plants'' and the projects financed by the MCYT DPI2001-2380-C02-02 and DPI2002-04375-C03. The research is driven by the need to improve the efficiency of the processes through which the energy provided by the sun is totally or partially used as an energy source, and to reduce the costs associated with the operation and maintenance of the installations that use this energy source. The final objective is to develop different automatic control systems and techniques aimed at improving the competitiveness of solar plants. The paper summarizes different objectives and automatic control approaches that are being implemented in different facilities at the PSA-CIEMAT: central receiver systems and solar furnace. For each one of these facilities, a systematic procedure is being followed, composed of several steps: (i) development of dynamic models using the newest modeling technologies (both for simulation and control purposes), (ii) development of fully automated data acquisition and control systems including software tools facilitating the analysis of data and the application of knowledge to the controlled plants and (iii) synthesis of advanced controllers using techniques successfully used in the process industry and development of new and optimized control algorithms for solar plants. These aspects are summarized in this work. (orig.)

  6. Corrective justice and contract law

    Directory of Open Access Journals (Sweden)

    Martín Hevia

    2010-06-01

    Full Text Available This article suggests that the central aspects of contract law in various jurisdictions can be explained within the idea of corrective justice. The article is divided into three parts. The first part distinguishes between corrective justice and distributive justice. The second part describes contract law. The third part focuses on actions for breach of contract and within that context reflects upon the idea of corrective justice.

  7. Corrective justice and contract law

    OpenAIRE

    Martín Hevia

    2010-01-01

    This article suggests that the central aspects of contract law in various jurisdictions can be explained within the idea of corrective justice. The article is divided into three parts. The first part distinguishes between corrective justice and distributive justice. The second part describes contract law. The third part focuses on actions for breach of contract and within that context reflects upon the idea of corrective justice.

  8. Automated calculations for massive fermion production with ai-bar Talc

    International Nuclear Information System (INIS)

    Lorca, A.; Riemann, T.

    2004-01-01

    The package ai-bar Talc has been developed for the automated calculation of radiative corrections to two-fermion production at e+e- colliders. The package uses Diana, Qgraf, Form, Fortran, FF, LoopTools, and further unix/linux tools. Numerical results are presented for e+e- -> e+e-, μ+μ-, b s-bar, t c-bar

  9. Computer-automated tuning of semiconductor double quantum dots into the single-electron regime

    NARCIS (Netherlands)

    Baart, T.A.; Eendebak, P.T.; Reichl, C.; Wegscheider, W.; Vandersypen, L.M.K.

    2016-01-01

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the

  10. An automated approach to the design of decision tree classifiers

    Science.gov (United States)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
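
    Under the stated independence assumption, the global probability of correct classification chains the per-node decision probabilities along each class's path and weights them by the class priors. The tree paths and numbers below are hypothetical, purely to illustrate the computation:

```python
# Global probability of correct classification for a decision tree under
# the independence assumption: for each class, multiply the correct-decision
# probabilities of the nodes along its path, then weight by the class prior.

def global_pcc(class_priors, path_node_probs):
    """class_priors: {class: prior}; path_node_probs: {class: [p_node, ...]}"""
    total = 0.0
    for cls, prior in class_priors.items():
        p_path = 1.0
        for p in path_node_probs[cls]:
            p_path *= p
        total += prior * p_path
    return total

# Hypothetical Landsat-style classes and per-node decision probabilities
priors = {"water": 0.5, "forest": 0.3, "urban": 0.2}
paths = {"water": [0.98], "forest": [0.95, 0.90], "urban": [0.95, 0.85]}
print(round(global_pcc(priors, paths), 4))  # → 0.908
```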

  11. Space station automation and robotics study. Operator-systems interface

    Science.gov (United States)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  12. An automated instrument for controlled-potential coulometry: System documentation

    Energy Technology Data Exchange (ETDEWEB)

    Holland, M K; Cordaro, J V

    1988-06-01

    An automated controlled-potential coulometer has been developed at the Savannah River Plant for the determination of plutonium. Two such coulometers have been assembled, evaluated, and applied. The software is based upon the methodology used at the Savannah River Plant; however, the system is applicable with minimal software modifications to any of the methodologies used throughout the nuclear industry. These state-of-the-art coulometers feature electrical calibration of the integration system, background current corrections, and control-potential adjustment capabilities. Measurement precision within 0.1% has been demonstrated. The systems have also been successfully applied to the determination of pure neptunium solutions. The design and documentation of the automated instrument are described herein. Each individual module's operation, wiring layout, and alignment are described. Interconnection of the modules and system calibration are discussed. A complete set of system prints and a list of associated parts are included. 9 refs., 10 figs., 6 tabs.
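
    Controlled-potential coulometry rests on Faraday's law: the net charge passed during electrolysis is proportional to the amount of analyte converted. A minimal sketch (hypothetical readings; the real instrument's integration calibration and background handling are more involved):

```python
# Controlled-potential coulometry sketch: integrate the cell current over
# time (trapezoidal rule), subtract a background charge, and convert the
# net charge to plutonium mass via Faraday's law, m = Q * M / (n * F).

FARADAY = 96485.33  # C/mol
M_PU = 239.05       # g/mol, Pu-239
N_ELECTRONS = 1     # assuming a one-electron Pu(III)/Pu(IV) couple

def integrated_charge(times, currents):
    """Trapezoidal integral of current (A) over time (s) -> charge in C."""
    q = 0.0
    for i in range(1, len(times)):
        q += 0.5 * (currents[i] + currents[i - 1]) * (times[i] - times[i - 1])
    return q

def plutonium_mass(charge_c, background_c=0.0):
    """Mass in grams from net charge."""
    return (charge_c - background_c) * M_PU / (N_ELECTRONS * FARADAY)

# Hypothetical decaying electrolysis current sampled each minute
q = integrated_charge([0, 60, 120, 180], [0.02, 0.01, 0.005, 0.0025])
print(round(plutonium_mass(q) * 1000, 2), "mg")  # → 3.9 mg
```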

  13. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedial actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design...... methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module...... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As its main result, this technique will provide the design engineer with decision tables for fault handling...
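
    The matrix formulation of FMEA named in this abstract can be illustrated with a minimal Boolean fixed-point sketch: a component-to-effect adjacency matrix is applied repeatedly until the set of affected components is stable. This generic reading of the technique is an assumption, not the paper's actual implementation:

```python
def propagate_failures(adjacency, initial):
    """Boolean matrix formulation of FMEA failure propagation.
    adjacency[i][j] is True if a failure of component j propagates to
    component i. Iterates to a fixed point (a transitive closure over
    the initial failure set) and returns the final effect vector."""
    n = len(adjacency)
    effects = list(initial)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not effects[i] and any(adjacency[i][j] and effects[j] for j in range(n)):
                effects[i] = True
                changed = True
    return effects
```

    For a propagation chain 0 → 1 → 2, a failure injected at component 0 reaches all three components, while a failure injected at component 2 stays local — exactly the kind of information a fault-handling decision table would be built from.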

  14. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, Mogens; Borch, Ole; Bagnoli, F.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedial actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design...... methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module...... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As its main result, this technique will provide the design engineer with decision tables for fault handling...

  15. Smartnotebook: A semi-automated approach to protein sequential NMR resonance assignments

    International Nuclear Information System (INIS)

    Slupsky, Carolyn M.; Boyko, Robert F.; Booth, Valerie K.; Sykes, Brian D.

    2003-01-01

    Complete and accurate NMR spectral assignment is a prerequisite for high-throughput automated structure determination of biological macromolecules. However, completely automated assignment procedures generally encounter difficulties for all but the most ideal data sets. Sources of these problems include difficulty in resolving correlations in crowded spectral regions, as well as complications arising from dynamics, such as weak or missing peaks, or atoms exhibiting more than one peak due to exchange phenomena. Smartnotebook is a semi-automated assignment software package designed to combine the best features of the automated and manual approaches. The software finds and displays potential connections between residues, while the spectroscopist makes decisions on which connection is correct, allowing rapid and robust assignment. In addition, smartnotebook helps the user fit chains of connected residues to the primary sequence of the protein by comparing the experimentally determined chemical shifts with expected shifts derived from a chemical shift database, while providing bookkeeping throughout the assignment procedure
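
    The connection-finding step described above can be sketched as follows: a candidate sequential link is scored by matching one residue's intra-residue chemical shifts against the next residue's inter-residue shifts. The dictionary layout, shift names, and tolerance below are hypothetical illustrations, not smartnotebook's actual data model:

```python
def connection_score(residue_i, residue_j, tol=0.5):
    """Score a candidate sequential link i -> j. residue_j's inter-residue
    shifts ("CA-1", "CB-1") should match residue_i's intra-residue shifts
    ("CA", "CB"). Returns the mean absolute shift difference in ppm
    (smaller is better), or None if any difference exceeds the tolerance
    or no comparable shifts exist."""
    diffs = []
    for intra, inter in (("CA", "CA-1"), ("CB", "CB-1")):
        if intra in residue_i and inter in residue_j:
            d = abs(residue_i[intra] - residue_j[inter])
            if d > tol:
                return None
            diffs.append(d)
    return sum(diffs) / len(diffs) if diffs else None
```

    In a semi-automated workflow of the kind the abstract describes, such scores would rank the displayed candidate connections while the spectroscopist makes the final call.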

  16. Dijkstra's interpretation of the approach to solving a problem of program correctness

    Directory of Open Access Journals (Sweden)

    Markoski Branko

    2010-01-01

    Full Text Available Proving program correctness and designing correct programs are two connected theoretical problems of great practical importance. The first is solved within program analysis, the second within program synthesis, although the two processes often intertwine owing to the connection between the analysis and synthesis of programs. Nevertheless, bearing in mind automated methods of proving correctness and methods of automatic program synthesis, the difference is easy to tell. This paper presents a denotative interpretation of programming calculation, explaining semantics by formulae φ and ψ, in such a way that they can be used for defining state sets for program P.
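
    The formula view of program semantics described above can be illustrated with the classic weakest-precondition rule for assignment, wp(x := e, ψ) = ψ[e/x]. The string-substitution encoding below is a deliberately naive sketch (it does not parse expressions or respect identifier boundaries), meant only to show the shape of the calculation:

```python
def wp_assign(postcondition, var, expr):
    """Weakest precondition of the assignment 'var := expr' obtained by
    textual substitution: every occurrence of var in the postcondition is
    replaced by the (parenthesized) expression. Conditions are plain
    strings in this sketch, not parsed formulae."""
    return postcondition.replace(var, f"({expr})")
```

    For example, wp(x := x + 1, x > 0) yields (x + 1) > 0: any state satisfying that formula before the assignment satisfies x > 0 after it.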

  17. Layered distributed architecture for plant automation

    International Nuclear Information System (INIS)

    Aravamuthan, G.; Verma, Yachika; Ranjan, Jyoti; Chachondia, Alka S.; Ganesh, G.

    2005-01-01

    The development of plant automation systems and associated software remains one of the greatest challenges to the widespread implementation of highly adaptive, re-configurable automation technology. This paper presents a layered distributed architecture for a plant automation system designed to support rapid reconfiguration and redeployment of automation components. The paper first presents the evolution of automation architectures and their associated environments over the past few decades, and then presents the concept of a layered system architecture and the use of automation components to support the construction of a wide variety of automation systems. It also highlights the role of standards and technology which can be used in the development of automation components. We have attempted to adhere to open standards and technology for the development of automation components at the various layers. It also highlights the application of this concept in the development of an Operator Information System (OIS) for the Advanced Heavy Water Reactor (AHWR). (author)

  18. Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance

    Science.gov (United States)

    Marquez, Jessica J.; Ramirez, Margarita

    2014-01-01

    A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in levels of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher level of failures, regardless of level of automation.
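
    D Prime and Decision Criterion, the derived measures named above, are standard signal-detection statistics computed from hit and false-alarm rates. A sketch of their computation from a 2x2 outcome table follows; the log-linear correction applied to avoid infinite z-scores at rates of 0 or 1 is a common convention assumed here, not a detail reported by the study:

```python
from statistics import NormalDist

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal detection measures from raw counts.
    d' = z(hit rate) - z(false-alarm rate) indexes sensitivity;
    c = -(z(hit rate) + z(false-alarm rate)) / 2 indexes response bias.
    Rates use the log-linear (add 0.5) correction."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

    A pilot who correctly diagnoses 9 of 10 failures while false-alarming on 1 of 10 nominal trials shows high sensitivity (d' above 2) with no response bias (c near 0).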

  19. Standard IEC 61850 substation automation

    Energy Technology Data Exchange (ETDEWEB)

    Bricchi, A.; Mezzadri, D. [Selta, Tortoreto (Italy)

    2008-07-01

    The International Electrotechnical Commission (IEC) 61850 standard is the reference communication protocol for all electrical substations protection and control systems. It creates models of all the elements and functionalities of an electrical substation, including physical elements such as switches or circuit breakers, as well as protection, control and monitoring functionalities. Network managers need to renew power substation automation and control systems in order to improve the efficiency and quality of services offered by electric utilities. Selta has proposed a new integrated solution for the automation of power substations which is fully compliant with the IEC 61850 norms. The solution involves the integration of control, automation, protection, monitoring and maintenance functions and applies leading edge technology to its systems, particularly for the TERNA network. The system is based on the use of many electronic devices at a power plant, each one with a specialized function, and all interconnected via a Station LAN. This solution was tested on the TERNA network in Italy, in VHV and HV stations. It was shown to offer many advantages, such as an architecture based on full interoperability between control, monitoring and protection equipment; centralized and distributed automation; a LAN station that allows full interoperability between different bay units and protection relays in order to integrate equipment from various suppliers; the integration of automation systems in existing bay units and protection relays equipped with standard communication buses or with proprietary interfaces; and time synchronization for the entire system through a station GPS reception system. 10 refs., 1 tab., 7 figs.

  20. Space power subsystem automation technology

    Science.gov (United States)

    Graves, J. R. (Compiler)

    1982-01-01

    The technology issues involved in power subsystem automation and the reasonable objectives to be sought in such a program were discussed. The complexities, uncertainties, and alternatives of power subsystem automation, along with the advantages from both an economic and a technological perspective were considered. Whereas most spacecraft power subsystems now use certain automated functions, the idea of complete autonomy for long periods of time is almost inconceivable. Thus, it seems prudent that the technology program for power subsystem automation be based upon a growth scenario which should provide a structured framework of deliberate steps to enable the evolution of space power subsystems from the current practice of limited autonomy to a greater use of automation with each step being justified on a cost/benefit basis. Each accomplishment should move toward the objectives of decreased requirement for ground control, increased system reliability through onboard management, and ultimately lower energy cost through longer life systems that require fewer resources to operate and maintain. This approach seems well-suited to the evolution of more sophisticated algorithms and eventually perhaps even the use of some sort of artificial intelligence. Multi-hundred kilowatt systems of the future will probably require an advanced level of autonomy if they are to be affordable and manageable.

  1. Programmable automation systems in PSA

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1997-06-01

    The Finnish safety authority (STUK) requires plant specific PSAs, and quantitative safety goals are set on different levels. The reliability analysis is more problematic when critical safety functions are realized by applying programmable automation systems. Conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. However, it is important to analyze the contribution of programmable automation systems to plant safety, and PSA is the only method with a system-analytical view of safety. This report discusses the applicability of PSA methodology (fault tree analyses, failure modes and effects analyses) in the analysis of programmable automation systems. The problem of how to decompose programmable automation systems for reliability modeling purposes is discussed. In addition to the qualitative analysis and structural reliability modeling issues, the possibility to evaluate failure probabilities of programmable automation systems is considered. One solution to the quantification issue is the use of expert judgements, and the principles for applying expert judgements are discussed in the paper. A framework to apply expert judgements is outlined. Further, the impacts of subjective estimates on the interpretation of PSA results are discussed. (orig.) (13 refs.)
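
    One common way to realize the expert-judgement quantification discussed above is log-linear pooling: the experts' probability estimates are combined by a weighted geometric mean. This generic aggregation rule is an assumption for illustration, not the framework described in the report:

```python
from math import exp, log

def pooled_failure_probability(expert_estimates, weights=None):
    """Combine expert estimates of a failure probability by a weighted
    geometric mean (log-linear pooling). Equal weights by default;
    weights must sum to 1. Estimates must be strictly positive."""
    if weights is None:
        weights = [1.0 / len(expert_estimates)] * len(expert_estimates)
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return exp(sum(w * log(p) for w, p in zip(weights, expert_estimates)))
```

    With two equally weighted experts at 1e-4 and 1e-6 per demand, the pooled estimate is 1e-5 — the geometric mean, which is less dominated by the most pessimistic expert than an arithmetic average would be.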

  2. Computerized automated remote inspection system

    International Nuclear Information System (INIS)

    The automated inspection system utilizes a computer to control the location of the ultrasonic transducer, the actual inspection process, the display of the data, and the storage of the data on IBM magnetic tape. This automated inspection equipment provides two major advantages. First, it provides a cost savings, because of the reduced inspection time, made possible by the automation of the data acquisition, processing, and storage equipment. This reduced inspection time is also made possible by a computerized data evaluation aid which speeds data interpretation. In addition, the computer control of the transducer location drive allows the exact duplication of a previously located position or flaw. The second major advantage is that the use of automated inspection equipment also allows a higher-quality inspection, because of the automated data acquisition, processing, and storage. This storage of data, in accurate digital form on IBM magnetic tape, for example, facilitates retrieval for comparison with previous inspection data. The equipment provides a multiplicity of scan data which will provide statistical information on any questionable volume or flaw. An automatic alarm for location of all reportable flaws reduces the probability of operator error. This system has the ability to present data on a cathode ray tube as numerical information, a three-dimensional picture, or "hard-copy" sheet. One important advantage of this system is the ability to store large amounts of data in compact magnetic tape reels

  3. International Conference Automation : Challenges in Automation, Robotics and Measurement Techniques

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2016-01-01

    This book presents the set of papers accepted for presentation at the International Conference Automation, held in Warsaw, 2-4 March 2016. It contains research results presented by top experts in the fields of industrial automation, control, robotics and measurement techniques. Each chapter presents a thorough analysis of a specific technical problem which is usually followed by numerical analysis, simulation, and description of results of implementation of the solution of a real world problem. The presented theoretical results, practical solutions and guidelines will be valuable for both researchers working in the area of engineering sciences and for practitioners solving industrial problems.

  4. Demands on digital automation; Anforderungen an die Digitale Automation

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.

    1995-12-31

    In chapter 12 of the anthology on building control, the demands on digital automation are presented. The following aspects are discussed: the variety of company philosophies, the demands of customers/investors, the demands of building/room use, the operators, and the point of view of the manufacturers of technical plant. (BWI) [Deutsch, translated] Chapter 12 of the anthology on Building Control presents the demands on digital automation. In this context the following topics are addressed: the spectrum of company philosophies, the demands of clients/investors, of building/room use, and of the operators, as well as the perspective of the builders of technical building services. (BWI)

  5. Unpacking Corrections in Mobile Instruction

    DEFF Research Database (Denmark)

    Levin, Lena; Cromdal, Jakob; Broth, Mathias

    2017-01-01

    that the practice of unpacking the local particulars of corrections (i) provides for the instructional character of the interaction, and (ii) is highly sensitive to the relevant physical and mobile contingencies. These findings contribute to the existing literature on the interactional organisation of correction...

  6. Atmospheric correction of satellite data

    Science.gov (United States)

    Shmirko, Konstantin; Bobrikov, Alexey; Pavlov, Andrey

    2015-11-01

    The atmosphere accounts for more than 90% of all radiation measured by a satellite. Because of this, atmospheric correction plays an important role in separating the water-leaving radiance from the signal and in evaluating the concentrations of various water pigments (chlorophyll-a, DOM, CDOM, etc.). The elimination of the atmosphere's intrinsic radiance from the remote sensing signal is referred to as atmospheric correction.
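
    The decomposition implied above can be sketched to first order: the top-of-atmosphere radiance is modeled as Rayleigh path radiance plus aerosol path radiance plus the diffusely transmitted water-leaving radiance. This linear form, with no sun-glint or whitecap terms, is a standard simplifying assumption rather than the authors' specific algorithm:

```python
def water_leaving_radiance(total, rayleigh, aerosol, transmittance=1.0):
    """First-order atmospheric correction for a single band:
    L_t = L_r + L_a + t * L_w, solved for the water-leaving radiance L_w.
    All radiances share the same units; transmittance is the diffuse
    atmospheric transmittance along the viewing path (0 < t <= 1)."""
    if transmittance <= 0:
        raise ValueError("transmittance must be positive")
    return (total - rayleigh - aerosol) / transmittance
```

    The small residual after subtracting the path terms (here 1 unit out of 10) illustrates why accurate modeling of the atmospheric components dominates the error budget of ocean-color retrievals.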

  7. Stress Management in Correctional Recreation.

    Science.gov (United States)

    Card, Jaclyn A.

    Current economic conditions have created additional sources of stress in the correctional setting. Often, recreation professionals employed in these settings also add to inmate stress. One of the major factors limiting stress management in correctional settings is a lack of understanding of the value, importance, and perceived freedom, of leisure.…

  8. Automated quantification of proliferation with automated hot-spot selection in phosphohistone H3/MART1 dual-stained stage I/II melanoma.

    Science.gov (United States)

    Nielsen, Patricia Switten; Riber-Hansen, Rikke; Schmidt, Henrik; Steiniche, Torben

    2016-04-09

    Staging of melanoma includes quantification of a proliferation index, i.e., presumed melanocytic mitoses of H&E stains are counted manually in hot spots. Yet, its reproducibility and prognostic impact increases by immunohistochemical dual staining for phosphohistone H3 (PHH3) and MART1, which also may enable fully automated quantification by image analysis. To ensure manageable workloads and repeatable measurements in modern pathology, the study aimed to present an automated quantification of proliferation with automated hot-spot selection in PHH3/MART1-stained melanomas. Formalin-fixed, paraffin-embedded tissue from 153 consecutive stage I/II melanoma patients was immunohistochemically dual-stained for PHH3 and MART1. Whole slide images were captured, and the number of PHH3/MART1-positive cells was manually and automatically counted in the global tumor area and in a manually and automatically selected hot spot, i.e., a fixed 1-mm² square. Bland-Altman plots and hypothesis tests compared manual and automated procedures, and the Cox proportional hazards model established their prognostic impact. The mean difference between manual and automated global counts was 2.9 cells/mm² (P = 0.0071) and 0.23 cells per hot spot (P = 0.96) for automated counts in manually and automatically selected hot spots. In 77% of cases, manual and automated hot spots overlapped. Fully manual hot-spot counts yielded the highest prognostic performance with an adjusted hazard ratio of 5.5 (95% CI, 1.3-24, P = 0.024) as opposed to 1.3 (95% CI, 0.61-2.9, P = 0.47) for automated counts with automated hot spots. The automated index and automated hot-spot selection were highly correlated to their manual counterpart, but altogether their prognostic impact was noticeably reduced. Because correct recognition of only one PHH3/MART1-positive cell seems important, extremely high sensitivity and specificity of the algorithm is required for prognostic purposes. Thus, automated
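
    The automated hot-spot selection described above (a fixed 1-mm² window placed where it covers the most positive cells) can be sketched with a brute-force search over candidate window positions. Anchoring candidate windows at each positive cell is an assumption for illustration; the study does not specify its algorithm:

```python
def select_hot_spot(cells, window=1.0):
    """Automated hot-spot selection sketch. cells is a list of (x, y)
    coordinates of positive cells in the same units as window (e.g. mm).
    Slides a window x window square anchored at each cell and returns
    the corner covering the most cells, with that count. O(n^2)."""
    best_corner, best_count = None, -1
    for ax, ay in cells:
        count = sum(1 for x, y in cells
                    if ax <= x <= ax + window and ay <= y <= ay + window)
        if count > best_count:
            best_corner, best_count = (ax, ay), count
    return best_corner, best_count
```

    A count of positive cells inside the selected window then yields the hot-spot proliferation index that the study compares against manual selection.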

  9. Automated detection of retinal disease.

    Science.gov (United States)

    Helmchen, Lorens A; Lehmann, Harold P; Abràmoff, Michael D

    2014-11-01

    Nearly 4 in 10 Americans with diabetes currently fail to undergo recommended annual retinal exams, resulting in tens of thousands of cases of blindness that could have been prevented. Advances in automated retinal disease detection could greatly reduce the burden of labor-intensive dilated retinal examinations by ophthalmologists and optometrists and deliver diagnostic services at lower cost. As the current availability of ophthalmologists and optometrists is inadequate to screen all patients at risk every year, automated screening systems deployed in primary care settings and even in patients' homes could fill the current gap in supply. Expanding screens to all patients at risk by switching to automated detection systems would in turn yield significantly higher rates of detecting and treating diabetic retinopathy per dilated retinal examination. Fewer diabetic patients would develop complications such as blindness, while ophthalmologists could focus on more complex cases.

  10. Manual versus automated blood sampling

    DEFF Research Database (Denmark)

    Teilmann, A C; Kalliokoski, Otto; Sørensen, Dorte B

    2014-01-01

    Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters......, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal...... corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters...

  11. Unmet needs in automated cytogenetics

    International Nuclear Information System (INIS)

    Bender, M.A.

    1976-01-01

    Though some, at least, of the goals of automation systems for analysis of clinical cytogenetic material seem either at hand, like automatic metaphase finding, or at least likely to be met in the near future, like operator-assisted semi-automatic analysis of banded metaphase spreads, important areas of cytogenetic analysis, most importantly the determination of chromosomal aberration frequencies in populations of cells or in samples of cells from people exposed to environmental mutagens, await practical methods of automation. Important as are the clinical diagnostic applications, it is apparent that increasing concern over the clastogenic effects of the multitude of potentially clastogenic chemical and physical agents to which human populations are being increasingly exposed, and the resulting emergence of extensive cytogenetic testing protocols, makes the development of automation not only economically feasible but almost mandatory. The nature of the problems involved, and actual or possible approaches to their solution, are discussed

  12. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Hasnain, S.B.

    1992-01-01

    Rapid advances in computing resulting from the microchip revolution have increased its application manifold, particularly for computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The current sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence leads towards parallel processing. An overview of parallel processing with emphasis on the transputer is also provided. A fuzzy knowledge based controller for amination drug delivery in muscle relaxant anesthesia on a transputer is described. 4 figs. (author)

  13. Automated analysis of autoradiographic imagery

    International Nuclear Information System (INIS)

    Bisignani, W.T.; Greenhouse, S.C.

    1975-01-01

    A research programme is described which has as its objective the automated characterization of neurological tissue regions from autoradiographs by utilizing hybrid-resolution image processing techniques. An experimental system is discussed which includes raw imagery, scanning and digitizing equipment, feature-extraction algorithms, and regional characterization techniques. The parameters extracted by these algorithms are presented, as well as the regional characteristics which are obtained by operating on the parameters with statistical sampling techniques. An approach is presented for validating the techniques, and initial experimental results are obtained from an analysis of an autoradiograph of a region of the hypothalamus. An extension of these automated techniques to other biomedical research areas is discussed, as well as the implications of applying automated techniques to biomedical research problems. (author)

  14. Selecting automation for the clinical chemistry laboratory.

    Science.gov (United States)

    Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr

    2007-07-01

    Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. To outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.

  15. Toward designing for trust in database automation

    Energy Technology Data Exchange (ETDEWEB)

    Duez, P. P.; Jamieson, G. A. [Cognitive Engineering Laboratory, Univ. of Toronto, 5 King' s College Rd., Toronto, Ont. M5S 3G8 (Canada)

    2006-07-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. 
The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process

  16. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Duez, P. P.; Jamieson, G. A.

    2006-01-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. 
The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process

  17. BOA: Framework for Automated Builds

    CERN Document Server

    Ratnikova, N

    2003-01-01

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where the large number of developers, multiple platforms and distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components that have been developed in CMS to facilitate software maintenance and improve software quality. The system makes it possible to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions.

  18. Status of automated tensile machine

    International Nuclear Information System (INIS)

    Satou, M.; Hamilton, M.L.; Sato, S.; Kohyama, A.

    1992-01-01

    The objective of this work is to develop the Monbusho Automated Tensile machine (MATRON) and install and operate it at the Pacific Northwest Laboratory (PNL). The machine is designed to provide rapid, automated testing of irradiated miniature tensile specimens in a vacuum at elevated temperatures. The MATRON was successfully developed and shipped to PNL for installation in a hot facility. The original installation plan was modified to simplify the current and subsequent installations, and the installation was completed. Detailed procedures governing the operation of the system were written. Testing of irradiated miniature tensile specimens should begin in the near future.

  19. Automated Podcasting System for Universities

    Directory of Open Access Journals (Sweden)

    Ypatios Grigoriadis

    2013-03-01

    This paper presents the results achieved at Graz University of Technology (TU Graz) in the field of automating the process of recording and publishing university lectures in a very new way. It outlines cornerstones of the development and integration of an automated recording system, such as the lecture hall setup, the recording hardware and software architecture, as well as the development of a text-based search for the final product by means of indexing video podcasts. Furthermore, the paper takes a look at didactical aspects, evaluations done in this context and the future outlook.

  20. 2013 Chinese Intelligent Automation Conference

    CERN Document Server

    Deng, Zhidong

    2013-01-01

    Proceedings of the 2013 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’13, held in Yangzhou, China. The topics include e.g. adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, and reconfigurable control. Engineers and researchers from academia, industry, and government can gain an inside view of new solutions combining ideas from multiple disciplines in the field of intelligent automation.   Zengqi Sun and Zhidong Deng are professors at the Department of Computer Science, Tsinghua University, China.

  2. BOA: Framework for automated builds

    International Nuclear Information System (INIS)

    Ratnikova, N.

    2003-01-01

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where a large number of developers, multiple platforms and a distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system makes it possible to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions.

  3. Design automation, languages, and simulations

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    As the complexity of electronic systems continues to increase, the micro-electronic industry depends upon automation and simulations to adapt quickly to market changes and new technologies. Compiled from chapters contributed to CRC's best-selling VLSI Handbook, this volume covers a broad range of topics relevant to design automation, languages, and simulations. These include a collaborative framework that coordinates distributed design activities through the Internet, an overview of the Verilog hardware description language and its use in a design environment, hardware/software co-design, syst

  4. Computer automation in veterinary hospitals.

    Science.gov (United States)

    Rogers, H

    1996-05-01

    Computers have been used to automate complex and repetitive tasks in veterinary hospitals since the 1960s. Early systems were expensive, but their use was justified because they performed jobs which would have been impossible, or which would have required greater resources in terms of time and personnel, had they been performed by other methods. Systems found in most veterinary hospitals today are less costly, orders of magnitude more capable, and often underused. Modern multitasking operating systems and graphical interfaces bring many opportunities for automation. Commercial and custom programs developed and used in a typical multidoctor mixed-species veterinary practice are described.

  5. Formal verification of automated teller machine systems using SPIN

    Science.gov (United States)

    Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam

    2017-08-01

    Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of an Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in the Process Meta Language (PROMELA), with specifications given as LTL formulas.
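    The explicit-state exploration that underlies a model checker like SPIN can be sketched in a few lines. The toy ATM transition system and invariant below are invented for illustration only (they are not the model or specifications from the paper), and real SPIN verification would be written in PROMELA with LTL properties; this sketch only shows the idea of exhaustively searching reachable states for a safety-property violation.

```python
from collections import deque

# Hypothetical toy ATM transition system; a state is (screen, authenticated).
# The transitions are illustrative assumptions, not the paper's model.
TRANSITIONS = {
    ("idle", False): [("card_inserted", False)],
    ("card_inserted", False): [("authenticated", True), ("idle", False)],
    ("authenticated", True): [("dispensing", True), ("idle", False)],
    ("dispensing", True): [("idle", False)],
}

def check_invariant(initial, invariant):
    """Breadth-first search over all reachable states, checking a safety
    property (an invariant) in the style of explicit-state model checking."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return False, state  # counterexample state found
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None  # invariant holds on every reachable state

# Safety property: the machine never dispenses cash unauthenticated.
ok, bad = check_invariant(("idle", False),
                          lambda s: not (s[0] == "dispensing" and not s[1]))
print(ok)  # True
```

    An LTL safety formula such as G !(dispensing && !authenticated) reduces to exactly this kind of reachability check; liveness properties require the more elaborate automata-theoretic machinery that SPIN provides.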

  6. Automation U.S.A.: Overcoming Barriers to Automation.

    Science.gov (United States)

    Brody, Herb

    1985-01-01

    Although labor unions and inadequate technology play minor roles, the principal barrier to factory automation is "fear of change." Related problems include long-term benefits, nontechnical executives, and uncertainty of factory cost accounting. Industry support for university programs is helping to educate engineers to design, implement, and…

  7. Logistic control in automated transportation networks

    NARCIS (Netherlands)

    Ebben, Mark

    2001-01-01

    Increasing congestion problems lead to a search for alternative transportation systems. Automated transportation networks, possibly underground, are an option. Logistic control systems are essential for future implementations of such automated transportation networks. This book contributes to the

  8. [Algorithm for the automated processing of rheosignals].

    Science.gov (United States)

    Odinets, G S

    1988-01-01

    An algorithm for rheosignal recognition was examined for a microprocessor device with a display apparatus and with automated and manual cursor control. The algorithm makes it possible to automate the recording and processing of rheosignals while taking their variability into account.

  9. Future Autonomous and Automated Systems Testbed

    Data.gov (United States)

    National Aeronautics and Space Administration — Trust is the greatest obstacle to implementing greater autonomy and automation (A&A) in the human spaceflight program. The Future Autonomous and Automated...

  10. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
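    The affine-alignment idea described above can be illustrated with a rough sketch: grid-search a time shift (a special case of the affine warp a*t + b, here with unit scale a = 1) that best maps one signal onto a reference in the least-squares sense. The signals and parameter grids below are made-up examples, and this is not the CISA estimation procedure from the paper, only a simplified picture of the alignment step.

```python
import numpy as np

def affine_align(ref, sig, scales, shifts):
    """Grid-search the affine time warp (scale a, shift b) that best maps
    `sig` onto `ref` in the least-squares sense; return the warped signal.
    A simplified illustration of affine time alignment, not CISA itself."""
    t = np.arange(len(ref))
    best_err, best_warp = np.inf, sig
    for a in scales:
        for b in shifts:
            warped = np.interp(a * t + b, t, sig)  # resample sig at a*t + b
            err = np.sum((warped - ref) ** 2)
            if err < best_err:
                best_err, best_warp = err, warped
    return best_warp

# Synthetic example: the same Gaussian "P-wave" shape, shifted in time.
t = np.arange(200)
ref = np.exp(-0.5 * ((t - 100) / 15.0) ** 2)   # reference shape
sig = np.exp(-0.5 * ((t - 110) / 15.0) ** 2)   # identical shape, delayed
aligned = affine_align(ref, sig, scales=[1.0], shifts=range(-20, 21))
print(np.allclose(aligned, ref, atol=1e-6))  # True
```

    With the warps estimated, a mean shape can be formed by averaging the aligned signals, which is the quantity CISA defines as minimizing the sum of squared shape distances.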

  11. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences?similar to algebraic coding,?and also briefly discuss the main results following the?other approach,?that uses the theory of rank metric codes for network error correction of representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  12. Ask the experts: automation: part I.

    Science.gov (United States)

    Allinson, John L; Blick, Kenneth E; Cohen, Lucinda; Higton, David; Li, Ming

    2013-08-01

    Bioanalysis invited a selection of leading researchers to express their views on automation in the bioanalytical laboratory. The topics discussed include the challenges that the modern bioanalyst faces when integrating automation into existing drug-development processes, the impact of automation and how they envision the modern bioanalytical laboratory changing in the near future. Their enlightening responses provide a valuable insight into the impact of automation and the future of the constantly evolving bioanalytical laboratory.

  13. Office automation: a look beyond word processing

    OpenAIRE

    DuBois, Milan Ephriam, Jr.

    1983-01-01

    Approved for public release; distribution is unlimited. Word processing was the first of various forms of office automation technologies to gain widespread acceptance and usability in the business world. For many, it remains the only form of office automation technology. Office automation, however, is not just word processing, although it does include the function of facilitating and manipulating text. In reality, office automation is not one innovation, or one office system, or one tech...

  14. GUI test automation for Qt application

    OpenAIRE

    Wang, Lei

    2015-01-01

    GUI test automation is a popular and interesting subject in the testing industry. Many companies plan to start test automation projects in order to implement efficient, less expensive software testing. However, there are challenges for testing teams that lack experience performing GUI test automation. Many GUI test automation projects have ended in failure due to mistakes made during the early stages of the project. The major work of this thesis is to find a solution to the challenges of e...

  15. Secure Automated Microgrid Energy System

    Science.gov (United States)

    2016-12-01

    (EW-201340) Secure Automated Microgrid Energy System, December 2016. This document has been cleared for public release; Distribution Statement A. The report describes elements of the initial study and an operational power system model (feeder size, protective devices, generation sources, controllable loads, transformers).

  16. Adaptation : A Partially Automated Approach

    NARCIS (Netherlands)

    Manjing, Tham; Bukhsh, F.A.; Weigand, H.

    2014-01-01

    This paper showcases the possibility of creating an adaptive auditing system. Adaptation in an audit environment needs human intervention at some point. Based on a case study, this paper focuses on automation of the adaptation process. It is divided into solution design and validation parts. The artifact

  17. AUTOMATING ASSET KNOWLEDGE WITH MTCONNECT.

    Science.gov (United States)

    Venkatesh, Sid; Ly, Sidney; Manning, Martin; Michaloski, John; Proctor, Fred

    2016-01-01

    In order to maximize assets, manufacturers should use real-time knowledge garnered from ongoing and continuous collection and evaluation of factory-floor machine status data. In discrete parts manufacturing, factory machine monitoring has been difficult, due primarily to closed, proprietary automation equipment that makes integration difficult. Recently, there has been a push to apply the data acquisition concepts of MTConnect to the real-time acquisition of machine status data. MTConnect is an open, free specification aimed at overcoming the "Islands of Automation" dilemma on the shop floor. With automated asset analysis, manufacturers can improve production to become lean, efficient, and effective. The focus of this paper is the deployment of MTConnect to collect real-time machine status data for automated asset management. In addition, we leverage the ISO 22400 standard, which defines an asset and quantifies asset performance metrics. In conjunction with these goals, the deployment of MTConnect in a large aerospace manufacturing facility is studied, with emphasis on asset management and on understanding the impact of machine Overall Equipment Effectiveness (OEE) on manufacturing.
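    The OEE figure mentioned above is conventionally computed as the product of three factors: availability, performance, and quality. A minimal sketch using the common textbook definitions of those factors (ISO 22400 gives the formal KPI definitions; the shift numbers below are invented for illustration):

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness as the usual three-factor product.
    Uses the common textbook definitions of the factors; see ISO 22400
    for the formal KPI definitions."""
    availability = run_time / planned_time                     # uptime fraction
    performance = (ideal_cycle_time * total_count) / run_time  # speed fraction
    quality = good_count / total_count                         # yield fraction
    return availability * performance * quality

# Invented example: 480 min planned shift, 400 min actually running,
# 0.9 min ideal cycle time, 400 parts produced, 380 of them good.
print(f"{oee(480, 400, 0.9, 400, 380):.4f}")  # 0.7125
```

    Feeding such a calculation from live MTConnect data streams, rather than end-of-shift paperwork, is precisely the kind of automated asset analysis the abstract describes.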

  18. CCD characterization and measurements automation

    Czech Academy of Sciences Publication Activity Database

    Kotov, I.V.; Frank, J.; Kotov, A.I.; Kubánek, Petr; O´Connor, P.; Prouza, Michael; Radeka, V.; Takacs, P.

    2012-01-01

    Vol. 695, Dec (2012), 188-192 ISSN 0168-9002 R&D Projects: GA MŠk ME09052 Institutional research plan: CEZ:AV0Z10100502 Keywords : CCD * characterization * test automation Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.142, year: 2012

  19. Automated activation-analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Hensley, W.K.; Denton, M.M.; Garcia, S.R.

    1981-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described

  20. Automated Analysis of Infinite Scenarios

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2005-01-01

    The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many...