WorldWideScience

Sample records for field correction automated

  1. Peripheral refractive correction and automated perimetric profiles.

    Science.gov (United States)

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degrees, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetry was then undertaken at these eccentricities (stimulus sizes 0 and III) with and without the appropriate peripheral refractive correction using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  2. Space Weather Magnetometer Set with Automated AC Spacecraft Field Correction for GEO-KOMPSAT-2A

    Science.gov (United States)

    Auster, U.; Magnes, W.; Delva, M.; Valavanoglou, A.; Leitner, S.; Hillenmaier, O.; Strauch, C.; Brown, P.; Whiteside, B.; Bendyk, M.; Hilgers, A.; Kraft, S.; Luntama, J. P.; Seon, J.

    2016-05-01

    Monitoring the solar wind conditions, in particular its magnetic field (the interplanetary magnetic field) ahead of the Earth, is essential for accurate and reliable space weather forecasting. The magnetic condition of the spacecraft itself is a key parameter for the successful performance of the magnetometer onboard. In practice, a condition with negligible magnetic field of the spacecraft cannot always be fulfilled, and magnetic sources on the spacecraft interfere with the natural magnetic field measured by the space magnetometer. The presented "ready-to-use" Service Oriented Spacecraft Magnetometer (SOSMAG) is developed for use on any satellite implemented without a magnetic cleanliness programme. It enables detection of AC variations of the spacecraft field on a time scale suitable to distinguish the magnetic field variations relevant to space weather phenomena, such as a sudden increase in the interplanetary field or a southward turning. This is achieved through the use of dual fluxgate magnetometers on a short boom (1 m) and two additional AMR sensors on the spacecraft body, which monitor potential AC disturbers. The measurements of the latter sensors enable an automated correction of the AC signal contributions from the spacecraft in the final magnetic vector. After successful development and testing of the EQM prototype, a flight model (FM) is being built for the Korean satellite GEO-KOMPSAT-2A, with launch foreseen in 2018.
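
    The abstract does not spell out the correction algorithm itself. As a rough illustration only, assuming a linear coupling between the AC parts of the body-mounted AMR readings and the disturbance seen at the boom-mounted fluxgate (the coupling matrix C, the window length and all signals below are hypothetical placeholders, not values from the paper), such an automated AC correction might be sketched as:

      import numpy as np

      # Hypothetical coupling matrix mapping the AC parts of the two AMR sensors
      # (6 components) to the disturbance at the boom sensor (3 components).
      # In practice it would be estimated during an in-flight calibration phase.
      C = np.zeros((3, 6))

      def correct_ac(b_fluxgate, b_amr1, b_amr2, window=600):
          """Remove spacecraft AC disturbances from boom magnetometer data.

          b_fluxgate     : (N, 3) boom fluxgate samples [nT]
          b_amr1, b_amr2 : (N, 3) body-mounted AMR samples [nT]
          window         : samples of the running mean defining the slow (DC) part
          """
          amr = np.hstack([b_amr1, b_amr2])                      # (N, 6)
          kernel = np.ones(window) / window
          running_mean = np.apply_along_axis(
              lambda x: np.convolve(x, kernel, mode="same"), 0, amr)
          amr_ac = amr - running_mean                            # AC part only
          return b_fluxgate - amr_ac @ C.T                       # corrected vectors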

  3. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors because of their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective in soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a

  4. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses

    Directory of Open Access Journals (Sweden)

    Kazunori Hirasawa

    2017-10-01

    AIM: To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. METHODS: Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). RESULTS: The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB. CONCLUSION: Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicated that, when the dome-shaped visual field test is performed with eyes with large pupils and wearing refractive multifocal CLs, distance correction without additional near correction is to be recommended.

  5. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    The paper considers the task of generating the requirements and creating a calibration target for automated microscopy systems (AMS) of biomedical specimens to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particular useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The study of the influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was performed. Six train sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the train sets, were formed. It was found that the train set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
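
    The record describes fitting color-correction equations in RGB space from polynomial combinations of color coordinates, with coefficients estimated by conditioned (regularized) least squares on captured images of target color fields. A minimal sketch of that kind of fit, using a second-order polynomial expansion, a simple ridge penalty as the conditioning, and random stand-in patch values (the expansion terms, the regularization weight and the data are assumptions, not the study's configuration), is:

      import numpy as np

      def poly_features(rgb):
          """Second-order polynomial expansion of RGB coordinates (one plausible choice)."""
          r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
          return np.column_stack([np.ones_like(r), r, g, b,
                                  r * g, r * b, g * b, r * r, g * g, b * b])

      def fit_color_correction(measured, reference, lam=1e-3):
          """Ridge-regularized least squares for the correction coefficients."""
          X = poly_features(measured)                 # features of captured patches
          I = np.eye(X.shape[1])
          return np.linalg.solve(X.T @ X + lam * I, X.T @ reference)

      def apply_color_correction(rgb, coeffs):
          return poly_features(rgb) @ coeffs

      # toy usage with random stand-in patch values (no real calibration target here)
      rng = np.random.default_rng(0)
      measured, reference = rng.random((60, 3)), rng.random((60, 3))
      corrected = apply_color_correction(measured, fit_color_correction(measured, reference))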

  6. Automation of one-loop QCD corrections

    CERN Document Server

    Hirschi, Valentin; Frixione, Stefano; Garzelli, Maria Vittoria; Maltoni, Fabio; Pittau, Roberto

    2011-01-01

    We present the complete automation of the computation of one-loop QCD corrections, including UV renormalization, to an arbitrary scattering process in the Standard Model. This is achieved by embedding the OPP integrand reduction technique, as implemented in CutTools, into the MadGraph framework. By interfacing the tool so constructed, which we dub MadLoop, with MadFKS, the fully automatic computation of any infrared-safe observable at the next-to-leading order in QCD is attained. We demonstrate the flexibility and the reach of our method by calculating the production rates for a variety of processes at the 7 TeV LHC.

  7. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. The baseline in a spectral signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results. Therefore, these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, a fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through continuous wavelet transformation and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectral signal, a visible spectral signal and a Raman spectral signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
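
    AWFPSI itself is not reproduced in the record; as a rough stand-in for the general idea (feature points from a continuous wavelet transform, baseline interpolated across the segments lying between detected peaks), a sketch using SciPy's CWT-based peak finder might look like the following (the width range and the peak margin are arbitrary assumptions):

      import numpy as np
      from scipy.signal import find_peaks_cwt

      def cwt_segment_baseline(y, widths=np.arange(1, 30), margin=10):
          """Estimate a baseline from wavelet-detected peak positions."""
          y = np.asarray(y, dtype=float)
          x = np.arange(len(y))
          peaks = find_peaks_cwt(y, widths)              # wavelet feature points
          mask = np.ones(len(y), dtype=bool)             # True = background point
          for p in peaks:
              mask[max(0, p - margin):p + margin] = False
          baseline = np.interp(x, x[mask], y[mask])      # segment interpolation
          return y - baseline, baseline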

  8. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents, and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained from a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  9. An automated portal verification system for the tangential breast portal field

    International Nuclear Information System (INIS)

    Yin, F.-F.; Lai, W.; Chen, C. W.; Nelson, D. F.

    1995-01-01

    Purpose/Objective: In order to ensure that the treatment is delivered as planned, a portal image is acquired in the accelerator and is compared to the reference image. At present, this comparison is performed by radiation oncologists based on manually identified features, which is both time-consuming and potentially error-prone. With the introduction of various electronic portal imaging devices, real-time patient positioning correction is becoming clinically feasible as a replacement for time-delayed analysis using films. However, this procedure requires the presence of radiation oncologists during patient treatment, which is not cost-effective and practically not realistic. Therefore, the efficiency and quality of radiation therapy could be substantially improved if this procedure were automated. The purpose of this study is to develop a fully computerized verification system for the radiation therapy of breast cancer, for which a similar treatment setup is generally employed. Materials/Methods: The automated verification system involves image acquisition, image feature extraction, feature correlation between reference and portal images, and quantitative evaluation of patient setup. In this study, a matrix liquid ion-chamber EPID directly attached to a Varian CL2100C accelerator was used to acquire digital portal images. For effective use of computation memory, the 12-bit gray levels in the original portal images were quantized to a range of 8-bit gray levels. A typical breast portal image includes three important components: breast and lung tissues in the treatment field, air space within the treatment field, and the non-irradiated region. A hierarchical region processing technique was developed to separate these regions sequentially. The inherent hierarchical features were formulated based on the different radiation attenuation of the different regions as: treatment field edge -- breast skin line -- chest wall. Initially, a combination of a Canny edge detector and a constrained

  10. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
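
    The published iteration rule and the adaptive choice of the structuring element are not given in the record; a simplified stand-in in the same spirit (grow the structuring element of a grey-scale opening until the baseline estimate stops changing) could be sketched as follows, with the start size, step and tolerance being arbitrary assumptions:

      import numpy as np
      from scipy.ndimage import grey_opening

      def morphological_baseline(y, start=3, step=2, max_size=301, tol=1e-4):
          """Iteratively enlarge the opening's structuring element until stable."""
          y = np.asarray(y, dtype=float)
          prev = grey_opening(y, size=start)
          size = start + step
          while size <= max_size:
              cur = grey_opening(y, size=size)
              # stop once a larger element no longer changes the estimate much
              if np.linalg.norm(cur - prev) <= tol * (np.linalg.norm(prev) + 1e-12):
                  prev = cur
                  break
              prev, size = cur, size + step
          return y - prev, prev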

  11. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  12. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
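
    To make the ingredients concrete, a toy sketch of a simultaneous phase and baseline fit is given below: zero- and first-order phase parameters are optimized against an objective that penalizes negative corrected intensities, with the baseline taken from a Whittaker smoother. The objective weights, the smoother parameter and the optimizer are assumptions for illustration, not the paper's Pareto formulation, and the dense solve is only suitable for short demo spectra.

      import numpy as np
      from scipy.optimize import minimize

      def whittaker_smooth(y, lam=1e5):
          """Whittaker smoother: minimize ||z - y||^2 + lam*||D2 z||^2 (dense demo version)."""
          n = len(y)
          D = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
          return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

      def objective(params, spectrum):
          phi0, phi1 = params
          k = np.arange(len(spectrum)) / len(spectrum)
          real = np.real(spectrum * np.exp(1j * (phi0 + phi1 * k)))
          baseline = whittaker_smooth(real)
          resid = real - baseline
          # penalty on negative intensities plus a weak roughness regularization
          return np.sum(np.minimum(resid, 0.0) ** 2) + 1e-3 * np.sum(np.diff(baseline) ** 2)

      def phase_and_baseline_correct(fid):
          spectrum = np.fft.fftshift(np.fft.fft(fid))
          res = minimize(objective, x0=[0.0, 0.0], args=(spectrum,), method="Nelder-Mead")
          phi0, phi1 = res.x
          k = np.arange(len(spectrum)) / len(spectrum)
          phased = np.real(spectrum * np.exp(1j * (phi0 + phi1 * k)))
          return phased - whittaker_smooth(phased)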

  13. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, segmentation of high CT number objects using combined region- and boundary-based segmentation and, second, classification of the objects into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing, to match the resolution of the PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
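
    The last step the record mentions, converting CT numbers to 511 keV linear attenuation coefficients with a piecewise calibration curve, is commonly realized as a bilinear mapping. A schematic sketch is shown below; the threshold and slopes are generic illustrative values, not the calibration used in the cited study.

      import numpy as np

      def hu_to_mu_511kev(hu, mu_water=0.096):
          """Bilinear CT-number-to-attenuation conversion (illustrative coefficients).

          Below the threshold the voxel is treated as an air/water mixture, above it
          as a water/bone mixture; both slopes here are placeholders.
          """
          hu = np.asarray(hu, dtype=float)
          low = mu_water * (1.0 + hu / 1000.0)          # air-water segment
          high = mu_water + 5.0e-5 * hu                 # water-bone segment (assumed slope)
          return np.clip(np.where(hu < 0.0, low, high), 0.0, None)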

  14. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    Science.gov (United States)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  15. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    International Nuclear Information System (INIS)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  16. Relativistic Scott correction in self-generated magnetic fields

    DEFF Research Database (Denmark)

    Erdos, Laszlo; Fournais, Søren; Solovej, Jan Philip

    2012-01-01

    …$Z^{7/3}$ and it is unchanged by including the self-generated magnetic field. We prove the first correction term to this energy, the so-called Scott correction of the form $S(\alpha Z) Z^2$. The current paper extends the result of \cite{SSS} on the Scott correction for relativistic molecules to include a self-generated magnetic field. Furthermore, we show that the corresponding Scott correction function $S$, first identified in \cite{SSS}, is unchanged by including a magnetic field. We also prove new Lieb-Thirring inequalities for the relativistic kinetic energy with magnetic fields.

  17. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses.

    Science.gov (United States)

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

    To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB ( P correction without additional near correction is to be recommended.

  18. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    Science.gov (United States)

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  19. Voxel-based morphometry and automated lobar volumetry: The trade-off between spatial scale and statistical correction

    Science.gov (United States)

    Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.

    2011-01-01

    Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660

  20. Automated aberration correction of arbitrary laser modes in high numerical aperture systems

    OpenAIRE

    Hering, Julian; Waller, Erik H.; Freymann, Georg von

    2016-01-01

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture...

  1. Automated aberration correction of arbitrary laser modes in high numerical aperture systems.

    Science.gov (United States)

    Hering, Julian; Waller, Erik H; Von Freymann, Georg

    2016-12-12

    Controlling the point-spread function in three-dimensional laser lithography is crucial for fabricating structures with the highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing to create well-defined doughnut modes, bottle beams or multi-foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for the amplitude and phase of the pupil function, our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography, applications like optical tweezers and microscopy might also benefit from the presented method.
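
    The modified algorithm of the paper is not reproduced in the abstract; the plain Gerchberg-Saxton loop on which such SLM-based aberration compensation schemes are typically built can be sketched as follows (the flat source amplitude and the Gaussian target are arbitrary stand-ins):

      import numpy as np

      def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
          """Plain Gerchberg-Saxton iteration between the SLM (pupil) and focal planes."""
          rng = np.random.default_rng(seed)
          field = source_amp * np.exp(1j * 2 * np.pi * rng.random(source_amp.shape))
          for _ in range(n_iter):
              far = np.fft.fft2(field)
              far = target_amp * np.exp(1j * np.angle(far))     # impose target amplitude
              near = np.fft.ifft2(far)
              field = source_amp * np.exp(1j * np.angle(near))  # impose source amplitude
          return np.angle(field)                                # phase to display on the SLM

      # toy usage: flat illumination shaped into a Gaussian focal spot
      n = 256
      y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
      slm_phase = gerchberg_saxton(np.ones((n, n)), np.exp(-(x**2 + y**2) / 50.0))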

  2. Software-controlled, highly automated intrafraction prostate motion correction with intrafraction stereographic targeting: System description and clinical results

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C. J. de; Rajan, Vinayakrishnan; Dirkx, Maarten L. P.; Os, Marjolein J. H. van; Incrocci, Luca; Heijmen, Ben J. M.

    2012-01-01

    Purpose: A new system for software-controlled, highly automated correction of intrafraction prostate motion, "intrafraction stereographic targeting" (iSGT), is described and evaluated. Methods: At our institute, daily prostate positioning is routinely performed at the start of treatment beam delivery using stereographic targeting (SGT). iSGT was implemented by extension of the SGT software to facilitate fast and accurate intrafraction motion corrections with minimal user interaction. iSGT entails megavoltage (MV) image acquisitions with the first segment of selected IMRT beams, automatic registration of implanted markers, followed by remote couch repositioning to correct for intrafraction motion above a predefined threshold, prior to delivery of the remaining segments. For a group of 120 patients, iSGT with corrections for two nearly lateral beams was evaluated in terms of workload and impact on effective intrafraction displacements in the sagittal plane. Results: SDs of systematic (Σ) and random (σ) displacements relative to the planning CT, measured directly after the initial SGT setup correction, were reduced by iSGT to effective values (Σ_eff, σ_eff) of < 0.7 mm, requiring corrections in 82.4% of the fractions. Because iSGT is highly automated, the extra time added by iSGT is <30 s if a correction is required. Conclusions: Without increasing imaging dose, iSGT successfully reduces intrafraction prostate motion with minimal workload and increase in fraction time. An action level of 2 mm is recommended.

  3. Heel effect adaptive flat field correction of digital x-ray detectors

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yongjian [X-ray Products, Varian Medical Systems Inc., Liverpool, New York 13088 (United States); Wang, Jue [Department of Mathematics, Union College, Schenectady, New York 12308 (United States)

    2013-08-15

    Purpose: The anode heel effect renders large-scale background nonuniformities in digital radiographs. Conventional offset/gain calibration is performed at a single source-to-image distance (SID) and disregards the SID-dependent characteristic of the heel effect. It results in a residual nonuniform background in the corrected radiographs when the SID settings for calibration and correction differ. In this work, the authors develop a robust and efficient computational method for digital x-ray detector gain correction adapted to the SID-variant heel effect, without resorting to physical filters, phantoms, complicated heel effect models, or multiple-SID calibration and interpolation. Methods: The authors present the Duo-SID projection correction method. In our approach, conventional offset/gain calibrations are performed only twice, at the minimum and maximum SIDs of the system in typical clinical use. A fast iterative separation algorithm is devised to extract the detector gain and basis heel patterns from the min/max SID calibrations. The resultant detector gain is independent of SID, while the basis heel patterns are parameterized by the min- and max-SID. The heel pattern at any SID is obtained from the min-SID basis heel pattern via projection imaging principles. The system gain desired at a specific acquisition SID is then constructed using the projected heel pattern and the detector gain map. Results: The method was evaluated for flat field and anatomical phantom image corrections. It demonstrated promising improvements over interpolation and conventional gain calibration/correction methods, lowering their correction errors by approximately 70% and 80%, respectively. The separation algorithm was able to extract the detector gain and heel patterns with less than 2% error, and the Duo-SID corrected images showed a perceptually appealing uniform background across the detector. Conclusions: The Duo-SID correction method has substantially improved on conventional offset/gain corrections for

  4. Heel effect adaptive flat field correction of digital x-ray detectors

    International Nuclear Information System (INIS)

    Yu, Yongjian; Wang, Jue

    2013-01-01

    Purpose: The anode heel effect renders large-scale background nonuniformities in digital radiographs. Conventional offset/gain calibration is performed at a single source-to-image distance (SID) and disregards the SID-dependent characteristic of the heel effect. It results in a residual nonuniform background in the corrected radiographs when the SID settings for calibration and correction differ. In this work, the authors develop a robust and efficient computational method for digital x-ray detector gain correction adapted to the SID-variant heel effect, without resorting to physical filters, phantoms, complicated heel effect models, or multiple-SID calibration and interpolation. Methods: The authors present the Duo-SID projection correction method. In our approach, conventional offset/gain calibrations are performed only twice, at the minimum and maximum SIDs of the system in typical clinical use. A fast iterative separation algorithm is devised to extract the detector gain and basis heel patterns from the min/max SID calibrations. The resultant detector gain is independent of SID, while the basis heel patterns are parameterized by the min- and max-SID. The heel pattern at any SID is obtained from the min-SID basis heel pattern via projection imaging principles. The system gain desired at a specific acquisition SID is then constructed using the projected heel pattern and the detector gain map. Results: The method was evaluated for flat field and anatomical phantom image corrections. It demonstrated promising improvements over interpolation and conventional gain calibration/correction methods, lowering their correction errors by approximately 70% and 80%, respectively. The separation algorithm was able to extract the detector gain and heel patterns with less than 2% error, and the Duo-SID corrected images showed a perceptually appealing uniform background across the detector. Conclusions: The Duo-SID correction method has substantially improved on conventional offset/gain corrections for
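
    From the abstract, the correction divides an offset-subtracted frame by a system gain built from the SID-independent detector gain and a heel pattern projected from the minimum-SID calibration to the acquisition SID. The sketch below illustrates only that final step; the simple magnification-style rescale standing in for the projection, and the assumption that the acquisition SID is not smaller than the calibration minimum, are placeholders rather than the paper's projection-imaging model.

      import numpy as np
      from scipy.ndimage import zoom

      def project_heel_pattern(heel_min_sid, sid_min, sid):
          """Placeholder projection: rescale the min-SID heel pattern to another SID."""
          scaled = zoom(heel_min_sid, sid / sid_min, order=1)    # assumes sid >= sid_min
          r0 = (scaled.shape[0] - heel_min_sid.shape[0]) // 2
          c0 = (scaled.shape[1] - heel_min_sid.shape[1]) // 2
          return scaled[r0:r0 + heel_min_sid.shape[0], c0:c0 + heel_min_sid.shape[1]]

      def duo_sid_correct(raw, offset, detector_gain, heel_min_sid, sid_min, sid):
          """Flat-field correct a frame with an SID-adapted system gain."""
          system_gain = detector_gain * project_heel_pattern(heel_min_sid, sid_min, sid)
          return (raw - offset) / np.clip(system_gain, 1e-6, None)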

  5. Field correction for a one meter long permanent-magnet wiggler

    International Nuclear Information System (INIS)

    Fortgang, C.M.

    1992-01-01

    Field errors in wigglers are usually measured and corrected on-axis only, thus ignoring field error gradients. We find that gradient scale lengths are of the same order as electron beam size and therefore can be important. We report measurements of wiggler field errors in three dimensions and expansion of these errors out to first order (including two dipole and two quadrupole components). Conventional techniques for correcting on-axis errors (order zero) create new off-axis (first order) errors. We present a new approach to correcting wiggler fields out to first order. By correcting quadrupole errors in addition to the usual dipole correction, we minimize growth in electron beam size. Correction to first order yields better overlap between the electron and optical beams and should improve laser gain. (Author) 2 refs., 5 figs

  6. Assessment of automated disease detection in diabetic retinopathy screening using two-field photography.

    Science.gov (United States)

    Goatman, Keith; Charnley, Amanda; Webster, Laura; Nussey, Stephen

    2011-01-01

    To assess the performance of automated disease detection in diabetic retinopathy screening using two-field mydriatic photography. Images from 8,271 sequential patient screening episodes from a South London diabetic retinopathy screening service were processed by the Medalytix iGrading™ automated grading system. For each screening episode macular-centred and disc-centred images of both eyes were acquired and independently graded according to the English national grading scheme. Where discrepancies were found between the automated result and original manual grade, internal and external arbitration was used to determine the final study grades. Two versions of the software were used: one that detected microaneurysms alone, and one that detected blot haemorrhages and exudates in addition to microaneurysms. Results for each version were calculated once using both fields and once using the macula-centred field alone. Of the 8,271 episodes, 346 (4.2%) were considered unassessable. Referable disease was detected in 587 episodes (7.1%). The sensitivity of the automated system for detecting unassessable images ranged from 97.4% to 99.1% depending on configuration. The sensitivity of the automated system for referable episodes ranged from 98.3% to 99.3%. All the episodes that included proliferative or pre-proliferative retinopathy were detected by the automated system regardless of configuration (192/192, 95% confidence interval 98.0% to 100%). If implemented as the first step in grading, the automated system would have reduced the manual grading effort by between 2,183 and 3,147 patient episodes (26.4% to 38.1%). Automated grading can safely reduce the workload of manual grading using two-field mydriatic photography in a routine screening service.

  7. CONCEPTUAL STRUCTURAL-LOGIC DIAGRAM FOR THE AUTOMATION OF AN EXPERT STUDY ON THE ISSUE OF CORRECTNESS OF CALCULATION OF THE TAX ON PROFIT OF ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Andrey N. Ishchenko

    2014-01-01

    In this article, the possibility of automating an expert study on the question of the correctness of the calculation of the tax on profit of organizations is considered. The problems of formalizing expert research in this field are discussed, and the structure of the expert conclusion is specified. The author proposes a conceptual structural-logic diagram for automating expert research in this area.

  8. Models of Automation Surprise: Results of a Field Survey in Aviation

    Directory of Open Access Journals (Sweden)

    Robert De Boer

    2017-09-01

    Automation surprises in aviation continue to be a significant safety concern and the community's search for effective strategies to mitigate them is ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration with automation. In this paper, we report the results of a field study that empirically compared and contrasted two models of automation surprises: a normative individual-cognition model and a sensemaking model based on distributed cognition. Our data prove a good fit for the sensemaking model. This finding is relevant for aviation safety, since our understanding of the cognitive processes that govern human interaction with automation drives what we need to do to reduce the frequency of automation-induced events.

  9. The Center-TRACON Automation System: Simulation and field testing

    Science.gov (United States)

    Denery, Dallas G.; Erzberger, Heinz

    1995-01-01

    A new concept for air traffic management in the terminal area, implemented as the Center-TRACON Automation System, has been under development at NASA Ames in a cooperative program with the FAA since 1991. The development has been strongly influenced by concurrent simulation and field site evaluations. The role of simulation and field activities in the development process will be discussed. Results of recent simulation and field tests will be presented.

  10. Mean Field Analysis of Quantum Annealing Correction.

    Science.gov (United States)

    Matsuura, Shunji; Nishimori, Hidetoshi; Albash, Tameem; Lidar, Daniel A

    2016-06-03

    Quantum annealing correction (QAC) is a method that combines encoding with energy penalties and decoding to suppress and correct errors that degrade the performance of quantum annealers in solving optimization problems. While QAC has been experimentally demonstrated to successfully error correct a range of optimization problems, a clear understanding of its operating mechanism has been lacking. Here we bridge this gap using tools from quantum statistical mechanics. We study analytically tractable models using a mean-field analysis, specifically the p-body ferromagnetic infinite-range transverse-field Ising model as well as the quantum Hopfield model. We demonstrate that for p=2, where the phase transition is of second order, QAC pushes the transition to increasingly larger transverse field strengths. For p≥3, where the phase transition is of first order, QAC softens the closing of the gap for small energy penalty values and prevents its closure for sufficiently large energy penalty values. Thus QAC provides protection from excitations that occur near the quantum critical point. We find similar results for the Hopfield model, thus demonstrating that our conclusions hold in the presence of disorder.
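
    For reference, the p-body ferromagnetic infinite-range transverse-field Ising Hamiltonian studied in such mean-field analyses is conventionally written as (standard textbook form, not quoted from the paper):

      H = -N\left(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{z}\right)^{p} - \Gamma\sum_{i=1}^{N}\sigma_i^{x}

    where Γ is the transverse-field strength; as the abstract notes, the ferromagnetic transition of this model is second order for p = 2 and first order for p ≥ 3.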

  11. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  12. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    International Nuclear Information System (INIS)

    Bottaro, Marcio; Nagy, Balazs Vince; Soares, Fernanda Cristina Salvador; Rosendo, Danilo Cabral

    2017-01-01

    Introduction: To analyze edge detection and optical contrast calculation of light field-indicators used in X-ray via automated- and observer-based methods, and comparison with current standard approaches, which do not give exact definition for light field edge determination. Methods: Automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naive human observers were asked to mark the light field edge according to their own determination. Different interpretations of the contrast were then calculated and compared. Results: In contrast to automated measurements of edge definition and detection, measurements by human observers showed large inter-observer variation independent of their training with X-ray equipment. Different contrast calculations considering the different edge definitions gave very different contrast values. Conclusion: As the main conclusion, we propose a more exact edge definition of the X-ray light field, corresponding well to the average human observer's edge determination. The new edge definition method with automated systems would reduce human variability in edge determination. Such errors could potentially affect the approval of X-ray equipment, and also increase the radiation dose. The automated measurement based on human observers’ edge definition and the corresponding contrast calculation may lead to a more precise light field calibration, which enables reduced irradiation doses on radiology patients. (author)

  13. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    Directory of Open Access Journals (Sweden)

    Márcio Bottaro

    Introduction: To analyze edge detection and optical contrast calculation of light field-indicators used in X-ray via automated- and observer-based methods, and comparison with current standard approaches, which do not give exact definition for light field edge determination. Methods: Automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naïve human observers were asked to mark the light field edge according to their own determination. Different interpretations of the contrast were then calculated and compared. Results: In contrast to automated measurements of edge definition and detection, measurements by human observers showed large inter-observer variation independent of their training with X-ray equipment. Different contrast calculations considering the different edge definitions gave very different contrast values. Conclusion: As the main conclusion, we propose a more exact edge definition of the X-ray light field, corresponding well to the average human observer's edge determination. The new edge definition method with automated systems would reduce human variability in edge determination. Such errors could potentially affect the approval of X-ray equipment, and also increase the radiation dose. The automated measurement based on human observers' edge definition and the corresponding contrast calculation may lead to a more precise light field calibration, which enables reduced irradiation doses on radiology patients.

  14. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    Energy Technology Data Exchange (ETDEWEB)

    Bottaro, Marcio; Nagy, Balazs Vince; Soares, Fernanda Cristina Salvador; Rosendo, Danilo Cabral, E-mail: marcio@iee.usp.br [Universidade de Sao Paulo (USP), SP (Brazil); Optics and Engineering Informatics, Budapest University of Technology and Economics, Budapest (Hungary)

    2017-04-15

    Introduction: To analyze edge detection and optical contrast calculation of light field-indicators used in X-ray via automated- and observer-based methods, and comparison with current standard approaches, which do not give exact definition for light field edge determination. Methods: Automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naive human observers were asked to mark the light field edge according to their own determination. Different interpretations of the contrast were then calculated and compared. Results: In contrast to automated measurements of edge definition and detection, measurements by human observers showed large inter-observer variation independent of their training with X-ray equipment. Different contrast calculations considering the different edge definitions gave very different contrast values. Conclusion: As the main conclusion, we propose a more exact edge definition of the X-ray light field, corresponding well to the average human observer's edge determination. The new edge definition method with automated systems would reduce human variability in edge determination. Such errors could potentially affect the approval of X-ray equipment, and also increase the radiation dose. The automated measurement based on human observers’ edge definition and the corresponding contrast calculation may lead to a more precise light field calibration, which enables reduced irradiation doses on radiology patients. (author)

  15. Efficient Photometry In-Frame Calibration (EPIC) Gaussian Corrections for Automated Background Normalization of Rate-Tracked Satellite Imagery

    Science.gov (United States)

    Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.

    Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames that are obtained to correct for charge coupled device (CCD) dark (thermal) noise and CCD quantum efficiency/optical path vignetting effects respectively. It is necessary to account/calibrate for these effects so that the brightness of objects of interest (e.g. stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, annuluses are measured around each detected object whose contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may be simply summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson distributed data closely resembles Gaussian data for parameterized means greater than 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images using the following technique. A deck of approximately 50-100 images is combined by performing an independent median
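
    The per-pixel normalization described in the abstract (treat each pixel's background as approximately Gaussian, subtract a per-pixel bias and divide by a per-pixel standard deviation estimated from the deck of rate-tracked frames) can be sketched roughly as follows; the median-based bias estimate and the example detection threshold are assumptions, not the tool's exact implementation.

      import numpy as np

      def epic_style_normalize(deck, frame):
          """Per-pixel background normalization from a deck of rate-tracked frames.

          deck  : (K, H, W) stack used to characterize the background
          frame : (H, W) frame to normalize
          """
          bias = np.median(deck, axis=0)               # per-pixel background level
          sigma = np.std(deck, axis=0, ddof=1)         # Poisson ~ Gaussian for means > ~10
          sigma = np.where(sigma > 0, sigma, 1.0)      # guard against constant pixels
          return (frame - bias) / sigma                # ~zero-mean, unit-variance background

      # a constant threshold can then be applied for CFAR-style detection,
      # e.g. candidate_pixels = epic_style_normalize(deck, frame) > 5.0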

  16. Magnetic field measurement and correction of VECC K500 superconducting cyclotron

    International Nuclear Information System (INIS)

    Dey, M.K.; Debnath, J.; Bhunia, U.; Pradhan, J.; Rashid, H.; Paul, S.; Dutta, A.; Naser, Z.A.; Singh, V.; Pal, G.; Nandi, C.; Dasgupta, S.; Bhattacharya, S.; Pal, S.; Roy, A.; Bhattacharya, T.; Bhole, R.B.; Bhale, D.; Chatterjee, M.; Prasad, R.; Nabhiraj, P.Y.; Hazra, D.P.; Mallik, C.; Bhandari, R.K.

    2006-01-01

    The VECC K500 superconducting cyclotron magnet is commissioned and magnetic field measurement and correction program was successfully completed in March 2006. Here we report the analysis of the measured field data and subsequent correction of the magnet to improve the field quality. (author)

  17. Radiation corrections to quantum processes in an intense electromagnetic field

    International Nuclear Information System (INIS)

    Narozhny, N.B.

    1979-01-01

    A derivation of an asymptotic expression for the mass correction of order α to the electron propagator in an intense electromagnetic field is presented. It is used for the calculation of radiation corrections to the electron and photon elastic scattering amplitudes in the α³ approximation. All proper diagrams contributing to the amplitudes and containing the above-mentioned correction to the propagator were considered, but not those which include vertex corrections. It is shown that the expansion parameter of the perturbation theory of quantum electrodynamics in intense fields grows not more slowly than αχ^{1/3}, at least for the electron amplitude, where χ = [(eF_{μν}p_ν)²]^{1/2}/m³, p is the momentum of the electron, and F is the electromagnetic field tensor.

  18. Text recognition and correction for automated data collection by mobile devices

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
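
    The paper's Knowledge Based Correction (KBC) method is not detailed in the abstract; as a generic stand-in for the idea of correcting OCR output against domain knowledge, the sketch below runs Tesseract OCR on a receipt image and snaps each token to the closest entry of a small lexicon. The lexicon, the similarity cutoff and the use of pytesseract/difflib are assumptions for illustration only, not the authors' KBC algorithm.

      import difflib

      import pytesseract                 # assumes the Tesseract OCR engine is installed
      from PIL import Image

      PRODUCT_LEXICON = ["milk", "bread", "butter", "coffee", "sugar"]   # hypothetical

      def recognize_and_correct(image_path):
          """OCR a receipt photo, then snap each token to the closest known word."""
          raw_text = pytesseract.image_to_string(Image.open(image_path))
          corrected = []
          for token in raw_text.split():
              match = difflib.get_close_matches(token.lower(), PRODUCT_LEXICON,
                                                n=1, cutoff=0.8)
              corrected.append(match[0] if match else token)
          return " ".join(corrected)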

  19. Scatter factor corrections for elongated fields

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sohn, W.H.; Sibata, C.H.; McCarthy, W.A.

    1989-01-01

    Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that for every energy the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor

  20. Correction of the closed orbit and vertical dispersion and the tuning and field correction system in ISABELLE

    International Nuclear Information System (INIS)

    Parzen, G.

    1979-01-01

    Each ring in ISABELLE will have 10 separately powered systematic field correction coils to make required corrections which are the same in corresponding magnets around the ring. These corrections include changing the ν-value, shaping the working line in ν-space, correction of field errors due to iron saturation effects, the conductor arrangements, the construction of the coil ends, diamagnetic effects in the superconductor and to rate-dependent induced currents. The twelve insertion quadrupoles in the insertion surrounding each crossing point will each have a quadrupole trim coil. The closed orbit will be controlled by a system of 84 horizontal dipole coils and 90 vertical dipole coils in each ring, each coil being separately powered. This system of dipole coils will also be used to correct the vertical dispersion at the crossing points. Two families of skew quadrupoles per ring will be provided for correction of the coupling between the horizontal and vertical motions. Although there will be 258 separately powered correction coils in each ring

  1. Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness

    International Nuclear Information System (INIS)

    Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathan E.

    2011-01-01

    Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields differ because of the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of the internal (I) coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve reliable predictability in tokamak error field correction.

  2. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    OpenAIRE

    Bottaro, Márcio; Nagy, Balázs Vince; Soares, Fernanda Cristina Salvador; Rosendo, Danilo Cabral

    2017-01-01

    Introduction: To analyze edge detection and optical contrast calculation of light field indicators used in X-ray via automated- and observer-based methods, and to compare them with current standard approaches, which do not give an exact definition for light field edge determination. Methods: An automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naïve human observers were asked to mark the light field edge according t...

  3. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    Science.gov (United States)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra, given the variability in both environmental mixture composition and PTFE baselines, remains open. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of the PTFE background, the goal of smoothing spline interpolation is to learn the baseline structure in the background region in order to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing-spline baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification
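
    A minimal sketch of spline-based baseline removal in the spirit described above, assuming analyte-free wavenumber windows have already been chosen by hand; the window bounds and smoothing factor are illustrative, not the protocol's tuned values.

        # Sketch: fit a smoothing spline to analyte-free (background) subregions of an
        # FT-IR spectrum and subtract the interpolated baseline everywhere.
        # Assumes wavenumber is sorted ascending; windows and s are assumptions.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def baseline_correct(wavenumber, absorbance, background_windows, smoothing=1e-4):
            mask = np.zeros_like(wavenumber, dtype=bool)
            for lo, hi in background_windows:
                mask |= (wavenumber >= lo) & (wavenumber <= hi)
            spline = UnivariateSpline(wavenumber[mask], absorbance[mask], s=smoothing)
            baseline = spline(wavenumber)
            return absorbance - baseline, baseline

        # Example (hypothetical): PTFE-dominated windows either side of the C-H stretch.
        # corrected, baseline = baseline_correct(nu, A, [(1500.0, 2500.0), (3300.0, 4000.0)])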

  4. Free-field correction values for Interacoustics DD 45 supra-aural audiometric earphones

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2010-01-01

    This paper reports free-field correction values for the Interacoustics DD 45 audiometric earphones. The free-field correction values for earphones provide the loudness-based equivalence to loudspeaker presentation. Correction values are especially used for the calibration of audiometric equipment...... for the acoustic coupler IEC 60318-3 (NBS 9-A) and for the ear simulator IEC 60318-1. The results are in good agreement with the results of another independent investigation. The reported free-field correction values may be used as part of the basis for future standardization of the DD 45 earphone....

  5. Automated NLO QCD corrections with WHIZARD

    International Nuclear Information System (INIS)

    Weiss, Christian; Siegen Univ.; Chokoufe Nejad, Bijan; Reuter, Juergen; Kilian, Wolfgang

    2015-10-01

    We briefly discuss the current status of NLO QCD automation in the Monte Carlo event generator WHIZARD. The functionality is presented for the explicit study of off-shell top quark production with associated backgrounds at a lepton collider.

  6. High order field-to-field corrections for imaging and overlay to achieve sub 20-nm lithography requirements

    Science.gov (United States)

    Mulkens, Jan; Kubis, Michael; Hinnen, Paul; de Graaf, Roelof; van der Laan, Hans; Padiy, Alexander; Menchtchikov, Boris

    2013-04-01

    Immersion lithography is being extended to the 20-nm and 14-nm nodes, and the lithography performance requirements need to be tightened further to enable this shrink. In this paper we present an integral method to enable high-order field-to-field corrections for both imaging and overlay, and we show that this method improves the performance by 20%-50%. The lithography architecture we build for these higher order corrections connects the dynamic scanner actuators with the angle-resolved scatterometer via a separate application server. Improvements of CD uniformity are based on enabling the use of the freeform intra-field dose actuator and field-to-field control of focus. The feedback control loop uses CD and focus targets placed on the production mask. For the overlay metrology we use small in-die diffraction-based overlay targets. Improvements of overlay are based on using the high-order intra-field correction actuators on a field-to-field basis. We use this to reduce the machine matching error, extending the heating control and extending the correction capability for process-induced errors.

  7. Field nonuniformity correction for quantitative analysis of digitized mammograms

    International Nuclear Information System (INIS)

    Pawluczyk, Olga; Yaffe, Martin J.

    2001-01-01

    Several factors, including the heel effect, variation in distance from the x-ray source to points in the image and path obliquity contribute to the signal nonuniformity of mammograms. To best use digitized mammograms for quantitative image analysis, these field non-uniformities must be corrected. An empirically based correction method, which uses a bowl-shaped calibration phantom, has been developed. Due to the annular spherical shape of the phantom, its attenuation is constant over the entire image. Remaining nonuniformities are due only to the heel and inverse square effects as well as the variable path through the beam filter, compression plate and image receptor. In logarithmic space, a normalized image of the phantom can be added to mammograms to correct for these effects. Then, an analytical correction for path obliquity in the breast can be applied to the images. It was found that the correction causes the errors associated with field nonuniformity to be reduced from 14% to 2% for a 4 cm block of material corresponding to a combination of 50% fibroglandular and 50% fatty breast tissue. A repeatability study has been conducted to show that in regions as far as 20 cm away from the chest wall, variations due to imaging conditions and phantom alignment contribute to <2% of overall corrected signal
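
    A schematic of the additive log-space correction described above; the array names are placeholders and the analytical path-obliquity step is omitted, so this is an illustration of the idea rather than the authors' implementation.

        # Sketch: correct a digitized mammogram for field nonuniformity by adding a
        # normalized phantom image in logarithmic space, as described above.
        # Assumes strictly positive pixel values; obliquity correction omitted.
        import numpy as np

        def correct_field_nonuniformity(mammogram, phantom):
            log_mammo = np.log(mammogram.astype(float))
            log_phantom = np.log(phantom.astype(float))
            # Normalize the phantom image so the correction vanishes at its mean level;
            # adding it compensates heel, inverse-square and filter/plate effects
            # captured by the constant-attenuation phantom.
            correction = log_phantom.mean() - log_phantom
            return np.exp(log_mammo + correction)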

  8. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr. Anant Kumar Tiwari; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier the power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing the power factor correction. The proposed automated project involves measuring the power factor of the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
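
    A hedged sketch of the control logic such a microcontroller implements: estimate the power factor from the voltage-current phase delay, then decide how many capacitor stages to switch in. The line frequency, stage size and target power factor below are invented for illustration.

        # Sketch of APFC control logic: phase delay -> power factor -> number of
        # capacitor stages to engage. All constants are illustrative assumptions.
        import math

        LINE_FREQ_HZ = 50.0
        STAGE_KVAR = 1.0       # reactive power per capacitor stage (assumed)
        TARGET_PF = 0.95

        def power_factor(delay_seconds: float) -> float:
            """Delay between voltage and current zero crossings -> cos(phi)."""
            phi = 2.0 * math.pi * LINE_FREQ_HZ * delay_seconds
            return math.cos(phi)

        def stages_needed(real_power_kw: float, pf_now: float) -> int:
            """Capacitive kvar needed to raise pf_now to TARGET_PF, in whole stages."""
            q_now = real_power_kw * math.tan(math.acos(pf_now))
            q_target = real_power_kw * math.tan(math.acos(TARGET_PF))
            return max(0, math.ceil((q_now - q_target) / STAGE_KVAR))

        # Example: a 10 kW load at PF 0.78 needs about 4.7 kvar, i.e. 5 one-kvar stages.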

  9. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    Science.gov (United States)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm-1 sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3x5 km/TES-pixel) but has lower spectral resolution (8 surface sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the

  10. Collective-field-corrected strong field approximation for laser-irradiated metal clusters

    International Nuclear Information System (INIS)

    Keil, Th; Bauer, D

    2014-01-01

    The strong field approximation (SFA) formulated in terms of so-called ‘quantum orbits’ led to much insight into intense-laser driven ionization dynamics. In plain SFA, the emitted electron is treated as a free electron in the laser field alone. However, with improving experimental techniques and more advanced numerical simulations, it becomes more and more obvious that the plain SFA misses interesting effects even on a qualitative level. Examples are holographic side lobes, the low-energy structure, radial patterns in photoelectron spectra at low kinetic energies and strongly rotated angular distributions. For this reason, increasing efforts have been recently devoted to Coulomb corrections of the SFA. In the current paper, we follow a similar line but consider ionization of metal clusters. It is known that photoelectrons from clusters can be much more energetic than those emitted from atoms or small molecules, especially if the Mie resonance of the expanding cluster is evoked. We develop a SFA that takes the collective field inside the cluster via the simple rigid-sphere model into account. Our approach is based on field-corrected quantum orbits so that the acceleration process (or any other spectral feature of interest) can be investigated in detail. (paper)

  11. Mass corrections in string theory and lattice field theory

    International Nuclear Information System (INIS)

    Del Debbio, Luigi; Kerrane, Eoin; Russo, Rodolfo

    2009-01-01

    Kaluza-Klein (KK) compactifications of higher-dimensional Yang-Mills theories contain a number of 4-dimensional scalars corresponding to the internal components of the gauge field. While at tree level the scalar zero modes are massless, it is well known that quantum corrections make them massive. We compute these radiative corrections at 1 loop in an effective field theory framework, using the background field method and proper Schwinger-time regularization. In order to clarify the proper treatment of the sum over KK modes in the effective field theory approach, we consider the same problem in two different UV completions of Yang-Mills: string theory and lattice field theory. In both cases, when the compactification radius R is much bigger than the scale of the UV completion (R ≫ √α′, a), we recover a mass renormalization that is independent of the UV scale and agrees with the one derived in the effective field theory approach. These results support the idea that the value of the mass corrections is, in this regime, universal for any UV completion that respects locality and gauge invariance. The string analysis suggests that this property holds also at higher loops. The lattice analysis suggests that the mass of the adjoint scalars appearing in N=2, 4 super Yang-Mills is highly suppressed, even if the lattice regularization breaks all supersymmetries explicitly. This is due to an interplay between the higher-dimensional gauge invariance and the degeneracy of bosonic and fermionic degrees of freedom.

  12. Automated Critical Peak Pricing Field Tests: 2006 Pilot Program Description and Results

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David; Motegi, Naoya; Kiliccote, Sila

    2007-06-19

    During 2006 Lawrence Berkeley National Laboratory (LBNL) and the Demand Response Research Center (DRRC) performed a technology evaluation for the Pacific Gas and Electric Company (PG&E) Emerging Technologies Programs. This report summarizes the design, deployment, and results from the 2006 Automated Critical Peak Pricing Program (Auto-CPP). The program was designed to evaluate the feasibility of deploying automation systems that allow customers to participate in critical peak pricing (CPP) with a fully-automated response. The 2006 program was in operation during the entire six-month CPP period from May through October. The methodology for this field study included site recruitment, control strategy development, automation system deployment, and evaluation of sites' participation in actual CPP events through the summer of 2006. LBNL recruited sites in PG&E's territory in northern California through contacts from PG&E account managers, conferences, and industry meetings. Each site contact signed a memorandum of understanding with LBNL that outlined the activities needed to participate in the Auto-CPP program. Each facility worked with LBNL to select and implement control strategies for demand response and developed automation system designs based on existing Internet connectivity and building control systems. Once the automation systems were installed, LBNL conducted communications tests to ensure that the Demand Response Automation Server (DRAS) correctly provided and logged the continuous communications of the CPP signals with the energy management and control system (EMCS) for each site. LBNL also observed and evaluated Demand Response (DR) shed strategies to ensure proper commissioning of controls. The communication system allowed sites to receive day-ahead as well as day-of signals for pre-cooling, a DR strategy used at a few sites. Measurement of demand response was conducted using two different baseline models for estimating peak load savings. One

  13. Loop Corrections to Standard Model fields in inflation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Department of Physics, The University of Texas at Dallas,800 W Campbell Rd, Richardson, TX 75080 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2016-08-08

    We calculate 1-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin-0, 1/2, and 1, with all renormalizable interactions during inflation. We pay special attention to the late-time divergences of loop corrections, and show that the divergences can be resummed into finite results in the late-time limit using dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.

  14. Heterotic α′-corrections in Double Field Theory

    OpenAIRE

    Bedoya, Oscar (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina); Marqués, Diego (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina); Núñez, Carmen (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina)

    2014-01-01

    We extend the generalized flux formulation of Double Field Theory to include all the first order bosonic contributions to the α′ expansion of the heterotic string low energy effective theory. The generalized tangent space and duality group are enhanced by α′ corrections, and the gauge symmetries are generated by the usual (gauged) generalized Lie derivative in the extended space. The generalized frame receives derivative corrections through the spin connection with torsion, which is incorpora...

  15. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody TM and Robotics Tilt Module TM are products of BrainLAB A.G., Heimstetten, Germany)

  16. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    2008-03-01

    Functional MRI resting state and connectivity studies of the brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from the lungs and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open source physiological signal processing program, called PhysioNoise, in the Python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.
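
    A small sketch of the cardiac peak-detection step such a tool performs before building noise regressors, using SciPy; the sampling rate and thresholds are illustrative assumptions, not PhysioNoise defaults.

        # Sketch: detect cardiac peaks in a pulse waveform so that cardiac-phase noise
        # regressors can be built afterwards. fs_hz and min_rate_hz are assumptions.
        import numpy as np
        from scipy.signal import find_peaks

        def cardiac_peak_times(waveform, fs_hz=500.0, max_rate_hz=3.0):
            distance = int(fs_hz / max_rate_hz)   # forbid beats closer than ~0.33 s
            peaks, _ = find_peaks(waveform, distance=distance,
                                  prominence=np.std(waveform))
            return peaks / fs_hz                  # peak times in seconds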

  17. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  18. In vivo robotics: the automation of neuroscience and other intact-system biological fields.

    Science.gov (United States)

    Kodandaramaiah, Suhasa B; Boyden, Edward S; Forest, Craig R

    2013-12-01

    Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to influence neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience and present a concrete example with our recent automation of in vivo whole-cell patch clamp electrophysiology of neurons in the living mouse brain. © 2013 New York Academy of Sciences.

  19. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  20. Longitudinal wake field corrections in circular machines

    International Nuclear Information System (INIS)

    Symon, K.R.

    1996-01-01

    In computations of longitudinal particle motions in accelerators and storage rings, the fields produced by the interactions of the beam with the cavity in which it circulates are usually calculated by multiplying Fourier components of the beam current by the appropriate impedances. This procedure neglects the slow variation with time of the Fourier coefficients and of the beam revolution frequency. When there are cavity elements with decay times that are comparable with or larger than the time during which changes in the beam parameters occur, these changes cannot be neglected. Corrections for this effect have been worked out in terms of the response functions of elements in the ring. The result is expressed as a correction to the impedance which depends on the way in which the beam parameters are changing. A method is presented for correcting a numerical simulation by keeping track of the steady-state and transient terms in the response of a cavity
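
    The baseline procedure the abstract starts from, before its transient correction, multiplies the Fourier components of the beam current by the impedance; a minimal frequency-domain sketch with an illustrative broadband-resonator impedance (the resonator parameters and sign convention are assumptions).

        # Sketch of the standard frequency-domain wake field calculation:
        # V(f) = -Z(f) * I(f), evaluated with an FFT. R_s, Q and f_r are illustrative.
        import numpy as np

        def induced_voltage(beam_current, dt, shunt_impedance=1e6, quality=50.0, f_res=5e8):
            n = beam_current.size
            freqs = np.fft.rfftfreq(n, d=dt)
            # Broadband resonator impedance Z(f) = R_s / (1 + i Q (f/f_r - f_r/f)).
            with np.errstate(divide="ignore", invalid="ignore"):
                z = shunt_impedance / (1.0 + 1j * quality * (freqs / f_res - f_res / freqs))
            z[0] = 0.0  # DC limit of the resonator impedance is zero
            current_spectrum = np.fft.rfft(beam_current)
            return np.fft.irfft(-z * current_spectrum, n=n)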

  1. Treatment planning for SBRT using automated field delivery: A case study

    International Nuclear Information System (INIS)

    Ritter, Timothy A.; Owen, Dawn; Brooks, Cassandra M.; Stenmark, Matthew H.

    2015-01-01

    Stereotactic body radiation therapy (SBRT) treatment planning and delivery can be accomplished using a variety of techniques that achieve highly conformal dose distributions. Herein, we describe a template-based automated treatment field approach that enables rapid delivery of more than 20 coplanar fields. A case study is presented to demonstrate how modest adaptations to traditional SBRT planning can be implemented to take clinical advantage of this technology. Treatment was planned for a left-sided lung lesion adjacent to the chest wall using 25 coplanar treatment fields spaced at 11° intervals. The plan spares the contralateral lung and is in compliance with the conformality standards set forth in Radiation Therapy Oncology Group protocol 0915, and the dose tolerances found in the report of the American Association of Physicists in Medicine Task Group 101. Using a standard template, treatment planning was accomplished in less than 20 minutes, and each 10 Gy fraction was delivered in approximately 5.4 minutes. For those centers equipped with linear accelerators capable of automated treatment field delivery, the use of more than 20 coplanar fields is a viable SBRT planning approach and yields excellent conformality and quality combined with rapid planning and treatment delivery. Although the case study discusses a laterally located lung lesion, this technique can be applied to centrally located tumors with similar results

  2. Massive Corrections to Entanglement in Minimal E8 Toda Field Theory

    Directory of Open Access Journals (Sweden)

    Olalla A. Castro-Alvaredo

    2017-02-01

    In this letter we study the exponentially decaying corrections to saturation of the second Rényi entropy of one interval of length L in minimal E8 Toda field theory. It has been known for some time that the entanglement entropy of a massive quantum field theory in 1+1 dimensions saturates to a constant value for m₁L ≫ 1, where m₁ is the mass of the lightest particle in the spectrum. Subsequently, results by Cardy, Castro-Alvaredo and Doyon have shown that there are exponentially decaying corrections to this behaviour which are characterised by Bessel functions with arguments proportional to m₁L. For the von Neumann entropy the leading correction to saturation takes the precise universal form −K₀(2m₁L)/8, whereas for the Rényi entropies leading corrections proportional to K₀(m₁L) are expected. Recent numerical work by Pálmai for the second Rényi entropy of minimal E8 Toda has found next-to-leading order corrections decaying as exp(−2m₁L) rather than the expected exp(−m₁L). In this paper we investigate the origin of this result and show that it is incorrect. An exact form factor computation of correlators of branch point twist fields reveals that the leading corrections are proportional to K₀(m₁L), as expected.

  3. Field test of the PNNL Automated Radioxenon Sampler/Analyzer (ARSA)

    International Nuclear Information System (INIS)

    Lagomarsino, R.J.; Ku, E.; Latner, N.; Sanderson, C.G.

    1998-07-01

    As part of the requirements of the Comprehensive Test Ban Treaty (CTBT), the Automated Radioxenon/Sampler Analyzer (ARSA) was designed and engineered by the Pacific Northwest National Laboratory (PNNL). The instrument is to provide near real-time detection and measurement of the radioxenons released into the atmosphere after a nuclear test. Forty-six field tests, designed to determine the performance of the ARSA prototype under simulated field conditions, were conducted at EML from March to December 1997. This final report contains detailed results of the tests with recommendations for improvements in instrument performance

  4. Field test of the PNNL Automated Radioxenon Sampler/Analyzer (ARSA)

    Energy Technology Data Exchange (ETDEWEB)

    Lagomarsino, R.J.; Ku, E.; Latner, N.; Sanderson, C.G.

    1998-07-01

    As part of the requirements of the Comprehensive Test Ban Treaty (CTBT), the Automated Radioxenon/Sampler Analyzer (ARSA) was designed and engineered by the Pacific Northwest National Laboratory (PNNL). The instrument is to provide near real-time detection and measurement of the radioxenons released into the atmosphere after a nuclear test. Forty-six field tests, designed to determine the performance of the ARSA prototype under simulated field conditions, were conducted at EML from March to December 1997. This final report contains detailed results of the tests with recommendations for improvements in instrument performance.

  5. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    Science.gov (United States)

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method is provided for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing. Correction windings are attached to selected positions on the housing and are energized by transformer action from secondary coils, which are inductively coupled to the poles of the electromagnets that confine the charged particle beam within a desired orbit as the particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies with the power supplied by the accelerating rf field to the particle beam, so the current in the energized correction coils cancels the eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  6. Automation of NLO QCD and EW corrections with Sherpa and Recola

    Energy Technology Data Exchange (ETDEWEB)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu [Universitaet Wuerzburg, Institut fuer Theoretische Physik und Astrophysik, Wuerzburg (Germany); Braeuer, Stephan; Schumann, Steffen [Georg-August Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Thompson, Jennifer M. [Universitaet Heidelberg, Institut fuer Theoretische Physik, Heidelberg (Germany)

    2017-07-15

    This publication presents the combination of the one-loop matrix-element generator Recola with the multipurpose Monte Carlo program Sherpa. Since both programs are highly automated, the resulting Sherpa +Recola framework allows for the computation of - in principle - any Standard Model process at both NLO QCD and EW accuracy. To illustrate this, three representative LHC processes have been computed at NLO QCD and EW: vector-boson production in association with jets, off-shell Z-boson pair production, and the production of a top-quark pair in association with a Higgs boson. In addition to fixed-order computations, when considering QCD corrections, all functionalities of Sherpa, i.e. particle decays, QCD parton showers, hadronisation, underlying events, etc. can be used in combination with Recola. This is demonstrated by the merging and matching of one-loop QCD matrix elements for Drell-Yan production in association with jets to the parton shower. The implementation is fully automatised, thus making it a perfect tool for both experimentalists and theorists who want to use state-of-the-art predictions at NLO accuracy. (orig.)

  7. Correction factors for clinical dosemeters used in large field dosimetry

    International Nuclear Information System (INIS)

    Campos, L.L.; Caldas, L.

    1989-08-01

    The determination of the absorbed dose in high-energy photon and electron beams by the user is carried out as a two-step procedure. First the ionization chamber is calibrated at a reference beam quality at a standards laboratory, and then the chamber is used to determine the absorbed dose in the user's beam. A number of conversion and correction factors have to be applied. Different sets of factors are needed depending on the physical quantity the calibration refers to, the calibration geometry and the chamber design. Another correction factor to be introduced for the absorbed dose determination in large fields refers to radiation effects on the stem, cable and sometimes connectors. A simple method is suggested for hospital physicists to follow during large-field dosimetry, in order to evaluate the radiation effects on cables and connectors and to determine correction factors for each system or geometry. (author) [pt

  8. Slow-roll corrections in multi-field inflation: a separate universes approach

    Science.gov (United States)

    Karčiauskas, Mindaugas; Kohri, Kazunori; Mori, Taro; White, Jonathan

    2018-05-01

    In view of cosmological parameters being measured to ever higher precision, theoretical predictions must also be computed to an equally high level of precision. In this work we investigate the impact on such predictions of relaxing some of the simplifying assumptions often used in these computations. In particular, we investigate the importance of slow-roll corrections in the computation of multi-field inflation observables, such as the amplitude of the scalar spectrum P_ζ, its spectral tilt n_s, the tensor-to-scalar ratio r and the non-Gaussianity parameter f_NL. To this end we use the separate universes approach and δN formalism, which allows us to consider slow-roll corrections to the non-Gaussianity of the primordial curvature perturbation as well as corrections to its two-point statistics. In the context of the δN expansion, we divide slow-roll corrections into two categories: those associated with calculating the correlation functions of the field perturbations on the initial flat hypersurface and those associated with determining the derivatives of the e-folding number with respect to the field values on the initial flat hypersurface. Using the results of Nakamura & Stewart '96, corrections of the first kind can be written in a compact form. Corrections of the second kind arise from using different levels of slow-roll approximation in solving for the super-horizon evolution, which in turn corresponds to using different levels of slow-roll approximation in the background equations of motion. We consider four different levels of approximation and apply the results to a few example models. The various approximations are also compared to exact numerical solutions.
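
    For reference, the tree-level δN expressions behind the quantities listed above (standard background for canonical fields at lowest order in slow roll, not the corrected results derived in the paper) read:

        \zeta = N_{,i}\,\delta\phi^i + \tfrac{1}{2}\,N_{,ij}\,\delta\phi^i\delta\phi^j + \cdots, \qquad
        P_\zeta \simeq \left(\frac{H}{2\pi}\right)^{2} N_{,i}N_{,i}, \qquad
        \frac{6}{5}\,f_{NL} = \frac{N_{,i}N_{,j}N_{,ij}}{\left(N_{,k}N_{,k}\right)^{2}},

    with N_{,i} the derivative of the e-folding number with respect to the field value φ^i on the initial flat hypersurface; the slow-roll corrections discussed in the abstract modify both the field correlators entering these sums and the N derivatives themselves.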

  9. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children's Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected
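
    One generic way to implement a spatially dependent background phase correction (not necessarily the vendor algorithm evaluated in this study) is to fit a low-order surface to the velocity map over static tissue and subtract it; a minimal sketch:

        # Sketch: fit a first-order 2-D polynomial to the velocity map over
        # static-tissue pixels and subtract it everywhere. Illustrative only.
        import numpy as np

        def background_correct(velocity, static_mask):
            yy, xx = np.indices(velocity.shape)
            A = np.column_stack([np.ones(static_mask.sum()),
                                 xx[static_mask], yy[static_mask]])
            coeffs, *_ = np.linalg.lstsq(A, velocity[static_mask], rcond=None)
            background = coeffs[0] + coeffs[1] * xx + coeffs[2] * yy
            return velocity - background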

  10. Error correcting circuit design with carbon nanotube field effect transistors

    Science.gov (United States)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error correcting circuit based on (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit application is proposed, and its error correction capability is analyzed. Performance of circuits implemented with CNTFETs and traditional MOSFETs respectively is also compared, and the former shows a 34.4% decrement of layout area and a 56.9% decrement of power consumption.
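
    A software sketch of the (7,4) Hamming encode/decode logic that such a circuit realizes in hardware, using one common systematic convention; it is included only to make the syndrome-based correction concrete.

        # (7,4) Hamming code: encode 4 data bits with 3 parity bits, then locate and
        # flip a single-bit error from the syndrome. Matrices follow one common
        # systematic convention (several equivalent choices exist).
        import numpy as np

        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=int)
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]], dtype=int)

        def encode(data_bits):
            return (np.array(data_bits) @ G) % 2

        def correct(received):
            r = np.array(received).copy()
            syndrome = (H @ r) % 2
            if syndrome.any():
                # The syndrome equals the column of H at the error position.
                error_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
                r[error_pos] ^= 1
            return r[:4]  # systematic code: data bits are the first four positions

        codeword = encode([1, 0, 1, 1])
        corrupted = codeword.copy()
        corrupted[5] ^= 1
        assert list(correct(corrupted)) == [1, 0, 1, 1]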

  11. Mean field with corrections in lattice gauge theory

    International Nuclear Information System (INIS)

    Flyvbjerg, H.; Zuber, J.B.; Lautrup, B.

    1981-12-01

    A systematic expansion of the path integral for lattice gauge theory is performed around the mean field solution. In this letter the authors present the results for the pure gauge groups Z(2), SU(2) and SO(3). The agreement with Monte Carlo calculations is excellent. For the discrete group the calculation is performed with and without gauge fixing, whereas for the continuous groups gauge fixing is mandatory. In the case of SU(2) the absence of a phase transition is correctly signalled by mean field theory. (Auth.)

  12. A few more comments on secularly growing loop corrections in strong electric fields

    International Nuclear Information System (INIS)

    Akhmedov, E.T.; Popov, F.K.

    2015-01-01

    We extend the observations of our previous paper http://dx.doi.org/10.1007/JHEP09(2014)071 [http://arxiv.org/abs/1405.5285]. In particular, we show that the secular growth of the loop corrections to the two-point correlation functions is gauge independent: we observe the same growth in the case of the static gauge for the constant background electric field. Furthermore we solve the kinetic equation describing photon production from the background fields, which was derived in our previous paper and allows one to sum up the leading secularly growing corrections from all loops. Finally, we show that in the constant electric field background the one-loop correction to the current of the produced pairs is not zero: it also grows with time and violates the time translational and reversal invariance of QED on the constant electric field background.

  13. The magnetic field for the ZEUS central detector - analysis and correction of the field measurement

    International Nuclear Information System (INIS)

    Mengel, S.

    1992-06-01

    The magnetic field in the central tracking region of the ZEUS detector - a facility to investigate highly energetic electron-proton collisions at the HERA collider at DESY Hamburg - is generated by a superconducting coil and reaches 18 kG (1.8 T). Some of the tracking devices, particularly the drift chambers in the proton forward and rear directions (FTD1-3 and RTD), are not fully contained within the coil and are therefore situated in a highly inhomogeneous magnetic field: the radial component B_r is up to 6.6 kG, and maximum gradients of 300 G/cm are found for ∂B_r/∂r. Evaluating the space-drift-time relation necessitates a detailed knowledge of the magnetic field. To reach this goal we analysed the field measurements and corrected them for systematic errors. The corrected data were compared with the field calculations (TOSCA maps). Measurements and calculations are confirmed by studying consistency with Maxwell's equations. The accuracy reached is better than 100 G throughout the forward and central drift chambers (FTD1-3, CTD) and better than 150 G in the RTD. (orig.) [de]

  14. Ideal flood field images for SPECT uniformity correction

    International Nuclear Information System (INIS)

    Oppenheim, B.E.; Appledorn, C.R.

    1984-01-01

    Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable, the equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted in 12 mm increments along the horizontal and vertical axes of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents camera plus collimator response to an almost perfectly uniform sheet source
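
    A one-dimensional toy version of the ratio method described above: when the source is shifted by one pixel, the same detector pixel views neighbouring source elements, so the camera response cancels in the count ratio and the relative source activity can be chained along the axis. This illustrates the principle only, not the full two-dimensional mapping.

        # 1-D sketch of ratio-based source mapping. counts_unshifted[x] = S(x)*A(x),
        # counts_shifted[x] = S(x)*A(x-1) (source moved by one pixel between the two
        # equal-time images); the sensitivity S(x) cancels in the ratio.
        import numpy as np

        def map_source_activity(counts_unshifted, counts_shifted):
            ratios = counts_unshifted[1:] / counts_shifted[1:]       # A(x)/A(x-1)
            activity = np.concatenate(([1.0], np.cumprod(ratios)))   # A(x)/A(0)
            return activity / activity.max()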

  15. Quantum corrections to scalar field dynamics in a slow-roll space-time

    Energy Technology Data Exchange (ETDEWEB)

    Herranen, Matti [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute,University of Copenhagen,Blegdamsvej 17, 2100 Copenhagen (Denmark); Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics,P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders [Faculty of Science and Technology, University of Stavanger, 4036 Stavanger (Norway)

    2014-05-07

    We consider the dynamics of a quantum scalar field in the background of a slow-roll inflating Universe. We compute the one-loop quantum corrections to the field and Friedmann equation of motion, in both a 1PI and a 2PI expansion, to leading order in slow-roll. Generalizing the works of http://dx.doi.org/10.1016/j.nuclphysb.2006.04.029, http://dx.doi.org/10.1103/PhysRevLett.107.191103, http://dx.doi.org/10.1103/PhysRevD.76.103507 and http://dx.doi.org/10.1016/j.nuclphysb.2006.04.010, we then solve these equations to compute the effect on the primordial power spectrum, for the case of a self-interacting inflaton and a self-interacting spectator field. We find that for the inflaton the corrections are negligible due to the smallness of the coupling constant despite the large IR enhancement of the loop contributions. For a curvaton scenario, on the other hand, we find tension in using the 1PI loop corrections, which may indicate that the quantum corrections could be non-perturbatively large in this case, thus requiring resummation.

  16. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Vickie E.; Borreguero, Jose M. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Bhowmik, Debsindhu [Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Ganesh, Panchapakesan; Sumpter, Bobby G. [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Proffen, Thomas E. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Goswami, Monojoy, E-mail: goswamim@ornl.gov [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States)

    2017-07-01

    Highlights: • An automated workflow to optimize force-field parameters. • Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to other experimental and simulation techniques. Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments to establish a connection between the fundamental physics at the nanoscale and data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail by using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D₂O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without the ND. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
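
    A toy version of the comparison loop at the heart of such a workflow: simulate an observable for each candidate force-field parameter and keep the value that best matches the neutron data. The run_md_simulation callable and measured_spectrum array are placeholders, not part of the published workflow.

        # Sketch: grid search over candidate FF parameters, scoring each candidate by
        # the squared mismatch between a simulated observable and the measured one.
        import numpy as np

        def optimize_ff_parameter(candidates, run_md_simulation, measured_spectrum):
            best_param, best_cost = None, np.inf
            for param in candidates:
                simulated = run_md_simulation(param)      # e.g. S(Q, omega) from MD
                cost = float(np.sum((simulated - measured_spectrum) ** 2))
                if cost < best_cost:
                    best_param, best_cost = param, cost
            return best_param, best_cost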

  17. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    International Nuclear Information System (INIS)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N

    2015-01-01

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference

  18. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class

  19. Automated Studies of Continuing Current in Lightning Flashes

    Science.gov (United States)

    Martinez-Claros, Jose

    Continuing current (CC) is a continuous luminosity in the lightning channel that lasts longer than 10 ms following a lightning return stroke to ground. Lightning flashes containing CC are associated with direct damage to power lines and are thought to be responsible for causing lightning-induced forest fires. The development of an algorithm that automates continuing current detection by combining NLDN (National Lightning Detection Network) and LEFA (Langmuir Electric Field Array) datasets for CG flashes will be discussed. The algorithm was applied to thousands of cloud-to-ground (CG) flashes within 40 km of Langmuir Lab, New Mexico measured during the 2013 monsoon season. It counts the number of flashes in a single minute of data and the number of return strokes of an individual lightning flash; records the time and location of each return stroke; performs peak analysis on E-field data; and uses the slope of linear fits to the interstroke interval (ISI) E-field data to recognize whether continuing current (CC) exists within the interval. Following CC detection, its duration and magnitude are measured. The longest observed CC in 5588 flashes was 631 ms. The performance of the algorithm (vs. human judgement) was checked on 100 flashes. At best, the reported algorithm is "correct" 80% of the time, where correct means that multiple stations agree with each other and with a human on both the presence and duration of CC. Of the 100 flashes that were validated against human judgement, 62% were hybrid. Automated analysis detects the first but misses the second return stroke in many cases where the second return stroke is followed by long CC. This problem is also present in human interpretation of field change records.
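
    A schematic of the slope test described above for a single interstroke interval of electric-field data; the thresholds are illustrative assumptions rather than the values tuned against human judgement in the study.

        # Sketch: flag continuing current in one interstroke interval when the slow
        # E-field change shows a sustained ramp rather than a flat interval.
        # slope_threshold and min_duration are illustrative assumptions.
        import numpy as np

        def has_continuing_current(t, efield, slope_threshold=5.0, min_duration=0.010):
            """t in seconds, efield in V/m, for one interstroke interval."""
            duration = t[-1] - t[0]
            if duration < min_duration:          # CC is defined as lasting > 10 ms
                return False, 0.0
            slope, _ = np.polyfit(t, efield, 1)  # linear fit to the ISI E-field data
            return abs(slope) > slope_threshold, duration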

  20. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    Science.gov (United States)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was less for auto-segmentation without versus auto-segmentation with post-processing, indicating convergence on a manually corrected model. An existent but marginal relative difference of electric field maps from models with manual correction versus those without was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in
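
    The Dice coefficient used above to compare segmentations with and without post-processing has a simple closed form; a minimal sketch for binary masks stored as boolean arrays:

        # Dice coefficient between two binary segmentation masks.
        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            intersection = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * intersection / (mask_a.sum() + mask_b.sum())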

  1. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas.

    Science.gov (United States)

    Timmons, Joshua J; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T

    2017-10-12

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was less for auto-segmentation without versus auto-segmentation with post-processing, indicating convergence on a manually corrected model. An existent but marginal relative difference of electric field maps from models with manual correction versus those without was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in

  2. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology.

    Science.gov (United States)

    Pak, Richard; McLaughlin, Anne Collins; Leidheiser, William; Rovira, Ericka

    2017-04-01

    A leading hypothesis to explain older adults' overdependence on automation is age-related decline in working memory; however, this had not been examined empirically. The purpose of the current experiment was to examine how working memory affected performance with different degrees of automation in older adults. In contrast to the well-supported idea that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails, older adults benefited from higher degrees of automation when the automation was correct and were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with whether the automation was correct or failed. When automation was correct, older adults with higher working memory ability performed better than those with lower ability; when automation was incorrect, all older adults performed poorly regardless of working memory ability. Practitioner Summary: The design of automation intended for older adults should focus on making the correctness of the automation apparent to the older user and on helping them recover when it is malfunctioning.

  3. Copula-based assimilation of radar and gauge information to derive bias-corrected precipitation fields

    Directory of Open Access Journals (Sweden)

    S. Vogl

    2012-07-01

    This study addresses the problem of combining radar information and gauge measurements. Gauge measurements are the best available source of absolute rainfall intensity, although their spatial availability is limited. Precipitation information obtained by radar reproduces the spatial patterns well but is biased in its absolute values.

    In this study copula models are used to describe the dependence structure between gauge observations and rainfall derived from radar reflectivity at the corresponding grid cells. After appropriate time series transformation to generate "iid" variates, only the positive pairs (radar > 0, gauge > 0) of the residuals are considered. As not every grid cell can be assigned to one gauge, the integration of point information, i.e. gauge rainfall intensities, is achieved by considering the structure and the strength of dependence between the radar pixels and all the gauges within the radar image. Two different approaches, namely Maximum Theta and Multiple Theta, are presented. They allow for generating precipitation fields that mimic the spatial patterns of the radar fields and correct them for biases in their absolute rainfall intensities. The performance of the approach, which can be seen as a bias correction for radar fields, is demonstrated for the Bavarian Alps. The bias-corrected rainfall fields are compared to a field of interpolated gauge values (ordinary kriging) and are validated with available gauge measurements. The simulated precipitation fields are also compared to an operationally corrected radar precipitation field (RADOLAN). The copula-based approach performs similarly well, as indicated by different validation measures, and successfully corrects for errors in the radar precipitation.
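
    As a hedged illustration of the copula idea, the sketch below estimates an Archimedean copula parameter theta for one radar-gauge pair from Kendall's tau of the positive pairs; the copula family, the moment-type estimator and the synthetic data are assumptions, and the paper's Maximum Theta / Multiple Theta schemes are not reproduced.

```python
import numpy as np
from scipy.stats import kendalltau

def pair_theta(radar, gauge, family="gumbel"):
    """Estimate an Archimedean copula parameter from paired positive radar/gauge
    rainfall values via Kendall's tau (moment-type estimator). Illustrative only;
    not the dependence model actually fitted in the study."""
    pos = (radar > 0) & (gauge > 0)                 # keep positive pairs only
    tau, _ = kendalltau(radar[pos], gauge[pos])
    if family == "clayton":
        return 2.0 * tau / (1.0 - tau)              # Clayton: tau = theta/(theta+2)
    return 1.0 / (1.0 - tau)                        # Gumbel: tau = 1 - 1/theta

# toy example with correlated synthetic intensities (radar biased but dependent)
rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 2.0, 5000)
radar = gauge * rng.lognormal(0.0, 0.3, 5000)
print(round(pair_theta(radar, gauge), 2))
```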

  4. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
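
    A minimal sketch of the multi-scale idea, assuming standard Gaussian smoothing from SciPy; the scales, weights and gamma value are illustrative placeholders rather than the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_bias_correct(img, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
    """Rough sketch: image details (original minus Gaussian-smoothed copies) are
    recombined with per-layer weights, which suppresses the slowly varying bias
    field; a gamma step then restores contrast. All parameters are placeholders."""
    img = img.astype(float)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    detail = np.zeros_like(img)
    for s, w in zip(sigmas, weights):
        detail += w * (img - gaussian_filter(img, s))   # layer-wise detail
    detail -= detail.min()
    detail /= detail.max() + 1e-12                       # normalise to [0, 1]
    return detail ** gamma                               # gamma (contrast) correction

# toy example: a flat phantom corrupted by a smooth multiplicative bias field
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
biased = 100.0 * (1.0 + 0.4 * x) + 10.0 * (np.hypot(x, y) < 0.5)
print(multiscale_bias_correct(biased).shape)
```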

  5. Error field and its correction strategy in tokamaks

    International Nuclear Information System (INIS)

    In, Yongkyoon

    2014-01-01

    While error field correction (EFC) is intended to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application is intended to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response to non-axisymmetric fields increases with beta, feedback-controlled EFC is the more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of EFC strategy is briefly discussed. (author)

  6. [Transient elevation of intraocular pressure in primary open-angle glaucoma patients after automated visual field examination in the winter].

    Science.gov (United States)

    Nishino, Kazuaki; Yoshida, Fujiko; Nitta, Akari; Saito, Mieko; Saito, Kazuuchi

    2013-12-01

    To evaluate retrospectively the seasonal fluctuation of transient intraocular pressure (IOP) elevation after automated visual field examination in patients with primary open-angle glaucoma (POAG), we reviewed 53 consecutive patients with POAG who visited Kaimeido Ophthalmic and Dental Clinic from January 2011 to March 2013: 21 men and 32 women aged 67.7 +/- 11.2 years. The patients were divided into four groups (spring, summer, autumn, and winter) according to the month of the automated visual field examination, and both eyes of each patient were enrolled. IOP was measured immediately after the automated visual field examination (vf IOP) and compared with the average IOP from the previous 3 months (pre IOP) and with the average IOP from the following 3 months (post IOP) in each season. The IOP elevation rate was defined as (vf IOP - pre IOP)/pre IOP x 100% and calculated for each season (paired t test). Additionally, the correlation between mean deviation (MD) and IOP elevation rate was evaluated (single regression analysis). Exclusion criteria were cataract surgery during the study period or a history of any previous glaucoma surgery. The automated visual field test was performed with a Humphrey field analyzer and the Central 30-2 FASTPAC threshold program. The average vf IOP was 14.5 +/- 2.5 mmHg, higher than the pre IOP of 13.8 +/- 2.4 mmHg; transient IOP elevation thus occurred after automated visual field examination, especially in the winter but not in the summer.

  7. Feasibility study for the computerized automation of the Annapolis Field Office of EPA region III

    International Nuclear Information System (INIS)

    Ames, H.S.; Barton, G.W. Jr.; Bystroff, R.I.; Crawford, R.W.; Kray, A.M.; Maples, M.D.

    1976-08-01

    This report describes a feasibility study for computerized automation of the Annapolis Field Office (AFO) of EPA's Region III. The AFO laboratory provides analytical support for a number of EPA divisions; its primary function at present is analysis of water samples from rivers, estuaries, and the ocean in the Chesapeake Bay area. Automation of the AFO laboratory is found to be not only feasible but also highly desirable. An automation system is proposed which will give major improvements in analytical capacity, quality control, sample management, and reporting capabilities. This system is similar to the LLL-developed automation systems already installed at other EPA laboratories, with modifications specific to the needs of the AFO laboratory and the addition of sample file control. It is estimated that the initial cost of the system, nearly $300,000, would be recouped in about three years by virtue of the increased capacity and efficiency of operation

  8. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  9. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides the most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic

  10. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only be turned to profit sufficiently and correctly, however, if the application of these equipment techniques is further improved and if its scope stands in a certain ratio to a definite efficiency. (orig.) [de

  11. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  12. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    Science.gov (United States)

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
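
    A minimal sketch of how such a site-specific correction could be fitted from paired filtered/unfiltered fDOM and turbidity data, assuming a simple exponential attenuation model; the variable names and numbers are hypothetical and do not reproduce the equation developed in the study.

```python
import numpy as np

# Hypothetical paired observations: fDOM of unfiltered water (attenuated by
# particles), fDOM of the same water filtered, and turbidity in FNU.
turb = np.array([10., 50., 100., 200., 400., 700.])
fdom_unfilt = np.array([48., 44., 40., 33., 24., 16.])
fdom_filt = np.array([50., 50., 50., 50., 50., 50.])

# Fraction of the signal surviving particle interference, modelled here as
# exp(-k * turbidity); k is fit from the paired data (illustrative only).
atten = fdom_unfilt / fdom_filt
k = -np.polyfit(turb, np.log(atten), 1)[0]

def correct_fdom(fdom_raw, turbidity):
    """Apply the site-specific correction: undo the fitted attenuation."""
    return fdom_raw * np.exp(k * turbidity)

print(round(k, 5), round(correct_fdom(24.0, 400.0), 1))
```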

  13. Short wavelength automated perimetry can detect visual field changes in diabetic patients without retinopathy

    Directory of Open Access Journals (Sweden)

    Othman Ali Zico

    2014-01-01

    Purpose: The purpose of this study is to compare short wave automated perimetry (SWAP) versus standard automated perimetry (SAP) for early detection of diabetic retinopathy (DR). Materials and Methods: A total of 40 diabetic patients, divided into group I without DR (20 patients = 40 eyes) and group II with mild non-proliferative DR (20 patients = 40 eyes), were included. They were tested with the central 24-2 threshold test with both short-wave and standard automated perimetry to compare sensitivity values and local visual field indices. A total of 20 healthy age- and gender-matched subjects were assessed as a control group. Results: The control group showed no differences between SWAP and SAP regarding mean deviation (MD), corrected pattern standard deviation (CPSD) or short fluctuations (SF). In group I, MD showed significantly more deflection in SWAP (−4.44 ± 2.02 dB) compared to SAP (−0.96 ± 1.81 dB) (P = 0.000002), whereas CPSD and SF were not different between SWAP and SAP. In group II, MD and SF showed significantly different values in SWAP (−5.75 ± 3.11 dB and 2.0 ± 0.95) compared to SAP (−3.91 ± 2.87 dB and 2.86 ± 1.23) (P = 0.01 and 0.006, respectively); there were no differences in CPSD between SWAP and SAP. The SWAP technique was significantly more sensitive than SAP in patients without retinopathy, but no difference exists between the two techniques in patients with non-proliferative DR. Conclusion: The SWAP technique has a higher yield and efficacy in picking up abnormal findings in diabetic patients without overt retinopathy rather than in patients with clinical retinopathy.

  14. Automated Critical Peak Pricing Field Tests: Program Description and Results

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David; Motegi, Naoya; Kiliccote, Sila; Xu, Peng

    2006-04-06

    California utilities have been exploring the use of critical peak prices (CPP) to help reduce needle peaks in customer end-use loads. CPP is a form of price-responsive demand response (DR). Recent experience has shown that customers have limited knowledge of how to operate their facilities in order to reduce their electricity costs under CPP (Quantum 2004). While the lack of knowledge about how to develop and implement DR control strategies is a barrier to participation in DR programs like CPP, another barrier is the lack of automation of DR systems. During 2003 and 2004, the PIER Demand Response Research Center (DRRC) conducted a series of tests of fully automated electric demand response (Auto-DR) at 18 facilities. Overall, the average of the site-specific average coincident demand reductions was 8% from a variety of building types and facilities. Many electricity customers have suggested that automation will help them institutionalize their electric demand savings and improve their overall response and DR repeatability. This report focuses on and discusses the specific results of the Automated Critical Peak Pricing (Auto-CPP, a specific type of Auto-DR) tests that took place during 2005, which build on the automated demand response (Auto-DR) research conducted through PIER and the DRRC in 2003 and 2004. The long-term goal of this project is to understand the technical opportunities of automating demand response and to remove technical and market impediments to large-scale implementation of automated demand response (Auto-DR) in buildings and industry. A second goal of this research is to understand and identify best practices for DR strategies and opportunities. The specific objectives of the Automated Critical Peak Pricing test were as follows: (1) Demonstrate how an automated notification system for critical peak pricing can be used in large commercial facilities for demand response (DR). (2) Evaluate effectiveness of such a system. (3) Determine how customers

  15. Space environments and their effects on space automation and robotics

    Science.gov (United States)

    Garrett, Henry B.

    1990-01-01

    Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.

  16. Development of an automated asbestos counting software based on fluorescence microscopy.

    Science.gov (United States)

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.

  17. A median filter approach for correcting errors in a vector field

    Science.gov (United States)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
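
    A minimal sketch of the median-filter idea applied to a gridded wind field, assuming SciPy's median_filter; the 3x3 window and threshold are placeholders, and the NSCAT-specific detection criteria are not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_vector_field(u, v, thresh=2.0):
    """Detect and replace spurious vectors: a vector whose deviation from the
    local (3x3) median exceeds `thresh` is replaced by that median."""
    u_med, v_med = median_filter(u, 3), median_filter(v, 3)
    bad = np.hypot(u - u_med, v - v_med) > thresh
    u_out, v_out = u.copy(), v.copy()
    u_out[bad], v_out[bad] = u_med[bad], v_med[bad]
    return u_out, v_out, bad

# toy wind field with two spurious vectors inserted
u = np.full((8, 8), 5.0); v = np.full((8, 8), -3.0)
u[2, 2], v[5, 6] = 50.0, 40.0
_, _, flagged = correct_vector_field(u, v)
print(int(flagged.sum()))   # -> 2
```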

  18. Depolarization corrections to the coercive field in thin-film ferroelectrics

    International Nuclear Information System (INIS)

    Dawber, M; Chandra, P; Littlewood, P B; Scott, J F

    2003-01-01

    Empirically, the coercive field needed to reverse the polarization in a ferroelectric increases with decreasing film thickness. For ferroelectric films of 100 μm to 100 nm in thickness the coercive field has been successfully described by a semi-empirical scaling law. Accounting for depolarization corrections, we show that this scaling behaviour is consistent with field measurements of ultrathin ferroelectric capacitors down to one nanometre in film thickness. Our results also indicate that the minimum film thickness, determined by a polarization instability, can be tuned by the choice of electrodes, and recommendations for next-generation ferroelectric devices are discussed. (letter to the editor)

  19. Depolarization corrections to the coercive field in thin-film ferroelectrics

    CERN Document Server

    Dawber, M; Littlewood, P B; Scott, J F

    2003-01-01

    Empirically, the coercive field needed to reverse the polarization in a ferroelectric increases with decreasing film thickness. For ferroelectric films of 100 μm to 100 nm in thickness the coercive field has been successfully described by a semi-empirical scaling law. Accounting for depolarization corrections, we show that this scaling behaviour is consistent with field measurements of ultrathin ferroelectric capacitors down to one nanometre in film thickness. Our results also indicate that the minimum film thickness, determined by a polarization instability, can be tuned by the choice of electrodes, and recommendations for next-generation ferroelectric devices are discussed. (letter to the editor)
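
    The semi-empirical thickness scaling referred to in these records is commonly quoted in the Kay-Dunn form; the relation below is added from general knowledge of that literature and is not stated explicitly in the record itself.

```latex
% Semi-empirical thickness scaling of the coercive field (Kay--Dunn form);
% the depolarization-corrected analysis in the paper modifies the field
% entering this relation, which is not reproduced here.
\begin{equation}
  E_c(d) \;\propto\; d^{-2/3},
\end{equation}
% where $d$ is the ferroelectric film thickness and $E_c$ the coercive field.
```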

  20. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    Science.gov (United States)

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between the CT and the dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registers the CT image to each dynamic PET frame, then re-reconstructs the PET frames with CT-based attenuation correction, and finally re-aligns all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient studies, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. Translation along the z direction (average maximum = 5.32 mm) and rotation about the x axis (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC, and significant differences were found between the uncorrected and corrected data. Movement correction of dynamic brain FDDNP and FDG PET/CT scans could therefore improve the qualitative and quantitative aspects of images of both tracers.
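
    A deliberately simplified stand-in for the retrospective movement correction, aligning frames by their intensity centroids with SciPy; the published method instead performs CT-to-PET rigid registration and re-reconstruction with CT-based attenuation correction, none of which is reproduced here.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def align_frames(frames, ref_index=0):
    """Translate each dynamic frame so its intensity centroid matches the
    reference frame (a crude proxy for frame-wise rigid realignment)."""
    ref_com = np.array(center_of_mass(frames[ref_index]))
    aligned = []
    for frame in frames:
        offset = ref_com - np.array(center_of_mass(frame))
        aligned.append(shift(frame, offset, order=1, mode="nearest"))
    return np.stack(aligned)

# toy example: three 2D "frames", the last one displaced by a few pixels
base = np.zeros((64, 64)); base[20:40, 20:40] = 1.0
frames = [base, base, np.roll(base, (5, -3), axis=(0, 1))]
print(align_frames(frames).shape)
```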

  1. Advanced health monitor for automated driving functions

    NARCIS (Netherlands)

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  2. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu = ±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  3. Error Correcting Codes

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 2, Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Volume 2, Issue 3, March ... Author Affiliations: Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  4. Quark number density and susceptibility calculation with one correction in mean field potential

    International Nuclear Information System (INIS)

    Singh, S. Somorendro

    2016-01-01

    We calculate the quark number density and susceptibility of a model that includes a one-loop correction in the mean field potential. The calculation shows a continuous increase in the number density and susceptibility up to a temperature of T = 0.4 GeV; at higher temperatures, the values of the number density and susceptibility approach the lattice results. The results indicate that the calculated values of the model fit well and approach the lattice data with increasing temperature when the one-loop correction is included in the mean field potential. (author)

  5. Effects of Field-Map Distortion Correction on Resting State Functional Connectivity MRI

    Directory of Open Access Journals (Sweden)

    Hiroki Togo

    2017-12-01

    Magnetic field inhomogeneities cause geometric distortions of the echo planar images used for functional magnetic resonance imaging (fMRI). To reduce this problem, distortion correction (DC) with a field map is widely used for both task and resting-state fMRI (rs-fMRI). Although DC with a field map has been reported to improve the quality of task fMRI, little is known about its effects on rs-fMRI. Here, we tested the influence of field-map DC on rs-fMRI results using two rs-fMRI datasets derived from 40 healthy subjects: one with DC (DC+) and the other without correction (DC−). Independent component analysis followed by the dual regression approach was used for evaluation of resting-state functional connectivity networks (RSN). We also obtained the ratio of low-frequency to high-frequency signal power (0.01–0.1 Hz and above 0.1 Hz, respectively; LFHF ratio) to assess the quality of rs-fMRI signals. Comparing RSN between the DC+ and DC− datasets, the default mode network showed more robust functional connectivity in the DC+ dataset than in the DC− dataset. The basal ganglia RSN showed some decreases in functional connectivity, primarily in white matter, indicating imperfect registration/normalization without DC. Supplementary seed-based and simulation analyses supported the utility of DC. Furthermore, we found a higher LFHF ratio after field map correction in the anterior cingulate cortex, posterior cingulate cortex, ventral striatum, and cerebellum. In conclusion, field map DC improved detection of functional connectivity derived from low-frequency rs-fMRI signals. We encourage researchers to include a DC step in the preprocessing pipeline of rs-fMRI analysis.
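
    A minimal sketch of the LFHF quality index as defined in the abstract (power in 0.01–0.1 Hz divided by power above 0.1 Hz), assuming a Welch periodogram; the TR, segment length and toy time series are placeholders.

```python
import numpy as np
from scipy.signal import welch

def lfhf_ratio(ts, tr=2.0, split_hz=0.1, low_hz=0.01):
    """Ratio of low-frequency (0.01-0.1 Hz) to high-frequency (>0.1 Hz) power
    of an rs-fMRI time series. tr is the repetition time in seconds."""
    f, pxx = welch(ts, fs=1.0 / tr, nperseg=min(len(ts), 128))
    low = pxx[(f >= low_hz) & (f <= split_hz)].sum()
    high = pxx[f > split_hz].sum()
    return low / high

# toy example: 300 volumes at TR = 2 s with a slow oscillation plus noise
t = np.arange(300) * 2.0
ts = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.random.randn(t.size)
print(round(lfhf_ratio(ts), 2))
```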

  6. Automated operation and management of the oil fields in Western Siberia

    Energy Technology Data Exchange (ETDEWEB)

    Guernault, P.; Valleur, M.

    1979-11-01

    In October 1978, Technip signed a contract worth 850 mf with the Soviet central purchasing organization, Mashinoimport, for the design and construction of 2 large complexes intended to improve the production of the Soviet Samotlor and Fyodorovsk fields. These fields are located in West Siberia near the towns of Nijnivartovsk and Surgut, in the OB Valley, approximately 600 km south of the Arctic Circle. They are among the largest in the Soviet Union. The present output of the Samotlor field exceeds 100 mt/yr; the 2 fields taken together comprise 2400 wells in the final stage, spread over an area of 2000 sq km. These installations thus are the largest to be designed to date with the gas lift method: i.e., the reinjection of high pressure gas into the production string. They make use of high performance compressor plants but their main feature is above all their very high level of automation.

  7. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. The Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of the MR image bias field.
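
    A hedged sketch of a particle swarm with an adaptive inertia weight driven by a simple premature-convergence indicator (the spread of particle fitness); the paper's specific indicator and its Legendre-polynomial bias-field model are not reproduced, and all coefficients are placeholders.

```python
import numpy as np

def adaptive_pso(cost, dim, n=30, iters=200, w_min=0.4, w_max=0.9):
    """Particle swarm whose inertia weight rises when the swarm's fitness
    spread collapses (premature convergence) and falls otherwise."""
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        spread = pval.std() / (abs(pval.mean()) + 1e-12)   # convergence indicator
        w = w_min + (w_max - w_min) * np.exp(-spread)      # adaptive inertia weight
        r1, r2 = rng.random((2, n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
        val = np.array([cost(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

# toy example: minimise a 4-dimensional sphere function
print(adaptive_pso(lambda p: float(np.sum(p ** 2)), dim=4)[1])
```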

  8. TLS FIELD DATA BASED INTENSITY CORRECTION FOR FOREST ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    J. Heinzel

    2016-06-01

    Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three-dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The benefit of the signal intensity is that the laser wavelength of most scanners lies within the near infrared (NIR), which is highly indicative of various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner-specific factors. Since details about system-internal alteration of the signal are often unknown to the user, model-driven approaches are impractical. On the other hand, existing data-driven calibration procedures require laborious acquisition of separate reference datasets or areas of homogeneous reflection characteristics from the field data. In order to fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on the variation over range and on sensor-specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time series detrending is presented, with the only pre-condition being a roughly even distribution of forest objects and materials over range. Our test data cover 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range- and scanner-based alterations of the intensity.
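
    A minimal sketch of the relative, data-driven detrending idea: fit the range dependence of the recorded intensity with a low-order polynomial and divide it out, assuming the roughly even distribution of materials over range stated above; the polynomial degree and toy falloff are assumptions.

```python
import numpy as np

def detrend_intensity(range_m, intensity, deg=3):
    """Relative (not absolute) intensity correction: remove the fitted
    range-dependent trend and rescale to the mean intensity level."""
    coeff = np.polyfit(range_m, intensity, deg)
    trend = np.polyval(coeff, range_m)
    return intensity * trend.mean() / trend

# toy example: constant-reflectance targets with a fabricated range falloff
r = np.linspace(1.0, 30.0, 500)
raw = 0.8 / (1.0 + 0.05 * r) + 0.01 * np.random.randn(r.size)
print(np.round(detrend_intensity(r, raw)[:3], 2))
```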

  9. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and a guided image filter (GIF). Firstly, preprocessing including stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct the intensity inhomogeneity and detail discontinuity of the image. Next, the HDR pathological image is generated based on a least squares method using the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after the detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method compared with related work.

  10. Local field corrections in the lattice dynamics of chromium | Ndukwe ...

    African Journals Online (AJOL)

    This work extends the inclusion of local field corrections in the calculation of the phonon dispersion curves to the transition metal, chromium (Cr3+) using the formalism of lattice dynamics based on the transition metal model potential approach in the adiabatic and hatmonic approximations. The results obtained here have a ...

  11. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  12. Automated Identification of Initial Storm Electrification and End-of-Storm Electrification Using Electric Field Mill Sensors

    Science.gov (United States)

    Maier, Launa M.; Huddleston, Lisa L.

    2017-01-01

    Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities across the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will be warning on a grid to minimize under-warnings associated with not being located in the center of the warning area and over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper will demonstrate improved performance of an enveloping algorithm of the electric field mill data as compared with the electric field zero crossing to identify initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms will also be analyzed to identify performance improvement, if any, when compared with 30 minutes after the last lightning flash.
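
    A hedged sketch contrasting a zero-crossing onset detector with a simple rolling max-min envelope test on a synthetic field-mill record; the window, threshold and toy storm profile are placeholders, not the operational KSC algorithm or its settings.

```python
import numpy as np

def electrification_onset(t, efield, window=300, dev_thresh=0.5):
    """Return (zero-crossing onset time, envelope onset time) for one record.
    The envelope test fires when the rolling max-min spread exceeds dev_thresh
    (kV/m); the zero-crossing test waits for the first polarity reversal."""
    sign = np.signbit(efield).astype(int)
    zc = np.where(np.diff(sign) != 0)[0]
    t_zero = t[zc[0]] if zc.size else None
    t_env = None
    for i in range(window, len(efield)):
        seg = efield[i - window:i]
        if seg.max() - seg.min() > dev_thresh:
            t_env = t[i]
            break
    return t_zero, t_env

# toy record (1 Hz samples): fair-weather field, a positive excursion as the
# storm electrifies, then the later polarity reversal
rng = np.random.default_rng(2)
t = np.arange(1800.0)
e = 0.1 + 0.02 * rng.standard_normal(t.size)
e[900:1200] += np.linspace(0.0, 2.0, 300)
e[1200:] += 2.0 - np.linspace(0.0, 6.0, 600)
print(electrification_onset(t, e))   # envelope fires well before the zero crossing
```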

  13. Coulomb’s law corrections and fermion field localization in a tachyonic de Sitter thick braneworld

    International Nuclear Information System (INIS)

    Cartas-Fuentevilla, Roberto; Escalante, Alberto; Germán, Gabriel; Herrera-Aguilar, Alfredo; Mora-Luna, Refugio Rigel

    2016-01-01

    Following recent studies which show that it is possible to localize gravity as well as scalar and gauge vector fields in a tachyonic de Sitter thick braneworld, we investigate the solution of the gauge hierarchy problem, the localization of fermion fields in this model, the recovering of the Coulomb law on the non-relativistic limit of the Yukawa interaction between bulk fermions and gauge bosons localized in the brane, and confront the predicted 5D corrections to the photon mass with its upper experimental/observational bounds, finding the model physically viable since it passes these tests. In order to achieve the latter aims we first consider the Yukawa interaction term between the fermionic and the tachyonic scalar fields MF(T)ΨΨ-bar in the action and analyze four distinct tachyonic functions F(T) that lead to four different structures of the respective fermionic mass spectra with different physics. In particular, localization of the massless left-chiral fermion zero mode is possible for three of these cases. We further analyze the phenomenology of these Yukawa interactions among fermion fields and gauge bosons localized on the brane and obtain the crucial and necessary information to compute the corrections to Coulomb’s law coming from massive KK vector modes in the non-relativistic limit. These corrections are exponentially suppressed due to the presence of the mass gap in the mass spectrum of the bulk gauge vector field. From our results we conclude that corrections to Coulomb’s law in the thin brane limit have the same form (up to a numerical factor) as far as the left-chiral massless fermion field is localized on the brane. Finally we compute the corrections to the Coulomb’s law for an arbitrarily thick brane scenario which can be interpreted as 5D corrections to the photon mass. By performing consistent estimations with brane phenomenology, we found that the predicted corrections to the photon mass, which are well bounded by the experimentally

  14. Coulomb’s law corrections and fermion field localization in a tachyonic de Sitter thick braneworld

    Energy Technology Data Exchange (ETDEWEB)

    Cartas-Fuentevilla, Roberto; Escalante, Alberto [Instituto de Física, Benemérita Universidad Autónoma de Puebla, Apdo. postal J-48, 72570 Puebla, Pue. (Mexico); Germán, Gabriel [Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Apdo. Postal 48-3, 62251 Cuernavaca, Morelos (Mexico); Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford, OX1 3NP (United Kingdom); Herrera-Aguilar, Alfredo [Instituto de Física, Benemérita Universidad Autónoma de Puebla, Apdo. postal J-48, 72570 Puebla, Pue. (Mexico); Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Edificio C-3, Ciudad Universitaria, CP 58040, Morelia, Michoacán (Mexico); Mora-Luna, Refugio Rigel [Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Apdo. Postal 48-3, 62251 Cuernavaca, Morelos (Mexico)

    2016-05-11

    Following recent studies which show that it is possible to localize gravity as well as scalar and gauge vector fields in a tachyonic de Sitter thick braneworld, we investigate the solution of the gauge hierarchy problem, the localization of fermion fields in this model, the recovering of the Coulomb law on the non-relativistic limit of the Yukawa interaction between bulk fermions and gauge bosons localized in the brane, and confront the predicted 5D corrections to the photon mass with its upper experimental/observational bounds, finding the model physically viable since it passes these tests. In order to achieve the latter aims we first consider the Yukawa interaction term between the fermionic and the tachyonic scalar fields MF(T)ΨΨ-bar in the action and analyze four distinct tachyonic functions F(T) that lead to four different structures of the respective fermionic mass spectra with different physics. In particular, localization of the massless left-chiral fermion zero mode is possible for three of these cases. We further analyze the phenomenology of these Yukawa interactions among fermion fields and gauge bosons localized on the brane and obtain the crucial and necessary information to compute the corrections to Coulomb’s law coming from massive KK vector modes in the non-relativistic limit. These corrections are exponentially suppressed due to the presence of the mass gap in the mass spectrum of the bulk gauge vector field. From our results we conclude that corrections to Coulomb’s law in the thin brane limit have the same form (up to a numerical factor) as far as the left-chiral massless fermion field is localized on the brane. Finally we compute the corrections to the Coulomb’s law for an arbitrarily thick brane scenario which can be interpreted as 5D corrections to the photon mass. By performing consistent estimations with brane phenomenology, we found that the predicted corrections to the photon mass, which are well bounded by the experimentally

  15. Correction of inhomogeneous RF field using multiple SPGR signals for high-field spin-echo MRI

    International Nuclear Information System (INIS)

    Ishimori, Yoshiyuki; Monma, Masahiko; Yamada, Kazuhiro; Kimura, Hirohiko; Uematsu, Hidemasa; Fujiwara, Yasuhiro; Yamaguchi, Isao

    2007-01-01

    The purpose of this study was to propose a simple and useful method for correcting nonuniformity of high-field (3 Tesla) T1-weighted spin-echo (SE) images based on a B1 field map estimated from gradient recalled echo (GRE) signals. To estimate the B1 inhomogeneity, spoiled gradient recalled echo (SPGR) images were collected using a fixed repetition time of 70 ms, flip angles of 45 and 90 degrees, and echo times of 4.8 and 10.4 ms. The selection of flip angles was based on the observation that the relative intensity changes in SPGR signals are very similar among different tissues at flip angles larger than the Ernst angle. Accordingly, spatial irregularity observed on a signal ratio map of the SPGR images acquired with these two flip angles was ascribed to inhomogeneity of the B1 field. A dual echo time was used to eliminate T2* effects. The acquired ratio map was scaled to provide an intensity correction map for SE images. Both phantom and volunteer studies were performed using a 3 T magnetic resonance scanner to validate the method. In the phantom study, the uniformity of the T1-weighted SE image improved by 23%. Images of human heads also showed practically sufficient improvement in image uniformity. The present method improves the image uniformity of high-field T1-weighted SE images. (author)
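
    A minimal sketch of the ratio-map idea using an ideal SPGR signal model: the smoothed, mean-normalised ratio of the 45- and 90-degree images serves as a multiplicative correction map; the signal equation, smoothing width and toy phantom are illustrative, and the paper's exact scaling is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spgr(alpha_deg, b1, t1_ms=1000.0, tr_ms=70.0, m0=1.0):
    """Ideal SPGR signal; B1 scales the effective flip angle (T2* handled upstream)."""
    e1 = np.exp(-tr_ms / t1_ms)
    a = np.deg2rad(alpha_deg) * b1
    return m0 * np.sin(a) * (1.0 - e1) / (1.0 - np.cos(a) * e1)

def correction_map(spgr45, spgr90, smooth_sigma=8.0):
    """Smoothed, mean-normalised 45/90-degree SPGR ratio, used as a
    multiplicative intensity-correction map for the SE image."""
    ratio = gaussian_filter(spgr45 / np.clip(spgr90, 1e-9, None), smooth_sigma)
    return ratio / ratio.mean()

# toy example: smooth B1 variation of +-30% across a 128x128 slice
x, _ = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
b1 = 1.0 + 0.3 * x
corr = correction_map(spgr(45.0, b1), spgr(90.0, b1))
se_phantom = 100.0 * corr            # SE image shaded the same way (toy assumption)
corrected = se_phantom / corr        # shading removed by the correction map
print(round(float(corrected.std()), 3))
```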

  16. Building automation and perceived control : a field study on motorized exterior blinds in Dutch offices

    NARCIS (Netherlands)

    Meerbeek, B.W.; te Kulve, Marije; Gritti, T.; Aarts, M.P.J.; Loenen, van E.J.; Aarts, E.H.L.

    2014-01-01

    As a result of the technological advances and increasing focus on energy efficient buildings, simple forms of building automation including automatic motorized blinds systems found their ways into today's office environments. In a five-month field study, qualitative and quantitative methods were

  17. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System.

    Science.gov (United States)

    Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  18. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    Directory of Open Access Journals (Sweden)

    Fiona A. Desland

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  19. Numerical correction of distorted images in full-field optical coherence tomography

    Science.gov (United States)

    Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha

    2012-03-01

    We propose a numerical method that can correct the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. We show that the FF-OCT image of a deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the focal plane of the imaging system becomes separated from the imaging plane of the coherence-gated system owing to the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits distinct identification of the melanin granules inside the cortex layer of the hair shaft.
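
    A minimal sketch of numerical refocusing by angular-spectrum propagation, used here as a stand-in for the Fresnel-Kirchhoff correction described above; the propagation distance, wavelength and pixel size are placeholders chosen only for the toy example.

```python
import numpy as np

def refocus(field, dz, wavelength, pixel):
    """Propagate a complex en face field by a defocus distance dz using the
    angular-spectrum form of scalar diffraction (evanescent part clipped)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, pixel)
    fy = np.fft.fftfreq(ny, pixel)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2.0 * np.pi * np.sqrt(np.clip(arg, 0.0, None))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# toy example: a point-like reflector defocused by 20 um and refocused back
n, pixel, wl = 256, 0.5e-6, 0.8e-6
psf = np.zeros((n, n), complex); psf[n // 2, n // 2] = 1.0
blurred = refocus(psf, 20e-6, wl, pixel)
sharp = refocus(blurred, -20e-6, wl, pixel)
print(round(float(np.abs(sharp).max()), 3))   # close to 1.0 again
```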

  20. Influence and Correction from the Human Body on the Measurement of a Power-Frequency Electric Field Sensor

    Directory of Open Access Journals (Sweden)

    Dongping Xiao

    2016-06-01

    According to the operating specifications of existing electric field measuring instruments, measuring technicians must be located far from the instruments to eliminate the influence of human body occupancy on the spatial electric field. Nevertheless, in order to develop a portable safety protection instrument with an effective electric field warning function for working staff in a high-voltage environment, it is necessary to study the influence of an approaching human body on the measurement of the electric field and to correct the measurement results. A single-shaft electric field measuring instrument called the Type LP-2000, which was developed by our research team, is used as the research object in this study. First, we explain the principle of electric field measurement and describe the capacitance effect produced by the human body. Through a theoretical analysis, we show that the measured electric field value decreases as a human body approaches, and that the relationship is linearly proportional. The ratio is then identified as a correction coefficient to correct for the influence of human body proximity. The conclusion drawn from the theoretical analysis is proved via simulation. The correction coefficient kb = 1.8010 is obtained on the basis of a linear fitting of the simulated data. Finally, a physical experiment is performed. When no human is present, we compare the results from the Type LP-2000 measured with a Narda EFA-300 and the simulated value to verify the accuracy of the Type LP-2000. For the case of an approaching human body, the correction coefficient kb* = 1.9094 is obtained by comparing the data measured with the Type LP-2000 to the simulated value. The correction coefficient obtained from the experiment (i.e., kb*) is highly consistent with that obtained from the simulation (i.e., kb). Two experimental programs are set; under these programs, the excitation voltages and distance measuring points are regulated to produce different

  1. Influence and Correction from the Human Body on the Measurement of a Power-Frequency Electric Field Sensor.

    Science.gov (United States)

    Xiao, Dongping; Liu, Huaitong; Zhou, Qiang; Xie, Yutong; Ma, Qichao

    2016-06-10

    According to the operating specifications of existing electric field measuring instruments, measuring technicians must be located far from the instruments to eliminate the influence of the human body occupancy on a spatial electric field. Nevertheless, in order to develop a portable safety protection instrument with an effective electric field warning function for working staff in a high-voltage environment, it is necessary to study the influence of an approaching human body on the measurement of an electric field and to correct the measurement results. A single-shaft electric field measuring instrument called the Type LP-2000, which was developed by our research team, is used as the research object in this study. First, we explain the principle of electric field measurement and describe the capacitance effect produced by the human body. Through a theoretical analysis, we show that the measured electric field value decreases as a human body approaches. Their relationship is linearly proportional. Then, the ratio is identified as a correction coefficient to correct for the influence of human body proximity. The conclusion drawn from the theoretical analysis is proved via simulation. The correction coefficient kb = 1.8010 is obtained on the basis of the linear fitting of simulated data. Finally, a physical experiment is performed. When no human is present, we compare the results from the Type LP-2000 measured with Narda EFA-300 and the simulated value to verify the accuracy of the Type LP-2000. For the case of an approaching human body, the correction coefficient kb* = 1.9094 is obtained by comparing the data measured with the Type LP-2000 to the simulated value. The correction coefficient obtained from the experiment (i.e., kb*) is highly consistent with that obtained from the simulation (i.e., kb). Two experimental programs are set; under these programs, the excitation voltages and distance measuring points are regulated to produce different electric field

  2. Communication technology in process automation - more than a field bus; Kommunikationstechnik in der Prozessautomatisierung - mehr als Feldbus

    Energy Technology Data Exchange (ETDEWEB)

    Schwibach, Martin [BASF SE, Ludwigshafen (Germany)

    2009-07-01

    In recent years, communication technology has come to play an increasingly important role in process automation. For many decades, standardized 4-20 mA electrical signals, which had replaced earlier pneumatic systems, remained the communication basis for nearly all automation technology applications. It was only in the 1990s, along with the sudden, exponential growth of IT, that a wind of change began sweeping through automation technology as well. This has had a profound and lasting impact on system architectures. Terms like HART, OPC and field bus are now familiar automation vocabulary. Networked automation systems have become the norm, and crosssystem communication, horizontal and vertical information integration, and remote access are routine. Reliability and availability. Sustainability and investment protection. These are the basic requirements that every new solution has to fulfill before it can be successfully employed in process plants. What does this mean in concrete terms? The use of modern communication technologies must not bypass the requirements made on previous and existing data transmission technologies. All current requirement profiles for conventional transmission systems must also be satisfied by network-based technologies, from field bus through to wireless. This is particularly important with regard to functional safety, availability, explosion protection, and EMC. More advanced requirements, such as interoperability, IT security or diagnostics must also be met. Over the years, NAMUR has published a series of papers on these topics which can serve as guidelines. The potentials for using modern communication technologies clearly lie in those areas where conventional solutions have proved uneconomic or unsuitable. Rather than simply achieving fault-free system operation, the overriding goal when implementing and further developing communication structures in automation technology should therefore always be to create new, value

  3. Consequences of the center-of-mass correction in nuclear mean-field models

    International Nuclear Information System (INIS)

    Bender, M.; Rutz, K.; Reinhard, P.G.; Maruhn, J.A.

    2000-01-01

    We study the influence of the scheme for the correction for spurious center-of-mass motion on the fit of effective interactions for self-consistent nuclear mean-field calculations. We find that interactions with a very simple center-of-mass correction have significantly larger surface coefficients than interactions for which the center-of-mass correction was calculated for the actual many-body state during the fit. The reason is that the effective interaction has to counteract the incorrect nucleon-number trends of all simplified center-of-mass correction schemes, which builds a spurious mass-number trend into the effective interaction itself. The effect becomes clearly visible when looking at the deformation energy of largely deformed systems, e.g. superdeformed states or fission barriers of heavy nuclei. (orig.)

  4. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed, focusing on their general structure and on the requirements specific to the calculation of radiative corrections. A detailed description of the system and its performance is presented, taking GRACE as a concrete example. (author)

  5. Advanced health monitor for automated driving functions

    OpenAIRE

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system’s health.

  6. Correcting GRACE gravity fields for ocean tide effects

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar

    2002-01-01

    [1] The GRACE mission will be launched in early 2002 and will map the Earth's gravity field and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long period aliases obscure more...... tide model if altimetry corrected for inverted barometer effects was used in its derivation. To study the temporal characteristics of the ocean tidal constituents when sampled by GRACE, approximate alias frequencies were derived assuming a sampling of half a sidereal day. Those results show...

  7. Automated MRI segmentation for individualized modeling of current flow in the human head.

    Science.gov (United States)

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible

  8. Automated one-loop calculations with GOSAM

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata

    2011-11-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  9. Automated one-loop calculations with GOSAM

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]; Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany)]; Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany)]; Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology]; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica]; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center]; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]

    2011-11-15

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  10. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    Science.gov (United States)

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, which is also commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field in a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstruct the energy function to be convex and minimize it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.
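
    A toy illustration of the multiplicative image model I(x) = b(x) J(x) + n(x) that the method above builds on. The homomorphic low-pass estimate used here is a deliberately simple stand-in, not the authors' convex coupled segmentation and bias-correction model; all image sizes and smoothing widths are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        true_image = rng.choice([0.2, 0.6, 1.0], size=(128, 128))        # piecewise "tissue" intensities
        smooth = gaussian_filter(rng.normal(size=(128, 128)), sigma=32)  # slowly varying random surface
        bias = np.exp(0.5 * smooth / np.abs(smooth).max())               # smooth multiplicative field
        observed = bias * true_image + 0.01 * rng.normal(size=(128, 128))  # I = b*J + noise

        # Homomorphic stand-in: the smooth component of log(I) approximates log(b)
        # up to a global offset, so dividing it out flattens the inhomogeneity.
        log_obs = np.log(np.clip(observed, 1e-6, None))
        bias_estimate = np.exp(gaussian_filter(log_obs, sigma=24))
        corrected = observed / bias_estimate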

  11. A New Tool for Automated Data Collection and Complete On-site Flux Data Processing for Eddy Covariance Measurements

    Science.gov (United States)

    Begashaw, I. G.; Kathilankal, J. C.; Li, J.; Beaty, K.; Ediger, K.; Forgione, A.; Fratini, G.; Johnson, D.; Velgersdyk, M.; Hupp, J. R.; Xu, L.; Burba, G. G.

    2014-12-01

    The eddy covariance method is widely used for direct measurements of the turbulent exchange of gases and energy between the surface and atmosphere. In the past, raw data were collected first in the field and then processed back in the laboratory to achieve fully corrected, publication-ready flux results. This post-processing consumed a significant amount of time and resources, and precluded researchers from accessing near real-time final flux results. A new automated measurement system with novel hardware and software designs was developed, tested, and deployed starting late 2013. The major advancements with this automated flux system include: 1) logging of high-frequency three-dimensional wind speeds and multiple gas densities (CO2, H2O and CH4), low-frequency meteorological data, and site metadata simultaneously through a specially designed file format; 2) fully corrected, real-time on-site flux computations using conventional as well as user-specified methods, by implementing EddyPro Software on a small low-power microprocessor; 3) precision clock control and coordinate information for data synchronization and inter-site data comparison by incorporating a GPS and the Precision Time Protocol. Along with these innovations, a data management server application was also developed to chart fully corrected real-time fluxes to assist remote system monitoring, to send e-mail alerts, and to automate data QA/QC, transfer and archiving at individual stations or on a network level. The combination of all of these functions was designed to help save a substantial amount of time and cost associated with managing a research site by eliminating post-field data processing, reducing user errors and facilitating real-time access to fully corrected flux results. The design, functionality, and test results from this new eddy covariance measurement tool will be presented.

  12. Building Automation Systems.

    Science.gov (United States)

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  13. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    Retrieving the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) with resonant behavior from scattering parameters inevitably involves a complex logarithmic function. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions, so special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the concerned EMMs, and hence the continuity of the effective optical properties of lossy metamaterials is ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
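
    A minimal sketch of the branch-continuity problem described above: taking the complex logarithm of a frequency-swept quantity and enforcing phase continuity across the sweep. The data here are synthetic, and the simple np.unwrap pass stands in for, but is not, the APC algorithm of the paper, which additionally handles abrupt jumps near resonance.

        import numpy as np

        f = np.linspace(8e9, 12e9, 801)                       # Hz, illustrative frequency sweep
        T = (0.8 - 0.3j) * np.exp(1j * 2 * np.pi * f / 1e9)   # synthetic complex transmission-like term

        naive_log = np.log(T)                            # phase folded into (-pi, pi], discontinuous vs. f
        phase = np.unwrap(np.angle(T))                   # enforce phase continuity along the sweep
        corrected_log = np.log(np.abs(T)) + 1j * phase   # branch-corrected complex logarithm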

  14. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  15. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    Science.gov (United States)

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. We combine unsupervised image

  16. Automated fetal brain segmentation from 2D MRI slices for motion correction.

    Science.gov (United States)

    Keraudren, K; Kuklisova-Murgasova, M; Kyriakopoulou, V; Malamateniou, C; Rutherford, M A; Kainz, B; Hajnal, J V; Rueckert, D

    2014-11-01

    Motion correction is a key element for imaging the fetal brain in-utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is frequently performed slice-by-slice using single shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction methods (SVR). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion corrected volume of a relevant quality for clinical diagnosis, thus removing the need for manually delineating the contours of the brain before motion correction. Our method automatically generated as a side-product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing. Copyright

  17. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  18. Radiative corrections to the quark masses in the ferromagnetic Ising and Potts field theories

    Directory of Open Access Journals (Sweden)

    Sergei B. Rutkevich

    2017-10-01

    We consider the Ising Field Theory (IFT) and the 3-state Potts Field Theory (PFT), which describe the scaling limits of the two-dimensional lattice q-state Potts model with q=2 and q=3, respectively. At zero magnetic field h=0, both field theories are integrable away from the critical point, have q degenerate vacua in the ferromagnetic phase, and q(q−1) particles of the same mass – the kinks interpolating between two different vacua. Application of a weak magnetic field induces confinement of kinks into bound states – the “mesons” (for q=2,3), consisting predominantly of two kinks, and “baryons” (for q=3), which are essentially the three-kink excitations. The kinks in the confinement regime are also called “the quarks”. We review and refine the Form Factor Perturbation Theory (FFPT), adapting it to the analysis of the confinement problem in the limit of small h, and apply it to calculate the corrections to the kink (quark) masses induced by the multi-kink fluctuations caused by the weak magnetic field. It is shown that the subleading third-order ∼h^3 correction to the kink mass vanishes in the IFT. The leading second-order ∼h^2 correction to the kink mass in the 3-state PFT is estimated by truncating the infinite form factor expansion at the first term, representing the contribution of the two-kink fluctuations to the kink self-energy.

  19. Radiative corrections to the quark masses in the ferromagnetic Ising and Potts field theories

    Science.gov (United States)

    Rutkevich, Sergei B.

    2017-10-01

    We consider the Ising Field Theory (IFT), and the 3-state Potts Field Theory (PFT), which describe the scaling limits of the two-dimensional lattice q-state Potts model with q = 2, and q = 3, respectively. At zero magnetic field h = 0, both field theories are integrable away from the critical point, have q degenerate vacua in the ferromagnetic phase, and q (q - 1) particles of the same mass - the kinks interpolating between two different vacua. Application of a weak magnetic field induces confinement of kinks into bound states - the "mesons" (for q = 2 , 3) consisting predominantly of two kinks, and "baryons" (for q = 3), which are essentially the three-kink excitations. The kinks in the confinement regime are also called "the quarks". We review and refine the Form Factor Perturbation Theory (FFPT), adapting it to the analysis of the confinement problem in the limit of small h, and apply it to calculate the corrections to the kink (quark) masses induced by the multi-kink fluctuations caused by the weak magnetic field. It is shown that the subleading third-order ∼h^3 correction to the kink mass vanishes in the IFT. The leading second-order ∼h^2 correction to the kink mass in the 3-state PFT is estimated by truncating the infinite form factor expansion at the first term, representing the contribution of the two-kink fluctuations to the kink self-energy.

  20. Possibilities of the common research-development action in the field of automated logistical engines

    Directory of Open Access Journals (Sweden)

    Pap Lajos

    2003-12-01

    The paper briefly presents the R&D cooperation of the Department of Materials Handling and Logistics and the Departments of Automation. The main fields of cooperation are introduced. Different kinds of linear motor (hereafter LM) drives are being developed and tested for warehouse and rolling conveyor systems. Modern control strategies using AI methods are being investigated and tested for automated guided vehicles. Wireless communication methods are being researched and developed for mobile material handling devices. Application possibilities of voice recognition and image processing are being tested for the control of material handling robots and devices. Applications of process visualization programs are being developed and investigated. A multi-level industrial communication system is being developed for the laboratories of the cooperating departments.

  1. Automation bias: empirical results assessing influencing factors.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal - trust and confidence; non-attitudinal - decision support experience and clinical experience; and environmental - task difficulty. The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers - advice was incorrect in 6 scenarios. They were asked to prescribe for each case, and were then shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations - correct to incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Fully automated laboratory and field-portable goniometer used for performing accurate and precise multiangular reflectance measurements

    Science.gov (United States)

    Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily

    2017-10-01

    Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.

  3. Single-shot imaging with higher-dimensional encoding using magnetic field monitoring and concomitant field correction.

    Science.gov (United States)

    Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2015-03-01

    PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.
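
    For context, the standard lowest-order concomitant (Maxwell) field expression for purely linear gradients, which is the usual starting point for the concomitant-term analysis discussed above, can be written as below; note that the cited work extends the treatment to nonlinear spatial encoding fields, for which additional terms arise.

        \[
        B_c(x,y,z) \approx \frac{1}{2B_0}\left[\frac{G_z^2}{4}\,(x^2+y^2) + \left(G_x^2+G_y^2\right)z^2 - G_x G_z\,xz - G_y G_z\,yz\right]
        \]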

  4. Higher magnetic field multipoles generated by superconductor magnetization within a set of nested superconducting correction coils

    International Nuclear Information System (INIS)

    Green, M.A.

    1990-01-01

    Correction elements in colliding beam accelerators such as the Superconducting Super Collider (SSC) can be the source of undesirable higher magnetic field multipoles due to magnetization of the superconductor within the corrector. Quadrupole and sextupole correctors located within the main dipole will produce sextupole and decapole due to magnetization of the superconductor within the correction coils. Lumped nested correction coils can produce a large number of skew and normal magnetization multipoles which may have an adverse effect on a stored beam at injection into a high energy colliding beam machine such as the SSC. Multipole magnetization field components have been measured within the HERA storage ring dipole magnets. Calculations of these components using the SCMAG04 code, which agree substantially with the measured multipoles, are presented in the report. As a result, in the proposed continuous correction winding for the SSC, dipoles have been replaced with lumped correction elements every six dipole magnets (about 120 meters apart). Nested lumped correction elements will also produce undesirable higher magnetization multipoles. This report shows a method by which the higher multipole generated by nested correction elements can be identified. (author)

  5. Automated one-loop calculations with GoSam

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata; Mastrolia, Pierpaolo; Ossola, Giovanni; Tramontano, Francesco

    2012-01-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  6. Automated One-Loop Calculations with GoSam

    CERN Document Server

    Cullen, Gavin; Heinrich, Gudrun; Luisoni, Gionata; Mastrolia, Pierpaolo; Ossola, Giovanni; Reiter, Thomas; Tramontano, Francesco

    2012-01-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop.

  7. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images affected by intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be handled well. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
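
    A minimal sketch of the time-domain idea behind water reference deconvolution: the measured water FID carries the instrumental lineshape distortion, so multiplying the metabolite FID by an idealized target lineshape and dividing by the water reference transfers a clean lineshape to the metabolite signal. All signals, time constants and frequencies below are synthetic placeholders, not parameters from the cited work.

        import numpy as np

        n = 1024
        t = np.arange(n) * 1e-3                                         # s, illustrative 1 ms dwell time

        water_fid = np.exp(-t / 0.08 + 1j * 2 * np.pi * 5 * t)          # distorted water reference (synthetic)
        metab_fid = 0.05 * np.exp(-t / 0.08 + 1j * 2 * np.pi * 60 * t)  # metabolite signal (synthetic)

        ideal_ref = np.exp(-(t / 0.05) ** 2)                            # target Gaussian lineshape
        eps = 1e-8                                                      # guard against division by ~0
        corrected_fid = metab_fid * ideal_ref / (water_fid + eps)       # lineshape-corrected FID
        spectrum = np.fft.fftshift(np.fft.fft(corrected_fid))           # corrected metabolite spectrum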

  9. Local-field correction in the lattice dynamics of b.c.c. transition metals

    International Nuclear Information System (INIS)

    Onwuagba, B.N.

    1984-01-01

    It is shown that the off-diagonal components of the inverse dielectric matrix which determine the local-field correction associated with s-d interactions, make contributions to the dynamical matrix for phonon dispersion in the body-centred cubic transition metals V, Nb and Ta which tend to cancel the Born-Mayer contribution, just as the diagonal components of the inverse dielectric matrix tend to cancel or screen the long-range (Coulombic) contribution. Numerical calculations show that the cancellation of the Born-Mayer contribution to the dynamical matrix by the local-field correction is such that the effective short-range interatomic potential turns out to be attractive rather than repulsive in these metals and accounts for some peculiar shapes of the major soft modes observed in these metals

  10. Mobile home automation-merging mobile value added services and home automation technologies

    OpenAIRE

    Rosendahl, Andreas; Hampe, Felix J.; Botterweck, Goetz

    2007-01-01

    In this paper we study mobile home automation, a field that emerges from an integration of mobile application platforms and home automation technologies. In a conceptual introduction we first illustrate the need for such applications by introducing a two-dimensional conceptual model of mobility. Subsequently we suggest an architecture and discuss different options of how a user might access a mobile home automation service and the controlled devices. As another contrib...

  11. Importance of the CMAP Correction to the CHARMM22 Protein Force Field: Dynamics of Hen Lysozyme

    OpenAIRE

    Buck, Matthias; Bouguet-Bonnet, Sabine; Pastor, Richard W.; MacKerell, Alexander D.

    2005-01-01

    The recently developed CMAP correction to the CHARMM22 force field (C22) is evaluated from 25 ns molecular dynamics simulations on hen lysozyme. Substantial deviations from experimental backbone root mean-square fluctuations and N-H NMR order parameters obtained in the C22 trajectories (especially in the loops) are eliminated by the CMAP correction. Thus, the C22/CMAP force field yields improved dynamical and structural properties of proteins in molecular dynamics simulations.

  12. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  13. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    Science.gov (United States)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the k^{f_msr,f_ref}_{Q_msr,Q_0} and k^{f_clin,f_msr}_{Q_clin,Q_msr} correction factors for a series of ionization chambers, a synthetic microDiamond and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations of the k^{f_clin,f_msr}_{Q_clin,Q_msr} correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. k^{f_msr,f_ref}_{Q_msr,Q_0} values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (the latter breaking down to 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding to the k^{f_msr,f_ref}_{Q_msr,Q_0} corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity k^{f_clin,f_msr}_{Q_clin,Q_msr} correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity k^{f_clin,f_msr}_{Q_clin,Q_msr} results for the ionization chambers suggested field-size-dependent dose underestimations (significant for the 4 mm field), with magnitude depending on the combination of
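
    In the small-field formalism the notation above refers to, the detector correction factor is defined as the field output factor divided by the detector's uncorrected reading ratio between the clinical field and the machine-specific reference (msr) field:

        \[
        k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}
        = \frac{\Omega_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}}
               {M_{Q_\mathrm{clin}}^{f_\mathrm{clin}} / M_{Q_\mathrm{msr}}^{f_\mathrm{msr}}}
        \]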

  14. Evolution of a Benthic Imaging System From a Towed Camera to an Automated Habitat Characterization System

    Science.gov (United States)

    2008-09-01

    ...element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category...

  15. Implementation and Application of PSF-Based EPI Distortion Correction to High Field Animal Imaging

    Directory of Open Access Journals (Sweden)

    Dominik Paul

    2009-01-01

    The purpose of this work is to demonstrate the functionality and performance of a PSF-based geometric distortion correction for high-field functional animal EPI. The EPI method was extended to measure the PSF, and a postprocessing chain was implemented in Matlab for offline distortion correction. The correction procedure was applied to phantom and in vivo imaging of mice and rats at 9.4T using different SE-EPI and DWI-EPI protocols. Results show a significant improvement in image quality for single- and multishot EPI. Using a reduced FOV in the PSF encoding direction clearly reduced the acquisition time for the PSF data by an acceleration factor of 2 or 4, without affecting the correction quality.

  16. Illumination correction in psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    An approach to automatically correct illumination problems in dermatological images is presented. The illumination function is estimated after combining the thematic map indicating skin-produced by an automated classification scheme- with the dermatological image data. The user is only required t...

  17. Diffusion in the kicked quantum rotator by random corrections to a linear and sine field

    International Nuclear Information System (INIS)

    Hilke, M.; Flores, J.C.

    1992-01-01

    We discuss the diffusion in momentum space of the kicked quantum rotator by introducing random corrections to a linear and sine external field. For the linear field we obtain a linear diffusion behavior identical to the case with zero average in the external field. But for the sine field, accelerator modes with quadratic diffusion are found for particular values of the kicking period. (orig.)

  18. A new controller for the JET error field correction coils

    International Nuclear Information System (INIS)

    Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.

    2005-01-01

    This paper describes the hardware and the software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller increasing the flexibility and the optimization of the system. The use of conventional hardware has required a particular effort in designing the software part in order to meet the specifications. The peculiarities of the new controller will be highlighted, such as its very useful trigger logic interface, which allows in principle exploring various error field experiment scenarios

  19. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    Science.gov (United States)

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 imhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
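
    A small sketch of the central approximation idea described above: representing the off-resonance phase term exp(i*omega*t) at a fixed readout time by a low-order Chebyshev expansion in omega over the field-map range, so the conjugate-phase correction can be assembled from a few precomputed terms. The numbers, the expansion degree and the fitting approach below are illustrative assumptions, not values from the paper.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        t = 5e-3                                       # s, one fixed readout time point
        w_max = 2 * np.pi * 150                        # rad/s, assumed +/-150 Hz field-map range
        w = np.linspace(-w_max, w_max, 512)
        u = w / w_max                                  # rescale to [-1, 1] for a stable fit

        target = np.exp(1j * w * t)                    # off-resonance phase term to approximate
        deg = 8
        coef_re = C.chebfit(u, target.real, deg)       # fit real and imaginary parts separately
        coef_im = C.chebfit(u, target.imag, deg)
        approx = C.chebval(u, coef_re) + 1j * C.chebval(u, coef_im)
        max_err = np.abs(approx - target).max()        # small for moderate w*t products
        print(f"max approximation error: {max_err:.2e}")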

  20. Automated system of monitoring and positioning of functional units of mining technological machines for coal-mining enterprises

    Directory of Open Access Journals (Sweden)

    Meshcheryakov Yaroslav

    2018-01-01

    This article is devoted to the development of an automated monitoring and positioning system for the functional nodes of mining technological machines. It describes the structure, element base and algorithms for identifying the operating states of a walking excavator; the various types of errors in the functioning of microelectromechanical gyroscopes and accelerometers; and methods for their correction based on the Madgwick fusion filter. The results of industrial tests of an automated monitoring and positioning system for functional units at one of the opencast coal mines of Kuzbass are presented. This work is addressed to specialists working in the development of embedded systems and control systems, radio electronics, mechatronics, and robotics.
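
    A toy complementary filter illustrating the kind of gyroscope drift correction that the system above performs with a Madgwick fusion filter; this single-angle sketch is not Madgwick's quaternion algorithm, and the sample rate, noise levels and sensor streams are synthetic assumptions.

        import numpy as np

        dt = 0.01                                                         # s, assumed 100 Hz sample rate
        n = 1000
        rng = np.random.default_rng(1)
        true_angle = 0.3 * np.sin(2 * np.pi * 0.2 * np.arange(n) * dt)    # rad, reference tilt
        gyro_rate = np.gradient(true_angle, dt) + 0.02                    # rad/s, with a constant bias
        accel_angle = true_angle + 0.05 * rng.normal(size=n)              # noisy tilt from accelerometer

        alpha = 0.98                          # weight on the integrated gyro estimate per step
        est = np.zeros(n)
        for k in range(1, n):
            # Blend short-term gyro integration with the drift-free but noisy accelerometer angle.
            est[k] = alpha * (est[k - 1] + gyro_rate[k] * dt) + (1 - alpha) * accel_angle[k]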

  1. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    Science.gov (United States)

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M^{f_clin}_{Q_clin}/M^{f_msr}_{Q_msr} with Monte Carlo (MC) based field output factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes of 25-5 mm mostly deviated from unity by less than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of the LGK Perfexion, respectively. Finally, the PTW microDiamond M^{f_clin}_{Q_clin}/M^{f_msr}_{Q_msr} for the linear accelerator differed from the MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm^2 field size, with a 1.3% deviation). Given the small resulting correction factors, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
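
    Hypothetical numbers illustrating the arithmetic of the Alfonso-style comparison described above: the detector correction factor is the reference field output factor divided by the detector's measured reading ratio. The values are made up for illustration and are not data from the study.

        # Reference output factor for the small field (e.g. MC or corrected-diode based).
        omega_ref = 0.805
        # Detector reading ratio, clinical (small) field over machine-specific reference field.
        m_clin_over_m_msr = 0.821

        k_detector = omega_ref / m_clin_over_m_msr
        print(f"detector correction factor k = {k_detector:.3f}")  # ~0.981 for these toy numbers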

  2. Online corrections - Evidence based practice utilizing electronic portal imaging to improve the accuracy of field placement for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Middleton, M.; Medwell, S.; Rolfo, A.; Joon, M.L.

    2003-01-01

    The requirement of accurate field placement in the treatment of locally advanced prostate cancer is of great significance given the onset of dose escalation and increased Planning Target Volume (PTV) conformity. With these factors in mind, it becomes essential to ensure accurate field placement for the duration of a course of Radiotherapy. This study examines the role of Online Corrections in increasing the accuracy of field placement, utilizing Varian Vision EPI equipment. The study also examines the hypothetical scenario of the effect on three-dimensional computer dosimetry if Online Corrections were not performed, incorporating TCP and NTCP data. Field placement data were collected on patients receiving radical radiotherapy to the prostate utilizing the Varian Vision (TM) EPI software. Both intra- and inter-field data were collected, with Online Corrections being carried out within the confines of the BAROC PROSTATE EPI POLICY. Analysis was performed on the data to illustrate the value of Online Corrections in the pursuit of accurate field placement. This evidence was further supported by computer dosimetry presenting the worst possible impact upon a patient's total course of treatment if Online Corrections were not performed. The use of Online Corrections can prove to be of enormous benefit to both patient and practitioner. For centres with the available technology, it places the responsibility of field placement upon the Radiation Therapist. This responsibility in turn impacts on the education, training and empowerment of the Radiation Therapy group. These are issues of the utmost importance to centres considering the use of Online Corrections.

  3. Transcranial Magnetic Stimulation: An Automated Procedure to Obtain Coil-specific Models for Field Calculations

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard; Ewald, Lars; Siebner, Hartwig R.

    2015-01-01

    Background: Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. Objective: To develop an approach to reconstruct the magnetic vector potential based on automated measurements. Methods: We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. Results: The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well...

  4. Automation, parallelism, and robotics for proteomics.

    Science.gov (United States)

    Alterovitz, Gil; Liu, Jonathan; Chow, Jijun; Ramoni, Marco F

    2006-07-01

    The speed of the human genome project (Lander, E. S., Linton, L. M., Birren, B., Nusbaum, C. et al., Nature 2001, 409, 860-921) was made possible, in part, by developments in automation of sequencing technologies. Before these technologies, sequencing was a laborious, expensive, and personnel-intensive task. Similarly, automation and robotics are changing the field of proteomics today. Proteomics is defined as the effort to understand and characterize proteins in the categories of structure, function and interaction (Englbrecht, C. C., Facius, A., Comb. Chem. High Throughput Screen. 2005, 8, 705-715). As such, this field nicely lends itself to automation technologies since these methods often require large economies of scale in order to achieve cost and time-saving benefits. This article describes some of the technologies and methods being applied in proteomics in order to facilitate automation within the field as well as in linking proteomics-based information with other related research areas.

  5. Field demonstration of automated demand response for both winter and summer events in large buildings in the Pacific Northwest

    Energy Technology Data Exchange (ETDEWEB)

    Piette, M.A.; Kiliccote, S.; Dudley, J.H. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2013-11-15

    There are growing strains on the electric grid as cooling peaks grow and equipment ages. Increased penetration of renewables on the grid is also straining electricity supply systems, and the need for flexible demand is growing. This paper summarizes the results of a series of field tests of automated demand response systems in large buildings in the Pacific Northwest. The objective of the research was twofold. One objective was to evaluate the use of demand response automation technologies. A second objective was to evaluate control strategies that could change the electric load shape in both winter and summer conditions. Winter conditions focused on cold winter mornings, a time when the electric grid is often stressed. The summer tests evaluated DR strategies in the afternoon. We found that we could automate both winter and summer control strategies with the open automated demand response communication standard. The buildings were able to provide significant demand response in both winter and summer events.

  6. Development, field testing and implementation of automated hydraulically controlled, variable volume loading systems for reciprocating compressors

    Energy Technology Data Exchange (ETDEWEB)

    Hickman, Dwayne A. [ACI Services, Inc., Cambridge, OH (United States); Slupsky, John [Kvaerner Process Systems, Calgary, Alberta (Canada); Chrisman, Bruce M.; Hurley, Tom J. [Cooper Energy Services, Oklahoma City, OK (United States). Ajax Division

    2003-07-01

    Automated, variable volume unloaders provide the ability to smoothly load/unload reciprocating compressors to maintain ideal operations in ever-changing environments. Potential advantages provided by this load control system include: maximizing unit capacity, optimizing power economy, maintaining low exhaust emissions, and maintaining process suction and discharge pressures. Obstacles foreseen include: reliability, stability, serviceability and automation integration. Results desired include: increased productivity for the compressor and its operators, increased up time, and more stable process control. This presentation covers: system design features with descriptions of how different types of the devices were developed, initial test data, and how they can be effectively operated; three actual-case studies detailing the reasons why automated, hydraulically controlled, variable volume, head-end unloaders were chosen over other types of unloading devices; sophisticated software used in determining the device sizing and predicted performance; mechanical and field considerations; installation, serviceability and operating considerations; device control issues, including PC and PLC considerations; monitoring of actual performance and comparison of such with predicted performance; analysis of mechanical reliability and stability; and preliminary costs versus return on investment analysis. (author)

  7. Complex Automated Negotiations Theories, Models, and Software Competitions

    CERN Document Server

    Zhang, Minjie; Robu, Valentin; Matsuo, Tokuro

    2013-01-01

    Complex Automated Negotiations are a widely studied, emerging area in the field of Autonomous Agents and Multi-Agent Systems. In general, automated negotiations can be complex, since there are many factors that characterize such negotiations. For this book, we solicited papers on all aspects of such complex automated negotiations, which are studied in the field of Autonomous Agents and Multi-Agent Systems. This book includes two parts: Part I: Agent-based Complex Automated Negotiations and Part II: Automated Negotiation Agents Competition. Each chapter in Part I is an extended version of an ACAN 2011 paper after peer review by three PC members. Part II covers ANAC 2011 (The Second Automated Negotiating Agents Competition), in which automated agents that have different negotiation strategies and are implemented by different developers negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of...

  8. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes and archiving of results is another major advantage of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for an ever-increasing workload. This article discusses the various issues involved in the process.

  9. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  10. RCrane: semi-automated RNA model building.

    Science.gov (United States)

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  11. Physiologic noise regression, motion regression, and TOAST dynamic field correction in complex-valued fMRI time series.

    Science.gov (United States)

    Hahn, Andrew D; Rowe, Daniel B

    2012-02-01

    As more evidence is presented suggesting that the phase, as well as the magnitude, of functional MRI (fMRI) time series may contain important information and that there are theoretical drawbacks to modeling functional response in the magnitude alone, removing noise in the phase is becoming more important. Previous studies have shown that retrospective correction of noise from physiologic sources can remove significant phase variance and that dynamic main magnetic field correction and regression of estimated motion parameters also remove significant phase fluctuations. In this work, we investigate the performance of physiologic noise regression in a framework along with correction for dynamic main field fluctuations and motion regression. Our findings suggest that including physiologic regressors provides some benefit in terms of reduction in phase noise power, but it is small compared to the benefit of dynamic field corrections and use of estimated motion parameters as nuisance regressors. Additionally, we show that the use of all three techniques reduces phase variance substantially, removes undesirable spatial phase correlations and improves detection of the functional response in magnitude and phase. Copyright © 2011 Elsevier Inc. All rights reserved.
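
    A minimal sketch of the general nuisance-regression step described above: physiologic, motion, and dynamic-field regressors are removed from a voxel's phase time series by ordinary least squares. Construction of the regressors themselves (e.g., RETROICOR cardiac/respiratory phases, estimated motion parameters) is assumed to be done elsewhere; the synthetic data are illustrative only.

```python
import numpy as np

def regress_nuisance(phase_ts: np.ndarray, regressors: np.ndarray) -> np.ndarray:
    """phase_ts: (T,) unwrapped phase time series; regressors: (T, K)."""
    X = np.column_stack([np.ones_like(phase_ts), regressors])   # add intercept
    beta, *_ = np.linalg.lstsq(X, phase_ts, rcond=None)
    return phase_ts - X[:, 1:] @ beta[1:]        # keep the mean, remove nuisance

# Toy example with synthetic motion and cardiac regressors
T = 200
rng = np.random.default_rng(0)
motion = rng.standard_normal((T, 6))             # six motion parameters
cardiac = np.sin(np.linspace(0, 40 * np.pi, T))[:, None]
phase = 0.05 * motion[:, 0] + 0.02 * cardiac[:, 0] + 0.01 * rng.standard_normal(T)
cleaned = regress_nuisance(phase, np.hstack([motion, cardiac]))
print(np.std(phase), np.std(cleaned))            # phase variance should drop
```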

  12. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    Science.gov (United States)

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to those from the N3 and FCM alone corrected images, and from images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3
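
    The two-stage idea can be sketched roughly as follows. This is not the authors' code: SimpleITK's N4 filter is used here as a stand-in for N3, the residual bias is estimated from a simplified fuzzy-C-means tissue model, and the B-spline refit is replaced by a plain Gaussian smoothing; cluster count and smoothing width are arbitrary placeholders.

```python
import numpy as np
import SimpleITK as sitk
import skfuzzy as fuzz
from scipy.ndimage import gaussian_filter

def n3_like_correction(path: str) -> np.ndarray:
    """First stage: nonparametric nonuniformity correction (N4 as N3 stand-in)."""
    img = sitk.Cast(sitk.ReadImage(path), sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(img, 0, 1, 200)            # rough breast mask
    corrected = sitk.N4BiasFieldCorrection(img, mask)
    return sitk.GetArrayFromImage(corrected)

def fcm_residual_correction(vol: np.ndarray, n_classes: int = 3,
                            sigma_vox: float = 10.0) -> np.ndarray:
    """Second stage: FCM tissue model -> smooth residual bias -> divide out."""
    flat = vol.reshape(1, -1)                            # (features, voxels)
    centers, u, *_ = fuzz.cmeans(flat, c=n_classes, m=2.0,
                                 error=1e-4, maxiter=100, seed=0)
    ideal = (centers[:, 0][:, None] * u).sum(axis=0).reshape(vol.shape)
    bias = gaussian_filter(vol / np.maximum(ideal, 1e-6), sigma=sigma_vox)
    return vol / np.maximum(bias, 1e-6)

# corrected = fcm_residual_correction(n3_like_correction("breast_t1.nii.gz"))
```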

  13. Decision Making In A High-Tech World: Automation Bias and Countermeasures

    Science.gov (United States)

    Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Automated decision aids and decision support systems have become essential tools in many high-tech environments. In aviation, for example, flight management system computers not only fly the aircraft, but also calculate fuel efficient paths, detect and diagnose system malfunctions and abnormalities, and recommend or carry out decisions. Air Traffic Controllers will soon be utilizing decision support tools to help them predict and detect potential conflicts and to generate clearances. Other fields as disparate as nuclear power plants and medical diagnostics are similarly becoming more and more automated. Ideally, the combination of human decision maker and automated decision aid should result in a high-performing team, maximizing the advantages of additional cognitive and observational power in the decision-making process. In reality, however, the presence of these aids often short-circuits the way that even very experienced decision makers have traditionally handled tasks and made decisions, and introduces opportunities for new decision heuristics and biases. Results of recent research investigating the use of automated aids have indicated the presence of automation bias, that is, errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing. Automation commission errors, i.e., errors made when decision makers inappropriately follow an automated directive, or automation omission errors, i.e., errors made when humans fail to take action or notice a problem because an automated aid fails to inform them, can result from this tendency. Evidence of the tendency to make automation-related omission and commission errors has been found in pilot self reports, in studies using pilots in flight simulations, and in non-flight decision making contexts with student samples. Considerable research has found that increasing social accountability can successfully ameliorate a broad array of cognitive biases and

  14. Judson_Mansouri_Automated_Chemical_Curation_QSAREnvRes_Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly...

  15. Automated disposal of produced water from a coalbed methane well field, a case history

    International Nuclear Information System (INIS)

    Luckianow, B.J.; Findley, M.L.; Paschal, W.T.

    1994-01-01

    This paper provides an overview of the automated disposal system for produced water designed and operated by Taurus Exploration, Inc. This presentation draws from Taurus' case study in the planning, design, construction, and operation of production water disposal facilities for the Mt. Olive well field, located in the Black Warrior Basin of Alabama. The common method for disposing of water produced from coalbed methane wells in the Warrior Basin is to discharge into a receiving stream. The limiting factor in the discharge method is the capability of the receiving stream to assimilate the chloride component of the water discharged. During the winter and spring, the major tributaries of the Black Warrior River are capable of assimilating far more production water than operations can generate. During the summer and fall months, however, these same tributaries can approach near zero flow, resulting in insufficient flow for dilution. During such periods pumping shut-down within the well field can be avoided by routing production waters into a storage facility. This paper discusses the automated production water disposal system on Big Sandy Creek designed and operated by Taurus. This system allows for continuous discharge to the receiving stream, thus taking full advantage of Big Sandy Creek's assimilative capacity, while allowing a provision for excess produced water storage and future stream discharge

  16. RCrane: semi-automated RNA model building

    International Nuclear Information System (INIS)

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RCrane is a new tool for the partially automated building of RNA crystallographic models into electron-density maps of low or intermediate resolution. This tool helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems

  17. Automated planning of breast radiotherapy using cone beam CT imaging

    International Nuclear Information System (INIS)

    Amit, Guy; Purdie, Thomas G.

    2015-01-01

    Purpose: Develop and clinically validate a methodology for using cone beam computed tomography (CBCT) imaging in an automated treatment planning framework for breast IMRT. Methods: A technique for intensity correction of CBCT images was developed and evaluated. The technique is based on histogram matching of CBCT image sets, using information from “similar” planning CT image sets from a database of paired CBCT and CT image sets (n = 38). Automated treatment plans were generated for a testing subset (n = 15) on the planning CT and the corrected CBCT. The plans generated on the corrected CBCT were compared to the CT-based plans in terms of beam parameters, dosimetric indices, and dose distributions. Results: The corrected CBCT images showed considerable similarity to their corresponding planning CTs (average mutual information 1.0±0.1, average sum of absolute differences 185 ± 38). The automated CBCT-based plans were clinically acceptable, as well as equivalent to the CT-based plans with average gantry angle difference of 0.99°±1.1°, target volume overlap index (Dice) of 0.89±0.04 although with slightly higher maximum target doses (4482±90 vs 4560±84, P < 0.05). Gamma index analysis (3%, 3 mm) showed that the CBCT-based plans had the same dose distribution as plans calculated with the same beams on the registered planning CTs (average gamma index 0.12±0.04, gamma <1 in 99.4%±0.3%). Conclusions: The proposed method demonstrates the potential for a clinically feasible and efficient online adaptive breast IMRT planning method based on CBCT imaging, integrating automation
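
    The intensity-correction step described above can be sketched with a standard histogram-matching routine; here skimage's match_histograms stands in for the paper's implementation, the "similar" planning CT is assumed to have been selected from the paired database beforehand, and the toy arrays are illustrative rather than real HU data.

```python
import numpy as np
from skimage.exposure import match_histograms

def correct_cbct(cbct: np.ndarray, similar_planning_ct: np.ndarray) -> np.ndarray:
    """Map CBCT voxel intensities onto the histogram of a similar planning CT."""
    return match_histograms(cbct, similar_planning_ct)

# Toy example: a biased, noisy "CBCT" is pulled toward the "CT" intensity range.
rng = np.random.default_rng(1)
ct = rng.normal(0, 300, (64, 64, 64))              # stand-in for HU values
cbct = 0.7 * ct + 150 + rng.normal(0, 40, ct.shape)
print(np.mean(cbct), np.mean(correct_cbct(cbct, ct)))   # mean moves toward CT
```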

  18. On the transmit field inhomogeneity correction of relaxation‐compensated amide and NOE CEST effects at 7 T

    Science.gov (United States)

    Windschuh, Johannes; Siero, Jeroen C.W.; Zaiss, Moritz; Luijten, Peter R.; Klomp, Dennis W.J.; Hoogduin, Hans

    2017-01-01

    High field MRI is beneficial for chemical exchange saturation transfer (CEST) in terms of high SNR, CNR, and chemical shift dispersion. These advantages may, however, be counter‐balanced by the increased transmit field inhomogeneity normally associated with high field MRI. The relatively high sensitivity of the CEST contrast to B1 inhomogeneity necessitates the development of correction methods, which is essential for the clinical translation of CEST. In this work, two B1 correction algorithms for the most studied CEST effects, amide‐CEST and nuclear Overhauser enhancement (NOE), were analyzed. Both methods rely on fitting the multi‐pool Bloch‐McConnell equations to the densely sampled CEST spectra. In the first method, the correction is achieved by using a linear B1 correction of the calculated amide and NOE CEST effects. The second method uses the Bloch‐McConnell fit parameters and the desired B1 amplitude to recalculate the CEST spectra, followed by the calculation of B1‐corrected amide and NOE CEST effects. Both algorithms were systematically studied in Bloch‐McConnell equations and in human data, and compared with the earlier proposed ideal interpolation‐based B1 correction method. In the low B1 regime of 0.15–0.50 μT (average power), a simple linear model was sufficient to mitigate B1 inhomogeneity effects on a par with the interpolation B1 correction, as demonstrated by a reduced correlation of the CEST contrast with B1 in both the simulations and the experiments. PMID:28111824
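
    A minimal sketch of a linear B1 correction, under the assumption stated above for the low-B1 regime that the CEST effect varies roughly linearly with the local B1 amplitude. The slope is obtained here from a simple fit to multi-B1 calibration values; the calibration numbers are placeholders, not the paper's data, and the exact formulation used in the study may differ.

```python
import numpy as np

def linear_b1_correction(cest_effect: np.ndarray, b1_actual: np.ndarray,
                         b1_nominal: float, slope: float) -> np.ndarray:
    """Rescale per-voxel CEST effects from the measured B1 to the nominal B1.

    slope: d(CEST effect)/d(B1), e.g. from a linear fit to Bloch-McConnell
    simulations or multi-B1 acquisitions.
    """
    return cest_effect + slope * (b1_nominal - b1_actual)

# Example calibration, then correct a voxel measured at 0.42 uT to 0.50 uT.
b1_levels = np.array([0.15, 0.25, 0.35, 0.50])        # average power, uT
effect_at_levels = np.array([0.010, 0.018, 0.025, 0.036])   # illustrative
slope = np.polyfit(b1_levels, effect_at_levels, 1)[0]
print(linear_b1_correction(np.array([0.030]), np.array([0.42]), 0.50, slope))
```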

  19. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  20. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field.

    Science.gov (United States)

    Peirone, Laura S; Pereyra Irujo, Gustavo A; Bolton, Alejandro; Erreguerena, Ignacio; Aguirrezábal, Luis A N

    2018-01-01

    Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.
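
    For orientation, the drought susceptibility index referred to above is commonly defined in the Fischer-Maurer form shown below; the record does not quote the exact expression, so this is illustrative. Y_d and Y_p denote a genotype's performance under drought and under well-watered conditions, and the barred quantities are the corresponding trial means.

```latex
\[
  \mathrm{DSI} = \frac{1 - Y_d / Y_p}{D},
  \qquad
  D = 1 - \frac{\bar{Y}_d}{\bar{Y}_p}
\]
```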

  1. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field

    Directory of Open Access Journals (Sweden)

    Laura S. Peirone

    2018-05-01

    Full Text Available Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  2. Home automation as an example of construction innovation

    NARCIS (Netherlands)

    Vlies, R.D. van der; Bronswijk, J.E.M.H. van

    2009-01-01

    Home automation can contribute to the health of (older) adults. Home automation covers a broad field of ‘intelligent’ electronic or mechanical devices in the home (domestic) environment. Realizing home automation is technically possible, though still not common. In this paper main influential

  3. Automated evaluation of ultrasonic indications

    International Nuclear Information System (INIS)

    Hansch, M.K.T.; Stegemann, D.

    1994-01-01

    Future requirements of reliability and reproducibility in quality assurance demand computer evaluation of defect indications. The ultrasonic method with its large field of applications and a high potential for automation provides all preconditions for fully automated inspection. The survey proposes several desirable hardware improvements, data acquisition requirements and software configurations. (orig.) [de

  4. AUTOMATION OF CONVEYOR BELT TRANSPORT

    Directory of Open Access Journals (Sweden)

    Nenad Marinović

    1990-12-01

    Full Text Available Belt conveyor transport, although one of the most economical mining transport systems, introduces many problems in maintaining continuity of operation. Every stop causes economic losses. Optimal operation requires correct tension of the belt, correct belt position and velocity, and faultless rolls, which together are the input conditions for automation. Detection and position selection of faults are essential for safety, to eliminate fire hazards, and for efficient maintenance. Detection and location of idler roll faults remain an open problem that has not yet been solved successfully (the paper is published in Croatian).

  5. Correcting PSP electron measurements for the effects of spacecraft electrostatic and magnetic fields

    Science.gov (United States)

    McGinnis, D.; Halekas, J. S.; Larson, D. E.; Whittlesey, P. L.; Kasper, J. C.

    2017-12-01

    The near-Sun environment which the Parker Solar Probe will investigate presents a unique challenge for the measurement of thermal and suprathermal electrons. Over one orbital period, the ionizing photon flux and charged particle densities vary to such an extent that the spacecraft could charge to electrostatic potentials ranging from a few volts to tens of volts or more, and it may even develop negative electrostatic potentials near closest approach. In addition, significant permanent magnetic fields from spacecraft components will perturb thermal electron trajectories. Given these effects, electron distribution function (EDF) measurements made by the SWEAP/SPAN electron sensors will be significantly affected. It is thus important to try to understand the extent and nature of such effects, and to remediate them as much as possible. To this end, we have incorporated magnetic fields and a model electrostatic potential field into particle tracing simulations to predict particle trajectories through the near spacecraft environment. These simulations allow us to estimate how the solid angle elements measured by SPAN deflect and stretch in the presence of these fields and therefore how and to what extent EDF measurements will be distorted. In this work, we demonstrate how this technique can be used to produce a `dewarping' correction factor. Further, we show that this factor can correct synthetic datasets simulating the warped EDFs that the SPAN instruments are likely to measure over a wide range of spacecraft potentials and plasma Debye lengths.
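
    A hedged sketch of the 'dewarping' idea described above: particle tracing gives, for each SPAN angular bin, the effective solid angle it subtends after deflection by the spacecraft potential and magnetic fields, and the measured distribution is rescaled by the ratio of nominal to traced solid angle. The bin geometry below is a synthetic placeholder, not instrument data.

```python
import numpy as np

def dewarp_edf(counts: np.ndarray, omega_nominal: np.ndarray,
               omega_traced: np.ndarray) -> np.ndarray:
    """Apply a per-bin correction factor Omega_nominal / Omega_traced."""
    factor = omega_nominal / np.maximum(omega_traced, 1e-12)
    return counts * factor

# Toy example: bins whose traced solid angle shrank are boosted, and vice versa.
omega_nom = np.full(8, 0.10)                       # steradians, nominal
omega_trc = omega_nom * np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.0, 0.95, 1.05])
print(dewarp_edf(np.ones(8), omega_nom, omega_trc))
```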

  6. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    International Nuclear Information System (INIS)

    Calderon, E; Siergiej, D

    2014-01-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size was derived for each detector. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agreed within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to within 3.4% agreement
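
    A minimal numerical sketch of the procedure described above: the small-cone reading is daisy-chained to the 10.4 cm reference field through an intermediate field where both detectors respond reliably, then multiplied by a published Monte Carlo detector correction factor for that cone. The readings, intermediate field size, and correction-factor value are illustrative placeholders, not the paper's data.

```python
def corrected_output_factor(m_cone: float, m_intermediate_small_det: float,
                            m_intermediate_ref_det: float, m_ref_field: float,
                            k_mc: float) -> float:
    """Daisy-chained output factor times a published MC correction factor."""
    daisy_chained = (m_cone / m_intermediate_small_det) * \
                    (m_intermediate_ref_det / m_ref_field)
    return daisy_chained * k_mc

# Example: a 10 mm cone measured with a diode, chained via a 3x3 cm field.
of_10mm = corrected_output_factor(
    m_cone=0.512,                    # diode reading, 10 mm cone (illustrative)
    m_intermediate_small_det=0.790,  # diode reading, 3x3 cm intermediate field
    m_intermediate_ref_det=0.812,    # ion chamber reading, same 3x3 cm field
    m_ref_field=1.000,               # ion chamber reading, 10.4x10.4 cm field
    k_mc=0.962)                      # assumed published MC correction factor
print(round(of_10mm, 3))
```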

  7. Costs to Automate Demand Response - Taxonomy and Results from Field Studies and Programs

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schetrit, Oren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kiliccote, Sila [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cheung, Iris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Li, Becky Z [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-07-31

    During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings, with some discussion of residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems, and applies a standard naming convention and classification or taxonomy for system elements. Median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large, with costs in some cases being an order of magnitude greater or less than the median. This wide range is a result of variations in system age, size of load reduction, sophistication, and type of equipment included in the cost analysis. The costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total cost to install, operate, and maintain these systems. However, recent research suggests that they could be developed at costs similar to those of existing hot-summer DR automation systems. This report considers installation and configuration costs and does not include the costs of owning and operating DR automation systems. Future analysis of the latter costs should include the costs to the building or facility manager as well as to the utility or third-party program manager.

  8. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation to camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
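
    The LED-fiducial step described above can be sketched as follows: subtract the LEDs-off frame from the LEDs-on frame, threshold the difference, and take the centroids of the four largest blobs as the extreme-corner locations that previously required manual clicks. Thresholding and blob filtering are simplified placeholders.

```python
import numpy as np
from scipy import ndimage

def find_led_corners(img_on: np.ndarray, img_off: np.ndarray, thresh: float = 50.0):
    """Return (row, col) centroids of the four brightest LED blobs."""
    diff = np.abs(img_on.astype(float) - img_off.astype(float))
    labels, n = ndimage.label(diff > thresh)
    if n < 4:
        raise RuntimeError("fewer than four LED blobs detected")
    sizes = ndimage.sum(np.ones_like(labels), labels, index=range(1, n + 1))
    biggest = np.argsort(sizes)[-4:] + 1                 # label ids of 4 largest
    return ndimage.center_of_mass(diff, labels, list(biggest))

# Usage: corners = find_led_corners(capture_with_leds(), capture_without_leds())
```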

  9. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting
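
    A hedged sketch of the two measures described above. The information-theoretic quantification of cognitive task load (Conant's model) is not reproduced here; the load values are treated as given inputs, and the numbers are illustrative only.

```python
def system_automation_rate(n_tasks_automated: int, n_tasks_total: int) -> float:
    """Fraction of work-process tasks taken over by the automation."""
    return n_tasks_automated / n_tasks_total

def cognitive_automation_rate(load_manual_bits: float,
                              load_with_automation_bits: float) -> float:
    """Fractional reduction in the operator's cognitive task load."""
    return 1.0 - load_with_automation_bits / load_manual_bits

print(system_automation_rate(12, 40))          # e.g. 0.30
print(cognitive_automation_rate(85.0, 52.0))   # e.g. ~0.39
```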

  10. Automated Testing of Event-Driven Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning

    may be tested by selecting an interesting input (i.e. a sequence of events), and deciding if a failure occurs when the selected input is applied to the event-driven application under test. Automated testing promises to reduce the workload for developers by automatically selecting interesting inputs ... and detect failures. However, it is non-trivial to conduct automated testing of event-driven applications because of, for example, infinite input spaces and the absence of specifications of correct application behavior. In this PhD dissertation, we identify a number of specific challenges when conducting ... automated testing of event-driven applications, and we present novel techniques for solving these challenges. First, we present an algorithm for stateless model-checking of event-driven applications with partial-order reduction, and we show how this algorithm may be used to systematically test web...

  11. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    International Nuclear Information System (INIS)

    Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon

    2012-01-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrated our algorithm outperforms the VC algorithm and outperforms SG and RBV corrections when the segmented MRI is inconsistent (e.g. mis-segmentation, lesions, etc) with the PET image. (paper)

  12. Corrections to classical kinetic and transport theory for a two-temperature, fully ionized plasma in electromagnetic fields

    International Nuclear Information System (INIS)

    Oeien, A.H.

    1977-06-01

    Sets of lower order and higher order kinetic and macroscopic equations are developed for a plasma where collisions are important but electrons and ions are allowed to have different temperatures when transports, due to gradients and fields, set in. Solving the lower order kinetic equations and taking appropriate velocity moments we show that usual classical transports emerge. From the higher order kinetic equations special notice is taken of some new correction terms to the classical transports. These corrections are linear in gradients and fields, some of which are found in a two-temperature state only. (Auth.)

  13. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  14. Automated Agricultural Field Extraction from Multi-temporal Web Enabled Landsat Data

    Science.gov (United States)

    Yan, L.; Roy, D. P.

    2012-12-01

    Agriculture has caused significant anthropogenic surface change. In many regions agricultural field sizes may be increasing to maximize yields and reduce costs resulting in decreased landscape spatial complexity and increased homogenization of land uses with potential for significant biogeochemical and ecological effects. To date, studies of the incidence, drivers and impacts of changing field sizes have not been undertaken over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. The Landsat series of satellites provides near-global coverage, long term, and appropriate spatial resolution (30m) satellite data to document changing field sizes. The recent free availability of all the Landsat data in the U.S. Landsat archive now provides the opportunity to study field size changes in a global and consistent way. Commercial software can be used to extract fields from Landsat data but are inappropriate for large area application because they require considerable human interaction. This paper presents research to develop and validate an automated computational Geographic Object Based Image Analysis methodology to extract agricultural fields and derive field sizes from Web Enabled Landsat Data (WELD) (http://weld.cr.usgs.gov/). WELD weekly products (30m reflectance and brightness temperature) are classified into Satellite Image Automatic Mapper™ (SIAM™) spectral categories and an edge intensity map and a map of the probability of each pixel being agricultural are derived from five years of 52 weeks of WELD and corresponding SIAM™ data. These data are fused to derive candidate agriculture field segments using a variational region-based geometric active contour model. Geometry-based algorithms are used to decompose connected segments belonging to multiple fields into coherent isolated field objects with a divide and conquer strategy to detect and merge partial circle

  15. High magnetic field multipoles generated by superconductor magnetization within a set of nested superconducting correction coils

    International Nuclear Information System (INIS)

    Green, M.A.

    1990-04-01

    Correction elements in colliding beam accelerators such as the SSC can be the source of undesirable higher magnetic field multipoles due to magnetization of the superconductor within the corrector. Quadrupole and sextupole correctors located within the main dipole will produce sextupole and decapole due to magnetization of the superconductor within the correction coils. Lumped nested correction coils can produce a large number of skew and normal magnetization multipoles which may have an adverse effect on a stored beam at injection into a high energy colliding beam machine such as the SSC. 6 refs., 2 figs., 2 tabs

  16. The importance of matched poloidal spectra to error field correction in DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    Paz-Soldan, C., E-mail: paz-soldan@fusion.gat.com; Lanctot, M. J.; Buttery, R. J.; La Haye, R. J.; Strait, E. J. [General Atomics, P.O. Box 85608, San Diego, California 92121 (United States); Logan, N. C.; Park, J.-K.; Solomon, W. M. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Shiraki, D.; Hanson, J. M. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 (United States)

    2014-07-15

    Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable “dominant” mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios, the plasma is found to be, respectively, 7× and 20× less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.

  17. Large scale oil lease automation and electronic custody transfer

    International Nuclear Information System (INIS)

    Price, C.R.; Elmer, D.C.

    1995-01-01

    Typically, oil field production operations have only been automated at fields with long term production profiles and enhanced recovery. The automation generally consists of monitoring and control at the wellhead and centralized facilities. However, Union Pacific Resources Co. (UPRC) has successfully implemented a large scale automation program for rapid-decline primary recovery Austin Chalk wells where purchasers buy and transport oil from each individual wellsite. This project has resulted in two significant benefits. First, operators are using the system to re-engineer their work processes. Second, an inter-company team created a new electronic custody transfer method. This paper will describe: the progression of the company's automation objectives in the area; the field operator's interaction with the system, and the related benefits; the research and development of the new electronic custody transfer method

  18. Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

    Directory of Open Access Journals (Sweden)

    Manuel González-Rivero

    2016-01-01

    Full Text Available Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5%, among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows a rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).

  19. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.
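
    For orientation, a commonly used form of the entropy-corrected holographic dark energy density is shown below; the record above does not quote it explicitly, so this is illustrative. L is the IR cutoff, M_p the reduced Planck mass, c a numerical constant, and alpha, beta the entropy-correction parameters.

```latex
\[
  \rho_{\Lambda} = 3 c^{2} M_p^{2} L^{-2}
                 + \alpha\, L^{-4} \ln\!\left(M_p^{2} L^{2}\right)
                 + \beta\, L^{-4}
\]
```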

  20. Automated drawing generation system

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Kawahata, Junichi; Yoshida, Naoto; Ono, Satoru

    1991-01-01

    Since automated CAD drawing generation systems still require human intervention, improvements were focused on the interactive processing section (data input and correcting operations), which necessitates a vast amount of work. As a result, human intervention was eliminated, which was the original objective of a computerized system. This is the first step taken towards complete automation. The effects of development and commercialization of the system are described below. (1) The interactive processing time required for generating drawings was improved. It was determined that introduction of the CAD system reduced the time required for generating drawings. (2) The difference in skills between workers preparing drawings has been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  1. One-pion exchange current corrections for nuclear magnetic moments in relativistic mean field theory

    International Nuclear Information System (INIS)

    Li Jian; Yao, J.M.; Meng Jie; Arima, Akito

    2011-01-01

    The one-pion exchange current corrections to isoscalar and isovector magnetic moments of double-closed shell nuclei plus and minus one nucleon with A = 15, 17, 39 and 41 have been studied in the relativistic mean field (RMF) theory and compared with previous relativistic and non-relativistic results. It has been found that the one-pion exchange current gives a negligible contribution to the isoscalar magnetic moments but a significant correction to the isovector ones. However, the one-pion exchange current enhances the isovector magnetic moments further and does not improve the corresponding description for the concerned nuclei in the present work. (author)

  2. Safeguards Automated Facility Evaluation (SAFE) methodology

    International Nuclear Information System (INIS)

    Chapman, L.D.; Grady, L.M.; Bennett, H.A.; Sasser, D.W.; Engi, D.

    1978-08-01

    An automated approach to facility safeguards effectiveness evaluation has been developed. This automated process, called Safeguards Automated Facility Evaluation (SAFE), consists of a continuous stream of operational modules for facility characterization, the selection of critical paths, and the evaluation of safeguards effectiveness along these paths. The technique has been implemented on an interactive computer time-sharing system and makes use of computer graphics for the processing and presentation of information. Using this technique, a comprehensive evaluation of a safeguards system can be provided by systematically varying the parameters that characterize the physical protection components of a facility to reflect the perceived adversary attributes and strategy, environmental conditions, and site operational conditions. The SAFE procedure has broad applications in the nuclear facility safeguards field as well as in the security field in general. Any fixed facility containing valuable materials or components to be protected from theft or sabotage could be analyzed using this same automated evaluation technique

  3. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced MCRs, various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to cognitive activities

  4. Automated MAD and MIR structure solution

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    A fully automated procedure for solving MIR and MAD structures has been developed using a scoring scheme to convert the structure-solution process into an optimization problem. Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations

  5. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  6. Top-quark physics as a prime application of automated higher-order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, Christian

    2017-07-15

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte-Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the standard model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider makes it an excellent object of study. Yet its lifetime is very short, and only the high-multiplicity final states of its decay products are observed in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b and e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b H. These calculations have not been performed for the first time. Moreover, the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum for the process

  7. Top-quark physics as a prime application of automated higher-order corrections

    International Nuclear Information System (INIS)

    Weiss, Christian

    2017-07-01

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte-Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the standard model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider makes it an excellent object of study. Yet its lifetime is very short, and only the high-multiplicity final states of its decay products are observed in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b and e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b H. These calculations have not been performed for the first time. Moreover, the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum for the process e⁺e⁻ → b W⁺ anti-b W⁻. All results are obtained with

  8. Automated solid-phase subcloning based on beads brought into proximity by magnetic force.

    Science.gov (United States)

    Hudson, Elton P; Nikoshkov, Andrej; Uhlen, Mathias; Rockberg, Johan

    2012-01-01

    In the fields of proteomics, metabolic engineering and synthetic biology there is a need for high-throughput and reliable cloning methods to facilitate construction of expression vectors and genetic pathways. Here, we describe a new approach for solid-phase cloning in which both the vector and the gene are immobilized to separate paramagnetic beads and brought into proximity by magnetic force. Ligation events were directly evaluated using fluorescent-based microscopy and flow cytometry. The highest ligation efficiencies were obtained when gene- and vector-coated beads were brought into close contact by application of a magnet during the ligation step. An automated procedure was developed using a laboratory workstation to transfer genes into various expression vectors and more than 95% correct clones were obtained in a number of various applications. The method presented here is suitable for efficient subcloning in an automated manner to rapidly generate a large number of gene constructs in various vectors intended for high throughput applications.

  9. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are adopted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should therefore be considered together when determining the appropriate level of automation. Thus, in this paper, we suggest an estimation method that accounts for both effects at the same time. The existing automation-rate concept is limited in that it does not consider the effects of automation on human operators; a new estimation method for the automation rate is therefore suggested to overcome this problem

  10. On the correctness of the thermoluminescent high-temperature ratio (HTR) method for estimating ionization density effects in mixed radiation fields

    International Nuclear Information System (INIS)

    Bilski, Pawel

    2010-01-01

    The high-temperature ratio (HTR) method, which exploits changes in the LiF:Mg,Ti glow-curve due to high-LET radiation, has been used for several years to estimate LET in an unknown radiation field. As TL efficiency is known to decrease after doses of densely ionizing radiation, a LET estimate is used to correct the TLD-measured values of dose. The HTR method is purely empirical and its general correctness is questionable. The validity of the HTR method was investigated by theoretical simulation of various mixed radiation fields. The LET_eff values estimated with the HTR method for mixed radiation fields were found in general to be incorrect, in some cases underestimating the true values of dose-averaged LET by an order of magnitude. The method produced correct estimates of average LET only in cases of almost mono-energetic fields (i.e. in non-mixed radiation conditions). The value of LET_eff found by the HTR method may therefore be treated as a qualitative indicator of increased LET, but not as a quantitative estimator of average LET. However, HTR-based correction of the TLD-measured dose value (HTR-B method) was found to be quite reliable. In all cases studied, application of this technique improved the result. Most of the measured doses fell within 10% of the true values. A further empirical improvement to the method is proposed. One may therefore recommend the HTR-B method to correct for decreased TL efficiency in mixed high-LET fields.

  11. Comparison of automated and manual shielding block fabrication

    International Nuclear Information System (INIS)

    Weeks, K.J.; Fraass, B.A.; McShan, D.L.; Hardybala, S.S.; Hargreaves, E.A.; Lichter, A.S.

    1989-01-01

    This work reports the results of a study comparing computer controlled and manual shielding block cutting. The general problems inherent in automated block cutting have been identified and minimized. A system whose accuracy is sufficient for clinical applications has been developed. The relative accuracy of our automated system versus experienced technician controlled cutting was investigated. In general, it is found that automated cutting is somewhat faster and more accurate than manual cutting for very large fields, but that the reverse is true for most smaller fields. The relative cost effectiveness of automated cutting is dependent on the percentage of computer designed blocks which are generated in the clinical setting. At the present time, the traditional manual method is still favored

  12. Automatic Contextual Text Correction Using the Linguistic Habits Graph LHG

    Directory of Open Access Journals (Sweden)

    Marcin Gadamer

    2009-01-01

    Full Text Available Automatic text correction is an essential problem for today's text processors and editors. This paper introduces a novel algorithm for automating contextual text correction using a Linguistic Habit Graph (LHG), also introduced in this paper. A specialized internet crawler was constructed to search through web sites in order to build a Linguistic Habit Graph from text corpora gathered from Polish web sites. The correction results achieved by this algorithm using the LHG were compared with commercial programs that also provide text correction: Microsoft Word 2007, Open Office Writer 3.0 and the Google search engine. The achieved results of text correction were much better than the corrections made by these commercial tools.
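
    The record above gives no implementation details of the LHG itself. As a rough, hypothetical illustration of graph-based contextual correction of the kind described, the sketch below reduces the "linguistic habit" graph to simple word-bigram counts and scores each candidate word by how well it fits its neighbours; the names and toy corpus are assumptions, not the published algorithm.

        from collections import defaultdict

        def build_habit_graph(corpus_sentences):
            """Build a minimal bigram 'habit' graph: counts of word -> following word."""
            graph = defaultdict(lambda: defaultdict(int))
            for sentence in corpus_sentences:
                words = sentence.lower().split()
                for left, right in zip(words, words[1:]):
                    graph[left][right] += 1
            return graph

        def contextual_score(graph, prev_word, candidate, next_word):
            """Score a candidate by its observed habits with both neighbours."""
            return graph[prev_word][candidate] + graph[candidate][next_word]

        def correct_word(graph, prev_word, word, next_word, candidates):
            """Return the candidate (or the original word) that best fits the context."""
            return max(candidates | {word},
                       key=lambda c: contextual_score(graph, prev_word, c, next_word))

        # Toy usage with an assumed confusion set for a mistyped word.
        graph = build_habit_graph(["the cat sat on the mat", "the cat ate the fish"])
        print(correct_word(graph, "the", "cot", "sat", {"cat", "coat"}))  # -> "cat"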

  13. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations are few and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  14. International Conference Automation : Challenges in Automation, Robotics and Measurement Techniques

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2016-01-01

    This book presents the set of papers accepted for presentation at the International Conference Automation, held in Warsaw, 2-4 March 2016. It contains research results presented by top experts in the fields of industrial automation, control, robotics and measurement techniques. Each chapter presents a thorough analysis of a specific technical problem, which is usually followed by a numerical analysis, simulation, and description of the results of implementing the solution to a real-world problem. The presented theoretical results, practical solutions and guidelines will be valuable both for researchers working in the area of engineering sciences and for practitioners solving industrial problems.

  15. Participation through Automation: Fully Automated Critical Peak Pricing in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David S.; Motegi, Naoya; Kiliccote,Sila; Linkugel, Eric

    2006-06-20

    California electric utilities have been exploring the use of dynamic critical peak prices (CPP) and other demand response programs to help reduce peaks in customer electric loads. CPP is a tariff design to promote demand response. Levels of automation in DR can be defined as follows: Manual Demand Response involves a potentially labor-intensive approach such as manually turning off or changing comfort set points at each equipment switch or controller. Semi-Automated Demand Response involves a pre-programmed demand response strategy initiated by a person via a centralized control system. Fully Automated Demand Response does not involve human intervention, but is initiated at a home, building, or facility through receipt of an external communications signal. The receipt of the external signal initiates pre-programmed demand response strategies. This is referred to as Auto-DR. This paper describes the development, testing, and results from automated CPP (Auto-CPP) as part of a utility project in California. The paper presents the project description and test methodology. This is followed by a discussion of Auto-DR strategies used in the field test buildings. The paper presents a sample Auto-CPP load shape case study, and a selection of the Auto-CPP response data from September 29, 2005. If all twelve sites reached their maximum savings simultaneously, a total of approximately 2 MW of DR would be available from these twelve sites, which represent about two million ft². The average DR was about half that value, at about 1 MW. These savings translate to about 0.5 to 1.0 W/ft² of demand reduction. The authors are continuing field demonstrations and economic evaluations to pursue increasing penetration of automated DR, which has demonstrated the ability to provide a valuable DR resource for California.
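
    The quoted demand-reduction density follows directly from the aggregate figures: dividing the shed load by the roughly two million square feet of floor area gives 2 MW / (2 × 10⁶ ft²) = 1.0 W/ft² at the simultaneous maximum and 1 MW / (2 × 10⁶ ft²) = 0.5 W/ft² on average, consistent with the 0.5 to 1.0 W/ft² range stated above.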

  16. Automated Orthorectification of VHR Satellite Images by SIFT-Based RPC Refinement

    Directory of Open Access Journals (Sweden)

    Hakan Kartal

    2018-06-01

    Full Text Available Raw remotely sensed images contain geometric distortions and cannot be used directly for map-based applications, accurate locational information extraction or geospatial data integration. A geometric correction process must be conducted to minimize the errors related to distortions and achieve the desired location accuracy before further analysis. A considerable number of images might be needed when working over large areas or in temporal domains, in which case manual geometric correction requires more labor and time. To overcome these problems, new algorithms have been developed to make the geometric correction process autonomous. The Scale Invariant Feature Transform (SIFT) algorithm is an image matching algorithm used in remote sensing applications that has received attention in recent years. In this study, the effects of the incidence angle, surface topography and land cover (LC) characteristics on SIFT-based automated orthorectification were investigated at three different study sites with different topographic conditions and LC characteristics using Pleiades very high resolution (VHR) images acquired at different incidence angles. The results showed that the location accuracy of the orthorectified images increased with lower incidence angle images. More importantly, the topographic characteristics had no observable impacts on the location accuracy of SIFT-based automated orthorectification, and the results showed that Ground Control Points (GCPs) are mainly concentrated in the “Forest” and “Semi Natural Area” LC classes. A multi-threaded code was designed to reduce the automated processing time, and the results showed that the process performed 7 to 16 times faster using an automated approach. Analyses performed on various spectral modes of multispectral data showed that the arithmetic data derived from pan-sharpened multispectral images can be used in automated SIFT-based RPC orthorectification.
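
    As an illustration only (the processing chain, thresholds and file names below are assumptions, not those of the study), SIFT-based tie-point generation of the kind that feeds an RPC refinement can be sketched with OpenCV:

        import cv2

        # Hypothetical inputs: a raw VHR band and an already georeferenced reference image.
        raw = cv2.imread("raw_band.tif", cv2.IMREAD_GRAYSCALE)
        ref = cv2.imread("reference_ortho.tif", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp_raw, des_raw = sift.detectAndCompute(raw, None)
        kp_ref, des_ref = sift.detectAndCompute(ref, None)

        # Lowe's ratio test keeps only distinctive correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des_raw, des_ref, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        # Each surviving match is a candidate ground control point: pixel coordinates in
        # the raw scene paired with map coordinates taken from the reference image.
        gcps = [(kp_raw[m.queryIdx].pt, kp_ref[m.trainIdx].pt) for m in good]
        print(f"{len(gcps)} candidate GCPs available for the RPC bias refinement")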

  17. Automated spectro-goniometer: A spherical robot for the field measurement of the directional reflectance of snow

    International Nuclear Information System (INIS)

    Painter, Thomas H.; Paden, Brad; Dozier, Jeff

    2003-01-01

    We describe an automated spectro-goniometer (ASG) that rapidly measures the spectral hemispherical-directional reflectance factor (HDRF) of snow in the field across the wavelength range 0.4≤λ≤2.5 μm. Few measurements of snow's HDRF exist in the literature, in part because of the lack of a portable instrument capable of rapid, repeatable sampling. The ASG is a two-link spherical robot coupled to a field spectroradiometer. The ASG is the first revolute-joint and the first automated field goniometer for use over snow and other smooth surfaces. It is light enough (∼50 kg) to be portable in a sled by an individual. The ASG samples the HDRF at arbitrary angular resolution and a 0.5 Hz sampling rate. The arm attaches to the fixed-point frame 0.65 m above the surface. With vertical and oblique axes, the ASG places the sensor of the field spectroradiometer at any point on the hemisphere above a snow target. In standard usage, the ASG has the sun as the illumination source to facilitate in situ measurements over fragile surfaces not easily transported to the laboratory and to facilitate simultaneous illumination conditions for validation and calibration of satellite retrievals. The kinematics of the ASG is derived using Rodrigues' formula applied to the two-degree-of-freedom arm. We describe the inverse kinematics for the ASG and solve the inverse problem from a given view angle to the necessary rotation about each axis. Its two-dimensional hemispheric sampling space facilitates the measurement of spectral reflectance from snow and other relatively smooth surfaces in any direction. The measurements will be used to validate radiative transfer model results of directional reflectance and to validate/calibrate directional satellite measurements of reflectance from these smooth surfaces
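
    For reference, Rodrigues' rotation formula used in the kinematics above expresses a rotation by an angle \theta about a unit axis \hat{k} as

        R(\hat{k}, \theta) = I + \sin\theta \, [\hat{k}]_\times + (1 - \cos\theta) \, [\hat{k}]_\times^2,

    where [\hat{k}]_\times is the skew-symmetric cross-product matrix of \hat{k}; composing one such rotation per joint axis places the spectroradiometer sensor at the desired point on the hemisphere above the target.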

  18. Joint deformable liver registration and bias field correction for MR-guided HDR brachytherapy.

    Science.gov (United States)

    Rak, Marko; König, Tim; Tönnies, Klaus D; Walke, Mathias; Ricke, Jens; Wybranski, Christian

    2017-12-01

    In interstitial high-dose rate brachytherapy, liver cancer is treated by internal radiation, requiring percutaneous placement of applicators within or close to the tumor. To maximize utility, the optimal applicator configuration is pre-planned on magnetic resonance images. The pre-planned configuration is then implemented via a magnetic resonance-guided intervention. Mapping the pre-planning information onto interventional data would reduce the radiologist's cognitive load during the intervention and could possibly minimize discrepancies between optimally pre-planned and actually placed applicators. We propose a fast and robust two-step registration framework suitable for interventional settings: first, we utilize a multi-resolution rigid registration to correct for differences in patient positioning (rotation and translation). Second, we employ a novel iterative approach alternating between bias field correction and Markov random field deformable registration in a multi-resolution framework to compensate for non-rigid movements of the liver, the tumors and the organs at risk. In contrast to existing pre-correction methods, our multi-resolution scheme can recover bias field artifacts of different extents at marginal computational costs. We compared our approach to deformable registration via B-splines, demons and the SyN method on 22 registration tasks from eleven patients. Results showed that our approach is more accurate than the contenders for liver as well as for tumor tissues. We yield average liver volume overlaps of 94.0 ± 2.7% and average surface-to-surface distances of 2.02 ± 0.87 mm and 3.55 ± 2.19 mm for liver and tumor tissue, respectively. The reported distances are close to (or even below) the slice spacing (2.5 - 3.0 mm) of our data. Our approach is also the fastest, taking 35.8 ± 12.8 s per task. The presented approach is sufficiently accurate to map information available from brachytherapy pre-planning onto interventional data. It
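
    The alternating structure of the second step can be summarized in a short sketch. The code below is only a schematic stand-in: it uses SimpleITK's N4 filter and demons registration in place of the authors' bias model and Markov random field deformable registration, assumes rigidly pre-aligned inputs, and omits the multi-resolution pyramid.

        import SimpleITK as sitk

        def alternate_bias_and_deformable(fixed, moving, n_outer=3):
            """Alternate bias-field correction of the moving image with a
            deformable registration onto the fixed image (illustrative only)."""
            fixed = sitk.Cast(fixed, sitk.sitkFloat32)
            warped = sitk.Cast(moving, sitk.sitkFloat32)
            for _ in range(n_outer):
                # (a) estimate and remove the smooth intensity bias on the moving image
                mask = sitk.OtsuThreshold(warped, 0, 1, 200)
                corrected = sitk.N4BiasFieldCorrection(warped, mask)
                # (b) non-rigid registration of the corrected image onto the fixed image
                demons = sitk.DemonsRegistrationFilter()
                demons.SetNumberOfIterations(50)
                displacement = demons.Execute(fixed, corrected)
                # warp for the next bias estimate; a full implementation would compose
                # the incremental displacement fields instead of discarding them
                warped = sitk.Resample(corrected, fixed,
                                       sitk.DisplacementFieldTransform(displacement),
                                       sitk.sitkLinear, 0.0, sitk.sitkFloat32)
            return warped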

  19. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/ quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy

  20. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  1. Realtime Automation Networks in moVing industrial Environments

    Directory of Open Access Journals (Sweden)

    Rafael Leidinger

    2012-04-01

    Full Text Available Radio-based wireless data communication has made the realization of new technical solutions possible in many fields of automation technology (AT). For about ten years, a constant, disproportionate growth of wireless technologies has been observed in automation technology. However, it has become apparent that, especially for AT, conventional technologies of office automation are unsuitable and/or not manageable. The employment of mobile services in industrial automation technology has the potential for significant cost and time savings. This leads to increased productivity in various fields of AT, for example in factory and process automation or in production logistics. In this paper, technologies and solutions for an automation-suited supply of mobile wireless services are introduced under the criteria of real-time suitability, IT security and service orientation. Emphasis is put on the investigation and development of wireless convergence layers for different radio technologies, on the central provision of support services for an easy-to-use, central, backup-enabled management of combined wired/wireless networks and on the study of integrability in a Profinet real-time Ethernet network [1].

  2. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143-155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060-1075; in FreeSurfer); and Brain Surface Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...

  3. State-of-the art comparability of corrected emission spectra. 2. Field laboratory assessment of calibration performance using spectral fluorescence standards.

    Science.gov (United States)

    Resch-Genger, Ute; Bremser, Wolfram; Pfeifer, Dietmar; Spieles, Monika; Hoffmann, Angelika; DeRose, Paul C; Zwinkels, Joanne C; Gauthier, François; Ebert, Bernd; Taubert, R Dieter; Voigt, Jan; Hollandt, Jörg; Macdonald, Rainer

    2012-05-01

    In the second part of this two-part series on the state-of-the-art comparability of corrected emission spectra, we have extended this assessment to the broader community of fluorescence spectroscopists by involving 12 field laboratories that were randomly selected on the basis of their fluorescence measuring equipment. These laboratories performed a reference material (RM)-based fluorometer calibration with commercially available spectral fluorescence standards following a standard operating procedure that involved routine measurement conditions and the data evaluation software LINKCORR developed and provided by the Federal Institute for Materials Research and Testing (BAM). This instrument-specific emission correction curve was subsequently used for the determination of the corrected emission spectra of three test dyes, X, QS, and Y, revealing an average accuracy of 6.8% for the corrected emission spectra. This compares well with the relative standard uncertainties of 4.2% for physical standard-based spectral corrections demonstrated in the first part of this study (previous paper in this issue) involving an international group of four expert laboratories. The excellent comparability of the measurements of the field laboratories also demonstrates the effectiveness of RM-based correction procedures.
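
    The principle behind such an RM-based calibration is simple: the certified emission spectrum of the standard is divided by the spectrum actually recorded on the instrument, and the resulting wavelength-dependent correction curve is applied to subsequent test-dye spectra. A minimal numerical sketch (assuming all spectra share one wavelength grid; this is not the LINKCORR implementation):

        import numpy as np

        def emission_correction_curve(certified, measured):
            """Instrument-specific spectral correction: certified / measured spectrum."""
            return certified / measured

        def correct_spectrum(raw_spectrum, correction):
            """Apply the correction curve and renormalize to unit maximum."""
            corrected = raw_spectrum * correction
            return corrected / corrected.max()

        # Toy example on a common wavelength grid (arbitrary units).
        wl = np.linspace(400, 700, 301)
        certified = np.exp(-((wl - 550) / 60.0) ** 2)   # known standard spectrum
        response = 1.0 - 0.001 * (wl - 400)             # instrument sensitivity falls off with wavelength
        measured = certified * response                 # what the fluorometer records
        correction = emission_correction_curve(certified, measured)

        dye_true = np.exp(-((wl - 600) / 40.0) ** 2)
        dye_corrected = correct_spectrum(dye_true * response, correction)
        print(np.allclose(dye_corrected, dye_true))     # True: the distortion is removed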

  4. A computational framework for automation of point defect calculations

    International Nuclear Information System (INIS)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    2017-01-01

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. The framework provides an effective and efficient method for defect-structure generation and for the creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band-filling correction for shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
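
    For context, the finite-size corrections listed above enter the standard expression for the formation energy of a defect D in charge state q (a generic formula from the defect literature, not one specific to this package):

        E_f[D^q] = E_{tot}[D^q] - E_{tot}[bulk] - \sum_i n_i \mu_i + q (E_{VBM} + E_F) + E_{corr},

    where n_i atoms with chemical potential \mu_i are added (n_i > 0) or removed (n_i < 0) to create the defect, E_F is the Fermi level referenced to the valence-band maximum E_{VBM}, and E_{corr} collects the potential-alignment, image-charge and band-filling terms that the framework computes.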

  5. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has degraded operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation introduction that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted to derive the shortest working time by considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  6. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  7. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed, with a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at finished status. Publicly available genomes can be readily downloaded; however, there are challenges to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors by comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation for large-scale genome data prior to downstream analyses.

  8. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images containing 255 diatom particles were examined. Of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software). Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, thus highlighting that the software has an approximate five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  9. Recent progress in the field of automated welding applied to maintenance activities

    International Nuclear Information System (INIS)

    Cullafroz, M.

    2004-01-01

    Automated and robot welding has five advantages compared to manual welding: -) under some conditions the automated circular welding does not require requalification testing as manual welding does, -) welding heads in robots have a reduced size compared to manual gears so they can enter and treat complex piping, -) by using an adequate viewing system the operator can be more than 10 meters away from the welding site, which means that the radiation dose he receives is cut by a factor of 1.5 to 2, -) whatever the configuration is, the deposition rate in automated welding stays high, the quality standard is steady and the risk of repair is low, -) a gain in productivity if adequate equipment is used. In general, automated welding requires a TIG welding process and is applied in maintenance activities to: -) the main primary system and other circuits in stainless austenitic steels, -) the main secondary system and other circuits in low-percentage carbon steels, and -) the closure of spent fuel canisters. An application to the repair of BWR pipes is shown. (A.C.)

  10. Sci—Fri AM: Mountain — 01: Validation of a new formalism and the related correction factors on output factor determination for small photon fields

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yizhen; Younge, Kelly; Nielsen, Michelle; Mutanga, Theodore [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Cui, Congwu [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, ON (Canada); Das, Indra J. [Radiation Oncology Dept., Indiana University- School of Medicine, Indianapolis, IN (United States)

    2014-08-15

    Small-field dosimetry measurements, including output factors, are difficult due to the lack of charged-particle equilibrium, occlusion of the radiation source, the finite size of detectors, and non-water equivalence of detector components. With available detectors, significant variations can be measured that would lead to incorrect dose delivered to patients. The IAEA/AAPM have provided a framework and formulation to correct the detector response in small photon fields. Monte Carlo derived correction factors for some commonly used small-field detectors are now available; however, validation had not been performed prior to this study. An Exradin A16 chamber, EDGE detector and SFD detector were used to perform the output factor measurement for a series of conical fields (5–30 mm) on a Varian iX linear accelerator. Discrepancies up to 20%, 10% and 6% were observed for the 5, 7.5 and 10 mm cones between the initial output factors measured by the EDGE detector and the A16 ion chamber, while the discrepancies for the conical fields larger than 10 mm were less than 4%. After the application of the correction, the output factors agree with each other to within 1%. Caution is needed when determining the output factors for small photon fields, especially for fields 10 mm in diameter or smaller. More than one type of detector should be used, each with proper corrections applied to the measurement results. It is concluded that with the application of correction factors to appropriately chosen detectors, output can be measured accurately for small fields.
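
    In the IAEA/AAPM small-field formalism referred to above, the ratio of detector readings is converted into a field output factor through a detector- and field-specific correction factor,

        \Omega_{Q_clin,Q_msr}^{f_clin,f_msr} = ( M_{Q_clin}^{f_clin} / M_{Q_msr}^{f_msr} ) \, k_{Q_clin,Q_msr}^{f_clin,f_msr},

    where M is the detector reading in the clinical (small) field f_clin and in the machine-specific reference field f_msr, and k is the (typically Monte Carlo derived) correction factor; it is the application of such k factors that brings the EDGE, SFD and A16 measurements into agreement to within 1%.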

  11. Variational-integral perturbation corrections of some lower excited states for hydrogen atoms in magnetic fields

    International Nuclear Information System (INIS)

    Yuan Lin; Zhou Ben-Hu; Zhao Yun-Hui; Xu Jun; Hai Wen-Hua

    2012-01-01

    A variational-integral perturbation method (VIPM) is established by combining the variational perturbation with the integral perturbation. The first-order corrected wave functions are constructed, and the second-order energy corrections for the ground state and several lower excited states are calculated by applying the VIPM to the hydrogen atom in a strong uniform magnetic field. Our calculations demonstrated that the energy calculated by the VIPM only shows a negative value, which indicates that the VIPM method is more accurate than the other methods. Our study indicated that the VIPM can not only increase the accuracy of the results but also keep the convergence of the wave functions

  12. Instrumentation, Field Network and Process Automation for the Cryogenic System of the LHC Test String

    CERN Document Server

    Suraci, A; Balle, C; Blanco-Viñuela, E; Casas-Cubillos, J; Gomes, P; Pelletier, S; Serio, L; Vauthier, N; Balle, Ch.

    2001-01-01

    CERN is now setting up String 2, a full-size prototype of a regular cell of the LHC arc. It is composed of two quadrupole, six dipole magnets, and a separate cryogenic distribution line (QRL) for the supply and recovery of the cryogen. An electrical feed box (DFB), with up to 38 High Temperature Superconducting (HTS) leads, powers the magnets. About 700 sensors and actuators are distributed along four Profibus DP and two Profibus PA field buses. The process automation is handled by two controllers, running 126 Closed Control Loops (CCL). This paper describes the cryogenic control system, associated instrumentation, and their commissioning.

  13. Corrections for a constant radial magnetic field in the muon g - 2 and electric-dipole-moment experiments in storage rings

    Energy Technology Data Exchange (ETDEWEB)

    Silenko, Alexander J. [Belarusian State University, Research Institute for Nuclear Problems, Minsk (Belarus); Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation)

    2017-10-15

    We calculate the corrections for constant radial magnetic field in muon g - 2 and electric-dipole-moment experiments in storage rings. While the correction is negligible for the current generation of g - 2 experiments, it affects the upcoming muon electric-dipole-moment experiment at Fermilab. (orig.)

  14. Field of view extension and truncation correction for MR-based human attenuation correction in simultaneous MR/PET imaging

    International Nuclear Information System (INIS)

    Blumhagen, Jan O.; Ladebeck, Ralf; Fenchel, Matthias; Braun, Harald; Quick, Harald H.; Faul, David; Scheffler, Klaus

    2014-01-01

    Purpose: In quantitative PET imaging, it is critical to accurately measure and compensate for the attenuation of the photons absorbed in the tissue. While in PET/CT the linear attenuation coefficients can be easily determined from a low-dose CT-based transmission scan, in whole-body MR/PET the computation of the linear attenuation coefficients is based on the MR data. However, a constraint of the MR-based attenuation correction (AC) is the MR-inherent field-of-view (FoV) limitation due to static magnetic field (B₀) inhomogeneities and gradient nonlinearities. Therefore, the MR-based human AC map may be truncated or geometrically distorted toward the edges of the FoV and, consequently, the PET reconstruction with MR-based AC may be biased. This is especially relevant laterally, where the patient's arms rest beside the body and are not fully covered by the MR FoV. Methods: A method is proposed to extend the MR FoV by determining an optimal readout gradient field which locally compensates B₀ inhomogeneities and gradient nonlinearities. This technique was used to reduce truncation in AC maps of 12 patients, and the impact on the PET quantification was analyzed and compared to truncated data without applying the FoV extension and additionally to an established approach of PET-based FoV extension. Results: The truncation artifacts in the MR-based AC maps were successfully reduced in all patients, and the mean body volume was thereby increased by 5.4%. In some cases large patient-dependent changes in SUV of up to 30% were observed in individual lesions when compared to the standard truncated attenuation map. Conclusions: The proposed technique successfully extends the MR FoV in MR-based attenuation correction and shows an improvement of PET quantification in whole-body MR/PET hybrid imaging. In comparison to the PET-based completion of the truncated body contour, the proposed method is also applicable to specialized PET tracers with little uptake in the arms and might reduce the

  15. Automated system for review of radiotherapy treatment sheets

    International Nuclear Information System (INIS)

    Collado Chamorro, P.; Sanz Freire, C. J.; Vazquez Galinanes, A.; Diaz Pascual, V.; Gomez amez, J.; Martinez Sanchez, S.; Ossola Lentati, G. A.

    2011-01-01

    In many modern radiotherapy departments, the treatment sheet is beginning to be implemented in electronic form. Our department has developed an automated review system that checks the following parameters: correct completion of the treatment, number of sessions and cumulative dose administered. Likewise, treatments delivered on a unit other than the allocated one and overwritten table parameters are flagged.

  16. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create

  17. Analysis and correction of intrinsic non-axisymmetric magnetic fields in high-β DIII-D plasmas

    International Nuclear Information System (INIS)

    Garofalo, A.M.; La Haye, R.J.; Scoville, J.T.

    2002-01-01

    Rapid plasma toroidal rotation, sufficient for stabilization of the n=1 resistive wall mode, can be sustained by improving the axisymmetry of the toroidal magnetic field geometry of DIII-D. The required symmetrization is determined experimentally both by optimizing currents in external n=1 correction coils with respect to the plasma rotation, and by use of the n=1 magnetic feedback to detect and minimize the plasma response to non-axisymmetric fields as β increases. Both methods point to an intrinsic ∼7 G (0.03% of the toroidal field), m/n=2/1 resonant helical field at the q=2 surface as the cause of the plasma rotation slowdown above the no-wall β limit. The drag exerted by this field on the plasma rotation is consistent with the behaviour of 'slipping' in a simple induction motor model. (author)

  18. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedial actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As the main result, this technique will provide the design engineer with decision tables for fault handling...

  19. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, Mogens; Borch, Ole; Bagnoli, F.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedial actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As the main result, this technique will provide the design engineer with decision tables for fault handling...
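
    The matrix formulation of FMEA used in both of these records propagates component failure modes to higher-level effects through Boolean matrices, so that chains of effects are obtained by matrix multiplication. A toy sketch of that idea (hypothetical component and effect names, not the actual software module):

        import numpy as np

        # Rows: failure modes of a component; columns: effects seen by the next level.
        # A True entry means "this failure mode produces this effect".
        sensor_fmea = np.array([[True, False],    # "stuck output" -> "wrong reading"
                                [False, True]])   # "no output"    -> "signal lost"
        controller_fmea = np.array([[True],       # "wrong reading" -> "bad actuation"
                                    [True]])      # "signal lost"   -> "bad actuation"

        # Failure propagation through the interconnection is Boolean matrix multiplication:
        # a top-level effect occurs if any chain of intermediate effects leads to it.
        system = (sensor_fmea.astype(int) @ controller_fmea.astype(int)) > 0

        for mode, row in zip(["sensor stuck output", "sensor no output"], system):
            if row[0]:
                print(f"{mode} -> bad actuation")   # decision-table style output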

  20. FAST AUTOMATED DECOUPLING AT RHIC

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.J.

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated coupling correction application, iDQmini, has been developed for RHIC routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program iDQmini provides options of automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (phase lock loop), the high frequency Schottky system and the tune meter. It also supports tune and skew quadrupole scans, finds the minimum tune separation, displays the real-time results and interfaces with the RHIC control system. We summarize the capabilities of the coupling correction application iDQmini, and discuss the operational protections incorporated in the program

  1. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Havlůj, F.; Hejzlar, J.; Vočka, R.

    2013-01-01

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify, that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces his task to defining fuel pin types, enrichments, assembly maps and operational parameters all through a very nice and user-friendly GUI. The second part in reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable html format. Using this set of tools for reload safety analysis simplifies

  2. On the roles of direct feedback and error field correction in stabilizing resistive-wall modes

    International Nuclear Information System (INIS)

    In, Y.; Bogatu, I.N.; Kim, J.S.; Garofalo, A.M.; Jackson, G.L.; La Haye, R.J.; Schaffer, M.J.; Strait, E.J.; Lanctot, M.J.; Reimerdes, H.; Marrelli, L.; Martin, P.; Okabayashi, M.

    2010-01-01

    Active feedback control in the DIII-D tokamak has fully stabilized the current-driven ideal kink resistive-wall mode (RWM). While complete stabilization is known to require both low frequency error field correction (EFC) and high frequency feedback, unambiguous identification has been made about the distinctive role of each in a fully feedback-stabilized discharge. Specifically, the role of direct RWM feedback, which nullifies the RWM perturbation in a time scale faster than the mode growth time, cannot be replaced by low frequency EFC, which minimizes the lack of axisymmetry of external magnetic fields. (letter)

  3. Quantum corrections to Schwarzschild black hole

    Energy Technology Data Exchange (ETDEWEB)

    Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)

    2017-04-15

    Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)

  4. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van ' t; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying throughplane and inplane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  5. Automated registration of multispectral MR vessel wall images of the carotid artery

    International Nuclear Information System (INIS)

    Klooster, R. van 't; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der; Klein, S.; Kwee, R. M.; Kooi, M. E.

    2013-01-01

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying throughplane and inplane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and
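
    A minimal sketch of the kind of multi-resolution, mutual-information-driven registration evaluated in this study is shown below. SimpleITK is used purely for illustration; the transform, metric settings and file names are assumptions and do not reproduce the authors' pipeline or parameter choices.

        import SimpleITK as sitk

        def register_sequences(fixed, moving):
            """Rigid, three-level registration of one vessel-wall sequence to another
            using a Mattes mutual information metric (illustrative settings only)."""
            reg = sitk.ImageRegistrationMethod()
            reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
            reg.SetMetricSamplingStrategy(reg.RANDOM)
            reg.SetMetricSamplingPercentage(0.1)
            reg.SetInterpolator(sitk.sitkLinear)
            reg.SetOptimizerAsRegularStepGradientDescent(
                learningRate=1.0, minStep=1e-4, numberOfIterations=200)
            reg.SetOptimizerScalesFromPhysicalShift()
            reg.SetShrinkFactorsPerLevel([4, 2, 1])        # coarse-to-fine pyramid
            reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])
            initial = sitk.CenteredTransformInitializer(
                fixed, moving, sitk.Euler3DTransform(),
                sitk.CenteredTransformInitializerFilter.GEOMETRY)
            reg.SetInitialTransform(initial, inPlace=False)
            return reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                               sitk.Cast(moving, sitk.sitkFloat32))

        # Hypothetical usage for two carotid vessel-wall sequences:
        # t1w = sitk.ReadImage("t1w_tse.nii.gz"); tof = sitk.ReadImage("tof.nii.gz")
        # tx = register_sequences(t1w, tof)
        # aligned = sitk.Resample(tof, t1w, tx, sitk.sitkLinear, 0.0)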

  6. Automated mass correction and data interpretation for protein open-access liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Wagner, Craig D; Hall, John T; White, Wendy L; Miller, Luke A D; Williams, Jon D

    2007-02-01

    Characterization of recombinant protein purification fractions and final products by liquid chromatography-mass spectrometry (LC/MS) are requested more frequently each year. A protein open-access (OA) LC/MS system was developed in our laboratory to meet this demand. This paper compares the system that we originally implemented in our facilities in 2003 to the one now in use, and discusses, in more detail, recent enhancements that have improved its robustness, reliability, and data reporting capabilities. The system utilizes instruments equipped with reversed-phase chromatography and an orthogonal accelerated time-of-flight mass spectrometer fitted with an electrospray source. Sample analysis requests are accomplished using a simple form on a web-enabled laboratory information management system (LIMS). This distributed form is accessible from any intranet-connected company desktop computer. Automated data acquisition and processing are performed using a combination of in-house (OA-Self Service, OA-Monitor, and OA-Analysis Engine) and vendor-supplied programs (AutoLynx, and OpenLynx) located on acquisition computers and off-line processing workstations. Analysis results are then reported via the same web-based LIMS. Also presented are solutions to problems not addressed on commercially available, small-molecule OA-LC/MS systems. These include automated transforming of mass-to-charge (m/z) spectra to mass spectra and automated data interpretation that considers minor variants to the protein sequence-such as common post-translational modifications (PTMs). Currently, our protein OA-LC/MS platform runs on five LC/MS instruments located in three separate GlaxoSmithKline R&D sites in the US and UK. To date, more than 8000 protein OA-LC/MS samples have been analyzed. With these user friendly and highly automated OA systems in place, mass spectrometry plays a key role in assessing the quality of recombinant proteins, either produced at our facilities or bought from external

  7. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images.

    Science.gov (United States)

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C; Reynolds, Rebecca M; Semple, Scott I; Boardman, James P

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.
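    The core of the localization step is a sliding window that scores every candidate sub-volume and keeps the best one. The sketch below illustrates that scoring loop only; the 3D HOG descriptor and the trained scorer from the paper are not reproduced here, so `score_fn` is a hypothetical stand-in.

```python
# Schematic sketch of sliding-window brain localization: every candidate
# window gets a score from a feature-based scorer and the best window wins.
# `score_fn` stands in for the paper's 3D-HOG-based classifier.
import numpy as np

def best_window(volume, win=(32, 64, 64), step=16, score_fn=None):
    best, best_score = None, -np.inf
    D, H, W = volume.shape
    for z in range(0, D - win[0] + 1, step):
        for y in range(0, H - win[1] + 1, step):
            for x in range(0, W - win[2] + 1, step):
                patch = volume[z:z + win[0], y:y + win[1], x:x + win[2]]
                s = score_fn(patch) if score_fn else patch.mean()
                if s > best_score:
                    best, best_score = (z, y, x), s
    return best, best_score   # corner of the winning window and its score
```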

  8. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Ahmed Serag

    2017-01-01

    Full Text Available Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.

  9. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, as well as a negative effect known as out-of-the-loop (OOTL). Thus, before introducing automation in the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduction in human cognitive load, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested estimation method. The approach is expected to yield an appropriate proportion of automation that avoids the OOTL problem while having maximum efficacy at the same time

  10. Novel insights in agent-based complex automated negotiation

    CERN Document Server

    Lopez-Carmona, Miguel; Ito, Takayuki; Zhang, Minjie; Bai, Quan; Fujita, Katsuhide

    2014-01-01

    This book focuses on all aspects of complex automated negotiations, which are studied in the field of autonomous agents and multi-agent systems. This book consists of two parts. I: Agent-Based Complex Automated Negotiations, and II: Automated Negotiation Agents Competition. The chapters in Part I are extended versions of papers presented at the 2012 international workshop on Agent-Based Complex Automated Negotiation (ACAN), after peer reviews by three Program Committee members. Part II examines in detail ANAC 2012 (The Third Automated Negotiating Agents Competition), in which automated agents that have different negotiation strategies and are implemented by different developers are automatically negotiated in the several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of universities and research institutes across the world, are evaluated in tournament style. The purpose of the competition is to steer the research in the area of bilate...

  11. Trust in automation and meta-cognitive accuracy in NPP operating crews

    Energy Technology Data Exchange (ETDEWEB)

    Skraaning Jr, G.; Miberg Skjerve, A. B. [OECD Halden Reactor Project, PO Box 173, 1751 Halden (Norway)

    2006-07-01

    Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust, underestimated it under low levels of trust, and showed realistic self-assessment under intermediate levels of trust in automation. The result was interpreted to suggest that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)

  12. Trust in automation and meta-cognitive accuracy in NPP operating crews

    International Nuclear Information System (INIS)

    Skraaning Jr, G.; Miberg Skjerve, A. B.

    2006-01-01

    Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust, underestimated it under low levels of trust, and showed realistic self-assessment under intermediate levels of trust in automation. The result was interpreted to suggest that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)

  13. Born--Infeld theory of electroweak and gravitational fields: Possible correction to Newton and Coulomb laws

    OpenAIRE

    Palatnik, Dmitriy

    2002-01-01

    In this note one suggests a possibility of direct observation of the $\theta$-parameter, introduced in the Born--Infeld theory of electroweak and gravitational fields, developed in quant-ph/0202024. Namely, one may treat $\theta$ as a universal constant, responsible for correction to the Coulomb and Newton laws, allowing direct interaction between electrical charges and masses.

  14. On the covariant formalism of the effective field theory of gravity and leading order corrections

    DEFF Research Database (Denmark)

    Codello, Alessandro; Jain, Rajeev Kumar

    2016-01-01

    We construct the covariant effective field theory of gravity as an expansion in inverse powers of the Planck mass, identifying the leading and next-to-leading quantum corrections. We determine the form of the effective action for the cases of pure gravity with cosmological constant as well as gravity coupled to matter. By means of heat kernel methods we renormalize and compute the leading quantum corrections to quadratic order in a curvature expansion. The final effective action in our covariant formalism is generally non-local and can be readily used to understand the phenomenology on different spacetimes. In particular, we point out that on curved backgrounds the observable leading quantum gravitational effects are less suppressed than on Minkowski spacetime.

  15. Quantum gravitational corrections for spinning particles

    International Nuclear Information System (INIS)

    Fröb, Markus B.

    2016-01-01

    We calculate the quantum corrections to the gauge-invariant gravitational potentials of spinning particles in flat space, induced by loops of both massive and massless matter fields of various types. While the corrections to the Newtonian potential induced by massless conformal matter for spinless particles are well known, and the same corrections due to massless minimally coupled scalars http://dx.doi.org/10.1088/0264-9381/27/24/245008, massless non-conformal scalars http://dx.doi.org/10.1103/PhysRevD.87.104027 and massive scalars, fermions and vector bosons http://dx.doi.org/10.1103/PhysRevD.91.064047 have been recently derived, spinning particles receive additional corrections which are the subject of the present work. We give both fully analytic results valid for all distances from the particle, and present numerical results as well as asymptotic expansions. At large distances from the particle, the corrections due to massive fields are exponentially suppressed in comparison to the corrections from massless fields, as one would expect. However, a surprising result of our analysis is that close to the particle itself, on distances comparable to the Compton wavelength of the massive fields running in the loops, these corrections can be enhanced with respect to the massless case.

  16. Human-centred automation programme: review of experiment related studies

    International Nuclear Information System (INIS)

    Grimstad, Tone; Andresen, Gisle; Skjerve, Ann Britt Miberg

    2000-04-01

    Twenty-three empirical studies concerning automation and performance have been reviewed. The purposes of the review are to support experimental studies in the Human-Centred Automation (HCA) programme and to develop a general theory on HCA. Each study was reviewed with regard to twelve study characteristics: domain, type of study, purpose, definition of automation, variables, theoretical basis, models of operator performance, methods applied, experimental design, outcome, stated scope of results, strengths and limitations. Seven of the studies involved domain experts, the rest used students as participants. The majority of the articles originated from the aviation domain: only the study conducted in HAMMLAB considered process control in power plants. In the experimental studies, the independent variable was level of automation, or reliability of automation, while the most common dependent variables were workload, situation awareness, complacency, trust, and criteria of performance, e.g., number of correct responses or response time. Although the studies highlight important aspects of human-automation interaction, it is still unclear how system performance is affected. Nevertheless, the fact that many factors seem to be involved is taken as support for the system-oriented approach of the HCA programme. In conclusion, the review provides valuable input both to the design of experiments and to the development of a general theory. (Author). refs

  17. Corrections for a constant radial magnetic field in the muon g-2 and electric-dipole-moment experiments in storage rings

    Science.gov (United States)

    Silenko, Alexander J.

    2017-10-01

    We calculate the corrections for a constant radial magnetic field in muon g-2 and electric-dipole-moment experiments in storage rings. While the correction is negligible for the current generation of g-2 experiments, it affects the upcoming muon electric-dipole-moment experiment at Fermilab.

  18. Automated quantification of proliferation with automated hot-spot selection in phosphohistone H3/MART1 dual-stained stage I/II melanoma.

    Science.gov (United States)

    Nielsen, Patricia Switten; Riber-Hansen, Rikke; Schmidt, Henrik; Steiniche, Torben

    2016-04-09

    Staging of melanoma includes quantification of a proliferation index, i.e., presumed melanocytic mitoses of H&E stains are counted manually in hot spots. Yet, its reproducibility and prognostic impact increase with immunohistochemical dual staining for phosphohistone H3 (PHH3) and MART1, which also may enable fully automated quantification by image analysis. To ensure manageable workloads and repeatable measurements in modern pathology, the study aimed to present an automated quantification of proliferation with automated hot-spot selection in PHH3/MART1-stained melanomas. Formalin-fixed, paraffin-embedded tissue from 153 consecutive stage I/II melanoma patients was immunohistochemically dual-stained for PHH3 and MART1. Whole slide images were captured, and the number of PHH3/MART1-positive cells was manually and automatically counted in the global tumor area and in a manually and automatically selected hot spot, i.e., a fixed 1-mm² square. Bland-Altman plots and hypothesis tests compared manual and automated procedures, and the Cox proportional hazards model established their prognostic impact. The mean difference between manual and automated global counts was 2.9 cells/mm² (P = 0.0071) and 0.23 cells per hot spot (P = 0.96) for automated counts in manually and automatically selected hot spots. In 77 % of cases, manual and automated hot spots overlapped. Fully manual hot-spot counts yielded the highest prognostic performance with an adjusted hazard ratio of 5.5 (95 % CI, 1.3-24, P = 0.024) as opposed to 1.3 (95 % CI, 0.61-2.9, P = 0.47) for automated counts with automated hot spots. The automated index and automated hot-spot selection were highly correlated with their manual counterparts, but altogether their prognostic impact was noticeably reduced. Because correct recognition of only one PHH3/MART1-positive cell seems important, extremely high sensitivity and specificity of the algorithm is required for prognostic purposes. Thus, automated
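    The automated hot spot in this study is a fixed 1-mm² square placed where the density of positive cells is highest. A hedged sketch of that selection step, assuming the positive-cell coordinates have already been detected (the search step size is an illustrative choice):

```python
# Hedged sketch of automated hot-spot selection: given (x, y) positions of
# detected PHH3/MART1-positive cells (in mm), find the fixed 1-mm^2 square
# that contains the most cells. Step size and brute-force search are
# illustrative, not the study's implementation.
import numpy as np

def select_hot_spot(cells_mm, square=1.0, step=0.05):
    cells = np.asarray(cells_mm, dtype=float)          # shape (N, 2)
    x0, x1 = cells[:, 0].min(), cells[:, 0].max()
    y0, y1 = cells[:, 1].min(), cells[:, 1].max()
    xs = np.arange(x0, max(x0, x1 - square) + step, step)
    ys = np.arange(y0, max(y0, y1 - square) + step, step)
    best_corner, best_count = (x0, y0), -1
    for x in xs:
        for y in ys:
            inside = ((cells[:, 0] >= x) & (cells[:, 0] < x + square) &
                      (cells[:, 1] >= y) & (cells[:, 1] < y + square))
            if inside.sum() > best_count:
                best_corner, best_count = (x, y), int(inside.sum())
    return best_corner, best_count   # top-left corner of hot spot, cell count
```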

  19. Practical automation for mature producing areas

    International Nuclear Information System (INIS)

    Luppens, J.C.

    1995-01-01

    Successful installation and operation of supervisory control and data acquisition (SCADA) systems on two US gulf coast platforms prompted the installation of the first SCADA, or automation, system in Oklahoma in 1989. The initial installation consisted of four remote terminal units (RTU's) at four beam-pumped leases and a PC-based control system communicating by means of a 900-MHz data repeater. This first installation was a building block for additional wells to be automated, and then additional systems, consisting of RTU's, a PC, and a data repeater, were installed. By the end of 1992 there were 98 RTU's operating on five separation systems, and additional RTU's are being installed on a regular basis. This paper outlines the logical development of automation systems on properties in Oklahoma operated by Phillips Petroleum Co. Those factors critical to the success of the effort are (1) designing data-gathering and control capability in conjunction with the field operations staff to meet and not exceed their needs; (2) selection of a computer operating system and automation software package; (3) selection of computer, RTU, and end-device hardware; and (4) continuous involvement of the field operations staff in the installation, operation, and maintenance of the systems. Additionally, specific tangible and intangible results are discussed

  20. Automated measurement of spatial preference in the open field test with transmitted lighting.

    Science.gov (United States)

    Kulikov, Alexander V; Tikhonova, Maria A; Kulikov, Victor A

    2008-05-30

    A new modification of the open field test was designed to improve its automation. The main innovations were (1) transmitted lighting and (2) estimation of the probability of finding pixels associated with an animal in a selected region of the arena as an objective index of spatial preference. Transmitted (inverted) lighting significantly improved the contrast between the animal and the arena and allowed white animals to be tracked with an efficacy similar to that for colored ones. Probability as a measure of preference for a selected region was mathematically proven and experimentally verified. A good correlation between probability and classic indices of spatial preference (number of region entries and time spent therein) was shown. The algorithm for calculating the probability of finding animal-associated pixels in the selected region was implemented in the EthoStudio software. Significant interstrain differences in locomotion and central-zone preference (an index of anxiety) were shown using the inverted lighting and the EthoStudio software in mice of six inbred strains. The effects of arena shape (circle or square) and of the presence of a novel object in the center of the arena on open field behavior in mice were also studied.
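    The preference index described above is, in essence, the fraction of animal-associated pixels that fall inside a chosen region across the recording. A minimal sketch, assuming binary tracking frames and a centred square region (geometry chosen for illustration):

```python
# Hedged sketch of the spatial-preference index: probability of finding
# animal-associated pixels inside a chosen region, here a centred square
# of each binary tracking frame. Region geometry is illustrative only.
import numpy as np

def center_preference(frames, center_frac=0.5):
    """frames: sequence of 2D boolean arrays, True where the animal is."""
    h, w = frames[0].shape
    ch, cw = int(h * center_frac), int(w * center_frac)
    r0, c0 = (h - ch) // 2, (w - cw) // 2
    in_center = total = 0
    for f in frames:
        in_center += f[r0:r0 + ch, c0:c0 + cw].sum()
        total += f.sum()
    return in_center / total if total else 0.0
```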

  1. Bright-field scanning confocal electron microscopy using a double aberration-corrected transmission electron microscope.

    Science.gov (United States)

    Wang, Peng; Behan, Gavin; Kirkland, Angus I; Nellist, Peter D; Cosgriff, Eireann C; D'Alfonso, Adrian J; Morgan, Andrew J; Allen, Leslie J; Hashimoto, Ayako; Takeguchi, Masaki; Mitsuishi, Kazutaka; Shimojo, Masayuki

    2011-06-01

    Scanning confocal electron microscopy (SCEM) offers a mechanism for three-dimensional imaging of materials, which makes use of the reduced depth of field in an aberration-corrected transmission electron microscope. The simplest configuration of SCEM is the bright-field mode. In this paper we present experimental data and simulations showing the form of bright-field SCEM images. We show that the depth dependence of the three-dimensional image can be explained in terms of two-dimensional images formed in the detector plane. For a crystalline sample, this so-called probe image is shown to be similar to a conventional diffraction pattern. Experimental results and simulations show how the diffracted probes in this image are elongated in thicker crystals and the use of this elongation to estimate sample thickness is explored. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Quantum corrections to conductivity observed at intermediate magnetic fields in a high mobility GaAs/AlGaAs 2-dimensional electron gas

    International Nuclear Information System (INIS)

    Taboryski, R.; Veje, E.; Lindelof, P.E.

    1990-01-01

    Magnetoresistance with the field perpendicular to the 2-dimensional electron gas in a high mobility GaAs/AlGaAs heterostructure at low temperatures is studied. At the lowest magnetic fields we observe weak localization. At magnetic fields where the product of the mobility and the magnetic field is of the order of unity, the quantum correction to conductivity due to the electron-electron interaction appears as a source of magnetoresistance. A consistent analysis of experiments in this regime is performed for the first time. In addition to the well-known electron-electron term with the expected temperature dependence, we find a new type of temperature-independent quantum correction, which varies logarithmically with mobility. (orig.)

  3. Activities of daily living measured by the Harvard Automated Phone Task track with cognitive decline over time in non-demented elderly

    Science.gov (United States)

    Marshall, Gad A.; Aghjayan, Sarah L.; Dekhtyar, Maria; Locascio, Joseph J.; Jethwani, Kamal; Amariglio, Rebecca E.; Johnson, Keith A.; Sperling, Reisa A.; Rentz, Dorene M.

    2017-01-01

    Background: Impairment in activities of daily living is a major burden to both patients and caregivers. Mild impairment in instrumental activities of daily living is often seen at the stage of mild cognitive impairment. The field of Alzheimer’s disease is moving toward earlier diagnosis and intervention, and more sensitive and ecologically valid assessments of instrumental or complex activities of daily living are needed. The Harvard Automated Phone Task, a novel performance-based activities of daily living instrument, has the potential to fill this gap. Objective: To further validate the Harvard Automated Phone Task by assessing its longitudinal relationship to global cognition and specific cognitive domains in clinically normal elderly and individuals with mild cognitive impairment. Design: In a longitudinal study, the Harvard Automated Phone Task was associated with cognitive measures using mixed effects models. The Harvard Automated Phone Task’s ability to discriminate across diagnostic groups at baseline was also assessed. Setting: Academic clinical research center. Participants: Two hundred and seven participants (45 young normal, 141 clinically normal elderly, and 21 mild cognitive impairment) were recruited from the community and the memory disorders clinics at Brigham and Women’s Hospital and Massachusetts General Hospital. Measurements: Participants performed the three tasks of the Harvard Automated Phone Task, which consist of navigating an interactive voice response system to refill a prescription (APT-Script), select a new primary care physician (APT-PCP), and make a bank account transfer and payment (APT-Bank). The 3 tasks were scored based on time, errors, repetitions, and correct completion of the task. The primary outcome measure used for each of the tasks was total time adjusted for correct completion. Results: The Harvard Automated Phone Task discriminated well between young normal, clinically normal elderly, and mild cognitive impairment

  4. Automated three-dimensional X-ray analysis using a dual-beam FIB

    International Nuclear Information System (INIS)

    Schaffer, Miroslava; Wagner, Julian; Schaffer, Bernhard; Schmied, Mario; Mulders, Hans

    2007-01-01

    We present a fully automated method for three-dimensional (3D) elemental analysis demonstrated using a ceramic sample of chemistry (Ca)MgTiOx. The specimen is serially sectioned by a focused ion beam (FIB) microscope, and energy-dispersive X-ray spectrometry (EDXS) is used for elemental analysis of each cross-section created. A 3D elemental model is reconstructed from the stack of two-dimensional (2D) data. This work concentrates on issues arising from process automation, the large sample volume of approximately 17×17×10 μm³, and the insulating nature of the specimen. A new routine for post-acquisition data correction of different drift effects is demonstrated. Furthermore, it is shown that EDXS data may be erroneous for specimens containing voids, and that back-scattered electron images have to be used to correct for these errors

  5. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Barrett, J C; Knill, C

    2016-01-01

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to
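    The k factors referred to above are output correction factors in the Alfonso small-field formalism. For context, the commonly quoted definition, relating dose to water per detector reading in the clinical field to that in the machine-specific reference field, can be written as follows (standard notation, not text reproduced from the abstract):

```latex
% Output correction factor in the Alfonso small-field formalism (standard
% form, shown for context only).
\[
  k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}
  = \frac{D^{f_\mathrm{clin}}_{w,Q_\mathrm{clin}} \,/\, M^{f_\mathrm{clin}}_{Q_\mathrm{clin}}}
         {D^{f_\mathrm{msr}}_{w,Q_\mathrm{msr}} \,/\, M^{f_\mathrm{msr}}_{Q_\mathrm{msr}}}
\]
```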

  6. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, J C [Wayne State University, Detroit, MI (United States); Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI (United States); Knill, C [Wayne State University, Detroit, MI (United States); Beaumont Hospital, Canton, MI (United States)

    2016-06-15

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to

  7. Electromagnetic fields with vanishing quantum corrections

    Czech Academy of Sciences Publication Activity Database

    Ortaggio, Marcello; Pravda, Vojtěch

    2018-01-01

    Roč. 779, 10 April (2018), s. 393-395 ISSN 0370-2693 R&D Projects: GA ČR GA13-10042S Institutional support: RVO:67985840 Keywords : nonlinear electrodynamics * quantum corrections Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 4.807, year: 2016 https://www.sciencedirect.com/science/article/pii/S0370269318300327?via%3Dihub

  8. Electromagnetic fields with vanishing quantum corrections

    Czech Academy of Sciences Publication Activity Database

    Ortaggio, Marcello; Pravda, Vojtěch

    2018-01-01

    Roč. 779, 10 April (2018), s. 393-395 ISSN 0370-2693 R&D Projects: GA ČR GA13-10042S Institutional support: RVO:67985840 Keywords: nonlinear electrodynamics * quantum corrections Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 4.807, year: 2016 https://www.sciencedirect.com/science/article/pii/S0370269318300327?via%3Dihub

  9. Comparison of Size Modulation Standard Automated Perimetry and Conventional Standard Automated Perimetry with a 10-2 Test Program in Glaucoma Patients.

    Science.gov (United States)

    Hirasawa, Kazunori; Takahashi, Natsumi; Satou, Tsukasa; Kasahara, Masayuki; Matsumura, Kazuhiro; Shoji, Nobuyuki

    2017-08-01

    This prospective observational study compared the performance of size modulation standard automated perimetry with the Octopus 600 10-2 test program, with stimulus size modulation during testing, based on stimulus intensity and conventional standard automated perimetry, with that of the Humphrey 10-2 test program in glaucoma patients. Eighty-seven eyes of 87 glaucoma patients underwent size modulation standard automated perimetry with Dynamic strategy and conventional standard automated perimetry using the SITA standard strategy. The main outcome measures were global indices, point-wise threshold, visual defect size and depth, reliability indices, and test duration; these were compared between size modulation standard automated perimetry and conventional standard automated perimetry. Global indices and point-wise threshold values between size modulation standard automated perimetry and conventional standard automated perimetry were moderately to strongly correlated (p 33.40, p modulation standard automated perimetry than with conventional standard automated perimetry, but the visual-field defect size was smaller (p modulation-standard automated perimetry than on conventional standard automated perimetry. The reliability indices, particularly the false-negative response, of size modulation standard automated perimetry were worse than those of conventional standard automated perimetry (p modulation standard automated perimetry than with conventional standard automated perimetry (p = 0.02). Global indices and the point-wise threshold value of the two testing modalities correlated well. However, the potential of a large stimulus presented at an area with a decreased sensitivity with size modulation standard automated perimetry could underestimate the actual threshold in the 10-2 test protocol, as compared with conventional standard automated perimetry.

  10. Operational experience with open communication in the field of building automation at the IBM center Ehningen; GA-Betriebserfahrung - mit offener Kommunikation im Informatikzentrum Ehningen

    Energy Technology Data Exchange (ETDEWEB)

    Damnig, A [IBM Deutschland Informationssysteme GmbH, Ehningen bei Boeblingen (Germany)

    1995-12-31

    In chapter 21 of the anthology about building control, the operational experience with open communication in the field of building automation is discussed. The following aspects are covered: building automation at IBM in Ehningen, the FACN protocol, what has been achieved, and energy and operation optimisation. (BWI) [Deutsch] Chapter 21 of the anthology on Building Control is devoted to operational experience with building automation systems using open communication. In this context, the following topics are addressed: building automation at IBM in Ehningen, the FACN protocol; operational experience; what has been achieved?; energy and operation optimisation. (BWI)

  11. Automated extraction of radiation dose information from CT dose report images.

    Science.gov (United States)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving texts from CT dose report images. Optical character recognition was adopted to perform text recognitions of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
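    A hedged sketch of the kind of workflow described above, OCR of a dose-report image followed by parsing of the total DLP; pytesseract, Pillow, and the regular expression are assumptions for illustration, not the tools or patterns reported by the authors:

```python
# Hedged sketch: OCR a CT dose-report image and pull out the total DLP.
# The OCR engine (pytesseract) and the search pattern are illustrative
# assumptions, not the authors' implementation.
import re
from PIL import Image
import pytesseract

def total_dlp_from_report(image_path):
    text = pytesseract.image_to_string(Image.open(image_path))
    # Look for a line such as "Total DLP 873.5 mGy-cm" (pattern is a guess).
    m = re.search(r"Total\s+DLP[^0-9]*([0-9]+(?:\.[0-9]+)?)", text, re.I)
    return float(m.group(1)) if m else None
```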

  12. On the covariant formalism of the effective field theory of gravity and leading order corrections

    International Nuclear Information System (INIS)

    Codello, Alessandro; Jain, Rajeev Kumar

    2016-01-01

    We construct the covariant effective field theory of gravity as an expansion in inverse powers of the Planck mass, identifying the leading and next-to-leading quantum corrections. We determine the form of the effective action for the cases of pure gravity with cosmological constant as well as gravity coupled to matter. By means of heat kernel methods we renormalize and compute the leading quantum corrections to quadratic order in a curvature expansion. The final effective action in our covariant formalism is generally non-local and can be readily used to understand the phenomenology on different spacetimes. In particular, we point out that on curved backgrounds the observable leading quantum gravitational effects are less suppressed than on Minkowski spacetime. (paper)

  13. Sloan Digital Sky Survey photometric telescope automation and observing software

    International Nuclear Information System (INIS)

    Eric H. Neilsen, Jr.; email = neilsen@fnal.gov

    2002-01-01

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data
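    A hedged sketch of the flavour of automated standard-field selection described above: among currently observable standard-star fields, prefer the one whose airmass adds most to the coverage still needed for an extinction solution. The data structures and scoring rule are invented for illustration; MOP's actual selection logic is not reproduced here.

```python
# Hedged sketch: greedy choice of the next standard-star field so that the
# night's airmass coverage keeps improving. Field records and the scoring
# rule are illustrative assumptions only.
def pick_standard_field(candidate_fields, observed_airmasses):
    """candidate_fields: iterable of (name, current_airmass) pairs."""
    def coverage_gain(airmass):
        gaps = [abs(airmass - a) for a in observed_airmasses]
        return min(gaps) if gaps else airmass   # reward unsampled airmasses
    scored = [(coverage_gain(am), name) for name, am in candidate_fields]
    return max(scored)[1] if scored else None
```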

  14. "Booster" training: evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest.

    Science.gov (United States)

    Sutton, Robert M; Niles, Dana; Meaney, Peter A; Aplenc, Richard; French, Benjamin; Abella, Benjamin S; Lengetti, Evelyn L; Berg, Robert A; Helfaer, Mark A; Nadkarni, Vinay

    2011-05-01

    To investigate the effectiveness of brief bedside "booster" cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Prospective, randomized trial. General pediatric wards at Children's Hospital of Philadelphia. Sixty-nine Basic Life Support-certified hospital-based providers. CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min(-1) and 38 mm); and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved (instructor-only training: rate 52% to 87% [p .01], and overall CPR compliance, 43% to 78% [p CPR compliance, 35% to 96% [p training combined with automated feedback: rate 48% to 100% [p CPR compliance, 30% to 100% [p CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests.

  15. SU-G-BRB-05: Automation of the Photon Dosimetric Quality Assurance Program of a Linear Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Lebron, S; Lu, B; Yan, G; Li, J; Liu, C [University of Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop an automated method to calculate a linear accelerator (LINAC) photon radiation field size, flatness, symmetry, output and beam quality in a single delivery for flattened (FF) and flattening-filter-free (FFF) beams using an ionization chamber array. Methods: The proposed method consists of three control points that deliver 30×30, 10×10 and 5×5 cm² fields (FF or FFF) in a step-and-shoot sequence where the number of monitor units is weighted for each field size. The IC Profiler (Sun Nuclear Inc.) with 5 mm detector spacing was used for this study. The corrected counts (CCs) were calculated, and the locations of the maximum and minimum values of the first-order gradient determined the data of each subfield. Then, all CCs for each field size are summed in order to obtain the final profiles. For each profile, the radiation field size, symmetry, flatness, output factor and beam quality were calculated. For field size calculation, a parameterized gradient method was used. For method validation, profiles were collected in the detector array both individually and as part of the step-and-shoot plan, with 9.9 cm buildup for FF and FFF beams at 90 cm source-to-surface distance. The same data were collected with the device (plus buildup) placed on a movable platform to achieve a 1 mm resolution. Results: The differences between the dosimetric quantities calculated from both deliveries, individually and step-and-shoot, were within 0.31±0.20% and 0.04±0.02 mm. The differences between the calculated field sizes with 5 mm and 1 mm resolution were ±0.1 mm. Conclusion: The proposed single delivery method proved to be simple and efficient in automating the photon dosimetric monthly and annual quality assurance.
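    A hedged sketch of the per-profile analysis this method automates: field size from the 50% crossings of a measured profile, plus flatness and symmetry over the central 80% of the field. These are common textbook definitions rather than necessarily the exact formulas used by the authors.

```python
# Hedged sketch of beam-profile analysis: field size from the 50% crossings,
# flatness and symmetry over the central 80%. The profile positions are
# assumed monotonically increasing and to extend beyond both penumbrae.
import numpy as np

def analyze_profile(pos_mm, dose):
    pos_mm, dose = np.asarray(pos_mm, float), np.asarray(dose, float)
    half = dose.max() / 2.0
    above = np.where(dose >= half)[0]

    def crossing(i_out, i_in):
        # Linear interpolation of the 50% level between two samples.
        x0, x1, d0, d1 = pos_mm[i_out], pos_mm[i_in], dose[i_out], dose[i_in]
        return x0 + (half - d0) * (x1 - x0) / (d1 - d0)

    left = crossing(above[0] - 1, above[0])
    right = crossing(above[-1] + 1, above[-1])
    field_size = right - left
    center = 0.5 * (left + right)

    core = (pos_mm > center - 0.4 * field_size) & (pos_mm < center + 0.4 * field_size)
    d = dose[core]
    flatness = 100.0 * (d.max() - d.min()) / (d.max() + d.min())
    mirrored = np.interp(2.0 * center - pos_mm[core], pos_mm, dose)
    symmetry = 100.0 * np.max(np.abs(d - mirrored) / d)
    return field_size, flatness, symmetry
```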

  16. Code it rite the first time : automated invoice processing solution designed to ensure validity to field ticket coding

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, G.

    2010-03-15

    An entrepreneur who ran 55 rigs for a major oilfield operator in Calgary has developed a solution for the oil industry that reduces field ticketing errors from 40 per cent to almost none. The Code-Rite not only simplifies field ticketing but can eliminate weeks of trying to balance authorization for expenditure (AFE) numbers. A service provider who wants a field ticket signed for billing purposes following a service call to a well site receives all pertinent information on a barcode that includes AFE number, location, routing, approval authority and mailing address. Attaching the label to the field ticket provides all the invoicing information needed. This article described the job profile, education and life experiences and opportunities that led the innovator to develop this technology that solves an industry-wide problem. Code-Rite is currently being used by 3 large upstream oil and gas operators and plans are underway to automate the entire invoice processing system. 1 fig.

  17. Proceedings of the international conference on advancements in automation, robotics and sensing: souvenir

    International Nuclear Information System (INIS)

    Vinod, B.; Sundaram, M.; Sujatha, K.S.; Brislin, J. Joe; Prabhakarab, S.

    2016-01-01

    Robotics and automation is a thriving domain in the field of engineering, comprising major areas such as electrical, electronics, mechanical, automation, computer and robotics engineering. This conference addresses issues related to technical advances in all these fields. Papers relevant to INIS are indexed separately

  18. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Directory of Open Access Journals (Sweden)

    Jun Yi Wang

    Full Text Available Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large
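    The spatial-overlap figures quoted above are Dice coefficients. For reference, a minimal sketch of the standard definition applied to two binary masks:

```python
# Dice overlap between two binary segmentation masks (standard definition),
# as used above to compare automated and manually corrected segmentations.
import numpy as np

def dice(a, b):
    """a, b: binary label masks of identical shape."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```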

  19. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images; incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting identifying features, and classifying them into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques in building such systems for biodiversity studies.

  20. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    Directory of Open Access Journals (Sweden)

    Johann Christian Rose

    2016-12-01

    Full Text Available In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered using the classification results, yielding the number of grape bunches, the number of berries and the berry diameter.
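    Once the point cloud has been classified into grape and canopy points, simple per-object phenotypic numbers can be derived. The sketch below illustrates one plausible way to do this with density-based clustering; the clustering parameters and the bounding-box size measure are assumptions, not the authors' published pipeline.

```python
# Hedged sketch: derive bunch count and rough bunch extents from points that
# have already been classified as grapes. DBSCAN parameters and the size
# measure are illustrative assumptions only.
import numpy as np
from sklearn.cluster import DBSCAN

def bunch_stats(grape_points_m, eps=0.03, min_samples=30):
    """grape_points_m: (N, 3) array of grape-class points, in metres."""
    pts = np.asarray(grape_points_m, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    sizes = []
    for lbl in set(labels) - {-1}:                     # label -1 marks noise
        cluster = pts[labels == lbl]
        sizes.append(float(np.linalg.norm(cluster.max(0) - cluster.min(0))))
    return len(sizes), sizes                           # bunch count, extents
```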

  1. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  2. Sensors and Automated Analyzers for Radionuclides

    International Nuclear Information System (INIS)

    Grate, Jay W.; Egorov, Oleg B.

    2003-01-01

    The production of nuclear weapons materials has generated large quantities of nuclear waste and significant environmental contamination. We have developed new, rapid, automated methods for determination of radionuclides using sequential injection methodologies to automate extraction chromatographic separations, with on-line flow-through scintillation counting for real time detection. This work has progressed in two main areas: radionuclide sensors for water monitoring and automated radiochemical analyzers for monitoring nuclear waste processing operations. Radionuclide sensors have been developed that collect and concentrate radionuclides in preconcentrating minicolumns with dual functionality: chemical selectivity for radionuclide capture and scintillation for signal output. These sensors can detect pertechnetate to below regulatory levels and have been engineered into a prototype for field testing. A fully automated process monitor has been developed for total technetium in nuclear waste streams. This instrument performs sample acidification, speciation adjustment, separation and detection in fifteen minutes or less

  3. Experiences in Building Python Automation Framework for Verification and Data Collections

    Directory of Open Access Journals (Sweden)

    2010-09-01

    Full Text Available This paper describes our experiences in building a Python automation framework. Specifically, the automation framework is used to support verification and data collection scripts. The scripts control various pieces of test equipment in addition to the device under test (DUT) to characterize a specific performance with a specific configuration or to evaluate the correctness of the behaviour of the DUT. The specific focus of this paper is on documenting our experiences in building an automation framework using Python: on the purposes, goals and the benefits, rather than on a tutorial of how to build such a framework.

  4. Automated 3-D Radiation Mapping

    International Nuclear Information System (INIS)

    Tarpinian, J. E.

    1991-01-01

    This work describes an automated radiation detection and imaging system which combines several state-of-the-art technologies to produce a portable but very powerful visualization tool for planning work in radiation environments. The system combines a radiation detection system, a computerized radiation imaging program, and computerized 3-D modeling to automatically locate and map radiation fields. Measurements are automatically collected, and imaging techniques are used to produce colored 'isodose' images of the measured radiation fields. The isodose lines from the images are then superimposed over the 3-D model of the area. The final display shows the various components in a room and their associated radiation fields. The use of an automated radiation detection system increases the quality of the radiation survey measurements obtained. The additional use of a three-dimensional display allows easier visualization of the area and the associated radiological conditions than two-dimensional sketches

  5. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  6. About development of automation control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Wenger, K. G.; Ivushkin, K. A.; Makarov, V. N.

    2018-05-01

    The shortcomings of current approaches to developing automation control systems, and ways to improve them, are given: correct formation of the objects of study and optimization; joint synthesis of control objects and control systems; and an increase in the structural diversity of control-system elements. Diagrams of control systems with a purposefully variable structure of their elements are presented. Structures of control algorithms for an object with a purposefully variable structure are given.

  7. [Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].

    Science.gov (United States)

    Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T

    2003-10-01

    Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance and time needed for fusing MRI and SPECT images using a semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D-T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the remaining 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.

  8. Self-propelled in-tube shuttle and control system for automated measurements of magnetic field alignment

    International Nuclear Information System (INIS)

    Boroski, W.N.; Nicol, T.H.; Pidcoe, S.V.

    1990-03-01

    A magnetic field alignment gauge is used to measure the field angle as a function of axial position in each of the magnets for the Superconducting Super Collider (SSC). Present measurements are made by manually pushing the gauge through the magnet bore tube and stopping at intervals to record field measurements. Gauge location is controlled through graduation marks and alignment pins on the push rods. Field measurements are recorded on a logging multimeter with tape output. Described is a computerized control system being developed to replace the manual procedure for field alignment measurements. The automated system employs a pneumatic walking device to move the measurement gauge through the bore tube. Movement of the device, called the Self-Propelled In-Tube Shuttle (SPITS), is accomplished through an integral, gas driven, double-acting cylinder. The motion of the SPITS is transferred to the bore tube by means of a pair of controlled, retractable support feet. Control of the SPITS is accomplished through an RS-422 interface from an IBM-compatible computer to a series of solenoid-actuated air valves. Direction of SPITS travel is determined by the air-valve sequence, and is managed through the control software. Precise axial position of the gauge within the magnet is returned to the control system through an optically-encoded digital position transducer attached to the shuttle. Discussed is the performance of the transport device and control system during preliminary testing of the first prototype shuttle. 1 ref., 7 figs

  9. Experiences of Using Automated Assessment in Computer Science Courses

    Directory of Open Access Journals (Sweden)

    John English

    2015-10-01

    Full Text Available In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students type in their own answers). Students were allowed to correct errors based on feedback provided by the system and resubmit their answers. A total of 141 students were surveyed to assess their opinions of this approach, and we analysed their responses. Analysis of the questionnaire showed a low correlation between questions, indicating the statistical independence of the individual questions. As a whole, student feedback on using Checkpoint was very positive, emphasizing the benefits of multiple attempts, impartial marking, and a quick turnaround time for submissions. Many students said that Checkpoint gave them confidence in learning and motivation to practise. Students also said that the detailed feedback that Checkpoint generated when their programs failed helped them understand their mistakes and how to correct them.

  10. Field of dreamers and dreamed-up fields: functional and fake perimetry.

    Science.gov (United States)

    Thompson, J C; Kosmorsky, G S; Ellis, B D

    1996-01-01

    Hysterical and malingering patients can manifest visual field defects on perimetry (visual field testing), including defects suggestive of true visual pathway pathology. It has been shown that control subjects can easily imitate some pathologic defects with automated, computed perimetry. The authors sought to determine whether subjects could imitate the same pathologic defect with manual and automated perimetry. Six subjects posed as patients with neurologic problems. They had manual perimetry with both an experienced and inexperienced technician followed by automated perimetry. They were later interviewed about the methods of the technicians and the difficulty of the exercise. Four of six subjects easily imitated the assigned defects with both technicians on manual perimetry and with automated perimetry. These included quadrantic, altitudinal, hemianopic, and enlarged blind-spot defects. Two subjects who were assigned cecocentral and paracentral scotomas instead produced enlarged blind spots by manual perimetry and defects suggestive of chiasmal pathology by automated perimetry. Paradoxically, some subjects found that experienced technicians were easier to fool than inexperienced technicians because of the systematic way in which experienced technicians defined defects. With minimal coaching, some subjects can imitate visual fields with enlarged blind spots, quadrantic, hemianopic, and altitudinal defects with ease and reproducibility by both automated and manual perimetry. Cecocentral and paracentral scotomas are harder to imitate but may be mistaken as representing chiasmal pathology. Paradoxically, experienced technicians may not be better at detecting hysterical or malingering individuals.

  11. Automated surveillance of healthcare-associated infections : state of the art

    NARCIS (Netherlands)

    Sips, Meander E; Bonten, Marc J M; van Mourik, Maaike S M

    PURPOSE OF REVIEW: This review describes recent advances in the field of automated surveillance of healthcare-associated infections (HAIs), with a focus on data sources and the development of semiautomated or fully automated algorithms. RECENT FINDINGS: The availability of high-quality data in

  12. Automation surprise : results of a field survey of Dutch pilots

    NARCIS (Netherlands)

    de Boer, R.J.; Hurts, Karel

    2017-01-01

    Automation surprise (AS) has often been associated with aviation safety incidents. Although numerous laboratory studies have been conducted, few data are available from routine flight operations. A survey among a representative sample of 200 Dutch airline pilots was used to determine the prevalence

  13. Quantum corrections to inflaton and curvaton dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@nbi.dk [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2012-11-01

    We compute the fully renormalized one-loop effective action for two interacting and self-interacting scalar fields in FRW space-time. We then derive and solve the quantum-corrected equations of motion both for fields that dominate the energy density (such as an inflaton) and fields that do not (such as a subdominant curvaton). In particular, we introduce quantum-corrected Friedmann equations that determine the evolution of the scale factor. We find that in general, gravitational corrections are negligible for the field dynamics. For the curvaton-type fields this leaves only the effect of the flat-space Coleman-Weinberg-type effective potential, and we find that these can be significant. For the inflaton case, both the corrections to the potential and the Friedmann equations can lead to behaviour very different from the classical evolution, even to the point that inflation, although present at tree level, can be absent at one-loop order.

  14. Estimating Regional Mass Balance of Himalayan Glaciers Using Hexagon Imagery: An Automated Approach

    Science.gov (United States)

    Maurer, J. M.; Rupper, S.

    2013-12-01

    Currently there is much uncertainty regarding the present and future state of Himalayan glaciers, which supply meltwater for river systems vital to more than 1.4 billion people living throughout Asia. Previous assessments of regional glacier mass balance in the Himalayas using various remote sensing and field-based methods give inconsistent results, and most assessments are over relatively short (e.g., single decade) timescales. This study aims to quantify multi-decadal changes in volume and extent of Himalayan glaciers through efficient use of the large database of declassified 1970-80s era Hexagon stereo imagery. Automation of the DEM extraction process provides an effective workflow for many images to be processed and glacier elevation changes quantified with minimal user input. The tedious procedure of manual ground control point selection necessary for block-bundle adjustment (as ephemeral data is not available for the declassified images) is automated using the Maximally Stable Extremal Regions algorithm, which matches image elements between raw Hexagon images and georeferenced Landsat 15 meter panchromatic images. Additional automated Hexagon DEM processing, co-registration, and bias correction allow for direct comparison with modern ASTER and SRTM elevation data, thus quantifying glacier elevation and area changes over several decades across largely inaccessible mountainous regions. As consistent methodology is used for all glaciers, results will likely reveal significant spatial and temporal patterns in regional ice mass balance. Ultimately, these findings could have important implications for future water resource management in light of environmental change.
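
    The tie-point step described above (matching image elements between raw Hexagon frames and georeferenced Landsat panchromatic imagery with MSER) can be prototyped with standard computer-vision tooling. The sketch below is only a hedged illustration, not the study's pipeline: it assumes OpenCV is available, uses hypothetical file names, and pairs MSER regions via SIFT descriptors with cross-checked matching to propose candidate ground control points.

```python
# Minimal sketch: MSER regions + SIFT descriptors to propose tie points between a
# raw Hexagon frame and a georeferenced Landsat panchromatic image.
# File names and the "top 50 matches" cut-off are illustrative assumptions.
import cv2

hexagon = cv2.imread("hexagon.tif", cv2.IMREAD_GRAYSCALE)      # hypothetical inputs
landsat = cv2.imread("landsat_pan.tif", cv2.IMREAD_GRAYSCALE)

def mser_keypoints(img):
    """Detect MSER regions and convert their centroids to keypoints."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(img)
    kps = []
    for r in regions:
        x, y = r.mean(axis=0)                                   # region centroid
        size = max(float(r[:, 0].ptp()), float(r[:, 1].ptp()), 3.0)
        kps.append(cv2.KeyPoint(float(x), float(y), size))
    return kps

sift = cv2.SIFT_create()
kp1, des1 = sift.compute(hexagon, mser_keypoints(hexagon))
kp2, des2 = sift.compute(landsat, mser_keypoints(landsat))

# Cross-checked brute-force matching; surviving pairs are candidate ground control points.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
gcps = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]
print(f"{len(gcps)} candidate ground control point pairs")
```

    In practice such candidate pairs would still be filtered (e.g. by a geometric consistency check) before being used in the block-bundle adjustment.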

  15. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    International Nuclear Information System (INIS)

    Soh, R; Lee, J; Harianto, F

    2014-01-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulated the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSXYZnrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc was also used to simulate the AXB-based material compositions for comparison with the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factors obtained by EGSnrc will be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the compositions of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.
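
    As a reader's aid, the quantity computed above can be written out explicitly: the correction factor is the ratio of the dose the medium would receive with no detector present to the dose scored in the TLD volume when the chip is modelled explicitly, so multiplying the TLD-derived dose by k recovers the dose to the medium. The notation below is generic, not taken from the paper.

```latex
k_{\mathrm{TLD}} \;=\; \frac{D_{\mathrm{med}}^{\mathrm{no\ detector}}}{D_{\mathrm{TLD}}^{\mathrm{detector\ present}}},
\qquad
D_{\mathrm{med}} \;=\; k_{\mathrm{TLD}}\, D_{\mathrm{TLD}} .
```

    A value of k ≈ 0.9, as reported here, therefore corresponds to the TLD over-responding by roughly 10% in the 2 × 2 cm² field within the low-density cork.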

  16. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  17. 2013 Chinese Intelligent Automation Conference

    CERN Document Server

    Deng, Zhidong

    2013-01-01

    Proceedings of the 2013 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’13, held in Yangzhou, China. The topics include e.g. adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, and reconfigurable control. Engineers and researchers from academia, industry, and government can gain an inside view of new solutions combining ideas from multiple disciplines in the field of intelligent automation.   Zengqi Sun and Zhidong Deng are professors at the Department of Computer Science, Tsinghua University, China.

  19. Automated Podcasting System for Universities

    Directory of Open Access Journals (Sweden)

    Ypatios Grigoriadis

    2013-03-01

    Full Text Available This paper presents the results achieved at Graz University of Technology (TU Graz in the field of automating the process of recording and publishing university lectures in a very new way. It outlines cornerstones of the development and integration of an automated recording system such as the lecture hall setup, the recording hardware and software architecture as well as the development of a text-based search for the final product by method of indexing video podcasts. Furthermore, the paper takes a look at didactical aspects, evaluations done in this context and future outlook.

  20. Smartnotebook: A semi-automated approach to protein sequential NMR resonance assignments

    International Nuclear Information System (INIS)

    Slupsky, Carolyn M.; Boyko, Robert F.; Booth, Valerie K.; Sykes, Brian D.

    2003-01-01

    Complete and accurate NMR spectral assignment is a prerequisite for high-throughput automated structure determination of biological macromolecules. However, completely automated assignment procedures generally encounter difficulties for all but the most ideal data sets. Sources of these problems include difficulty in resolving correlations in crowded spectral regions, as well as complications arising from dynamics, such as weak or missing peaks, or atoms exhibiting more than one peak due to exchange phenomena. Smartnotebook is a semi-automated assignment software package designed to combine the best features of the automated and manual approaches. The software finds and displays potential connections between residues, while the spectroscopist makes decisions on which connection is correct, allowing rapid and robust assignment. In addition, smartnotebook helps the user fit chains of connected residues to the primary sequence of the protein by comparing the experimentally determined chemical shifts with expected shifts derived from a chemical shift database, while providing bookkeeping throughout the assignment procedure

  1. Automated scheduling and planning from theory to practice

    CERN Document Server

    Ozcan, Ender; Urquhart, Neil

    2013-01-01

      Solving scheduling problems has long presented a challenge for computer scientists and operations researchers. The field continues to expand as researchers and practitioners examine ever more challenging problems and develop automated methods capable of solving them. This book provides 11 case studies in automated scheduling, submitted by leading researchers from across the world. Each case study examines a challenging real-world problem by analysing the problem in detail before investigating how the problem may be solved using state-of-the-art techniques. The areas covered include aircraft scheduling, microprocessor instruction scheduling, sports fixture scheduling, exam scheduling, personnel scheduling and production scheduling. Problem-solving methodologies covered include exact as well as (meta)heuristic approaches, such as local search techniques, linear programming, genetic algorithms and ant colony optimisation. The field of automated scheduling has the potential to impact many aspects of our lives...

  2. Automated tuning of the advanced photon source booster synchrotron

    International Nuclear Information System (INIS)

    Biedron, S.G.; Milton, S.V.

    1997-01-01

    The acceleration cycle of the Advanced Photon Source (APS) booster synchrotron is completed within 223 ms and is repeated at 2 Hz. Unless properly corrected, transverse and longitudinal injection errors can lead to inefficient booster performance. In order to simplify daily operation, automated tuning methods have been developed. Through the use of beam position monitor (BPM) readings, transfer-line corrector magnets, magnet ramp timing, and empirically determined response functions, the injection process is optimized by correcting the first-turn trajectory to the measured closed orbit. These tuning algorithms and their implementation are described here along with an evaluation of their performance.
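
    The correction step sketched in this abstract - mapping the difference between first-turn BPM readings and the measured closed orbit through empirically determined response functions onto corrector settings - is, at its core, a small linear least-squares problem. The snippet below is purely illustrative (random stand-in data, invented dimensions) and is not the APS control-room code.

```python
# Illustrative least-squares trajectory correction, assuming a measured response
# matrix R (BPM shift per unit corrector change). All numbers are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_bpm, n_corr = 40, 8                              # hypothetical counts
R = rng.normal(size=(n_bpm, n_corr))               # stand-in for measured response functions

first_turn = rng.normal(scale=2.0, size=n_bpm)     # first-turn trajectory readings (mm)
closed_orbit = rng.normal(scale=0.2, size=n_bpm)   # measured closed orbit (mm)
error = first_turn - closed_orbit

# Corrector changes that best cancel the injection error in the least-squares sense.
delta_corr, *_ = np.linalg.lstsq(R, -error, rcond=None)
residual = error + R @ delta_corr
print("rms before: %.3f mm, after: %.3f mm" % (error.std(), residual.std()))
```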

  3. Corrective measures technology for shallow land burial at arid sites: field studies of biointrusion barriers and erosion control

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Hakonson, T.E.; Lopez, E.A.

    1986-03-01

    The field research program involving corrective measures technologies for arid shallow land burial (SLB) sites is described. Results of field testing of a biointrusion barrier installed at a close-out waste disposal site (Area B) at Los Alamos are presented. Soil erosion and infiltration of water into a simulated trench cap with various surface treatments were measured, and the interaction between erosion control and subsurface water dynamics is discussed relative to waste management

  4. System of automated map design

    International Nuclear Information System (INIS)

    Ponomarjov, S.Yu.; Rybalko, S.I.; Proskura, N.I.

    1992-01-01

    Preprint 'System of automated map design' contains information about the program shell for construction of territory maps, performing contour (level-line) drawing of an arbitrary two-dimensional field (in particular, the radionuclide concentration field). The work schedule and data structures are supplied, as well as data on system performance. The preprint can be useful for experts in radioecology and for all persons involved in territory pollution mapping or multi-purpose geochemical mapping. (author)

  5. Saturne II: characteristics of the proton beam, field qualities and corrections, acceleration of the polarized protons

    International Nuclear Information System (INIS)

    Laclare, J.-L.

    1978-01-01

    Indicated specifications of Saturne II are summed up: performance of the injection system, quality of the guidance field (magnetic measurements and multipolar corrections), transverse and longitudinal instabilities, characteristics of the beam stored in the machine and of the extracted beam. The problem of depolarization along the acceleration cycle is briefly discussed (1 or 2% between injection and 3 GeV) [fr

  6. Automated calculations for massive fermion production with ai-bar Talc

    International Nuclear Information System (INIS)

    Lorca, A.; Riemann, T.

    2004-01-01

    The package ai-bar Talc has been developed for the automated calculation of radiative corrections to two-fermion production at e⁺e⁻ colliders. The package uses Diana, Qgraf, Form, Fortran, FF, LoopTools, and further unix/linux tools. Numerical results are presented for e⁺e⁻ → e⁺e⁻, μ⁺μ⁻, bs̄, tc̄

  7. Assessing atmospheric bias correction for dynamical consistency using potential vorticity

    International Nuclear Information System (INIS)

    Rocheta, Eytan; Sharma, Ashish; Evans, Jason P

    2014-01-01

    Correcting biases in atmospheric variables prior to impact studies or dynamical downscaling can lead to new biases as dynamical consistency between the ‘corrected’ fields is not maintained. Use of these bias corrected fields for subsequent impact studies and dynamical downscaling provides input conditions that do not appropriately represent intervariable relationships in atmospheric fields. Here we investigate the consequences of the lack of dynamical consistency in bias correction using a measure of model consistency—the potential vorticity (PV). This paper presents an assessment of the biases present in PV using two alternative correction techniques—an approach where bias correction is performed individually on each atmospheric variable, thereby ignoring the physical relationships that exists between the multiple variables that are corrected, and a second approach where bias correction is performed directly on the PV field, thereby keeping the system dynamically coherent throughout the correction process. In this paper we show that bias correcting variables independently results in increased errors above the tropopause in the mean and standard deviation of the PV field, which are improved when using the alternative proposed. Furthermore, patterns of spatial variability are improved over nearly all vertical levels when applying the alternative approach. Results point to a need for a dynamically consistent atmospheric bias correction technique which results in fields that can be used as dynamically consistent lateral boundaries in follow-up downscaling applications. (letter)
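
    As background for why PV is a useful consistency diagnostic here: the (Ertel) potential vorticity on an isentropic surface couples the wind field to the thermal structure, which is exactly why correcting the variables independently can leave a PV signature. A commonly used isentropic-coordinate approximation is shown below as a standard textbook form, not a formula specific to this letter.

```latex
\mathrm{PV} \;=\; -g\,\bigl(f + \zeta_{\theta}\bigr)\,\frac{\partial \theta}{\partial p}
```

    Here g is the gravitational acceleration, f the Coriolis parameter, ζ_θ the relative vorticity evaluated on the isentropic surface, θ the potential temperature and p the pressure. Because PV mixes variables that are bias corrected separately in the univariate approach, residual inconsistencies between the corrected wind and temperature fields appear directly as PV errors.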

  8. Automated Identification of Northern Leaf Blight-Infected Maize Plants from Field Imagery Using Deep Learning.

    Science.gov (United States)

    DeChant, Chad; Wiesner-Hanks, Tyr; Chen, Siyuan; Stewart, Ethan L; Yosinski, Jason; Gore, Michael A; Nelson, Rebecca J; Lipson, Hod

    2017-11-01

    Northern leaf blight (NLB) can cause severe yield loss in maize; however, scouting large areas to accurately diagnose the disease is time consuming and difficult. We demonstrate a system capable of automatically identifying NLB lesions in field-acquired images of maize plants with high reliability. This approach uses a computational pipeline of convolutional neural networks (CNNs) that addresses the challenges of limited data and the myriad irregularities that appear in images of field-grown plants. Several CNNs were trained to classify small regions of images as containing NLB lesions or not; their predictions were combined into separate heat maps, then fed into a final CNN trained to classify the entire image as containing diseased plants or not. The system achieved 96.7% accuracy on test set images not used in training. We suggest that such systems mounted on aerial- or ground-based vehicles can help in automated high-throughput plant phenotyping, precision breeding for disease resistance, and reduced pesticide use through targeted application across a variety of plant and disease categories.
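
    The two-stage structure summarized above (patch-level CNNs whose predictions are assembled into heat maps, followed by a second CNN over the heat maps) can be sketched as follows. This is a hedged, heavily simplified illustration in PyTorch with invented layer sizes and patch dimensions; it mirrors the structure described in the abstract, not the authors' actual networks or training procedure.

```python
# Simplified two-stage sketch of the lesion-detection pipeline described above.
# Layer sizes, the 64x64 patch size and the sliding stride are illustrative assumptions.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Classifies a small image patch as lesion / no lesion."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)   # assumes 64x64 input patches

    def forward(self, patch):
        return torch.sigmoid(self.head(self.features(patch).flatten(1)))

class HeatmapCNN(nn.Module):
    """Classifies the whole image from the lesion-probability heat map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, heatmap):
        return torch.sigmoid(self.head(self.net(heatmap).flatten(1)))

def heatmap_from_patches(image, patch_net, patch=64, stride=64):
    """Slide the patch classifier over the image and assemble a heat map."""
    _, H, W = image.shape
    rows, cols = H // stride, W // stride
    scores = torch.zeros(1, 1, rows, cols)
    for i in range(rows):
        for j in range(cols):
            p = image[:, i*stride:i*stride+patch, j*stride:j*stride+patch]
            scores[0, 0, i, j] = patch_net(p.unsqueeze(0)).squeeze()
    return scores

patch_net, image_net = PatchCNN(), HeatmapCNN()
img = torch.rand(3, 512, 512)                      # stand-in field image
prob_diseased = image_net(heatmap_from_patches(img, patch_net))
print(float(prob_diseased))
```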

  9. Software development based on high speed PC oscilloscope for automated pulsed magnetic field measurement system

    International Nuclear Information System (INIS)

    Sun Yuxiang; Shang Lei; Li Ji; Ge Lei

    2011-01-01

    This paper introduces a method of software development based on a high-speed PC oscilloscope for a pulsed magnetic field measurement system. The design improves on the previous one; a high-speed virtual oscilloscope has been used in the field for the first time. In the design, automatic data acquisition, data processing, data analysis and storage have been realized. Automated point checking reduces the workload, and the use of a precise motion bench increases the positioning accuracy. The software acquires data from the PC oscilloscope by calling DLLs and provides the functions of an oscilloscope, such as trigger, range and sample-rate settings. Spline interpolation and a bandstop filter are used to denoise the signals. The core of the software is a state machine which controls the motion of the stepper motors and the data acquisition, and stores the data automatically. NI Vision Acquisition Software and the Database Connectivity Toolkit make video surveillance of the laboratory and MySQL database connectivity available. The raw and processed signals have been compared in this paper; the waveform is greatly improved by the signal processing. (authors)
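
    As a rough illustration of the state-machine pattern referred to above, the sketch below cycles through move / acquire / store states for a list of measurement points. All class and method names (FakeBench, FakeScope, FakeDB, read_waveform, etc.) are placeholders invented for illustration and do not correspond to the actual oscilloscope DLL interface or the authors' code.

```python
# Illustrative measurement state machine: move probe, acquire waveform, store, repeat.
# The driver classes are hypothetical stand-ins for the real motion-bench, scope and
# database layers described in the abstract.
from enum import Enum, auto

class State(Enum):
    MOVE = auto()
    ACQUIRE = auto()
    STORE = auto()
    DONE = auto()

class FakeBench:                      # stand-in for the motion-bench driver
    def move_to(self, p): print("move to", p)

class FakeScope:                      # stand-in for the PC-oscilloscope DLL wrapper
    def read_waveform(self): return [0.0, 0.1, 0.0]

class FakeDB:                         # stand-in for the MySQL layer
    def insert(self, p, wf): print("store", p, len(wf), "samples")

def run_scan(points, bench, scope, db):
    state, idx, waveform = State.MOVE, 0, None
    while state is not State.DONE:
        if state is State.MOVE:
            bench.move_to(points[idx]); state = State.ACQUIRE
        elif state is State.ACQUIRE:
            waveform = scope.read_waveform(); state = State.STORE
        else:  # State.STORE
            db.insert(points[idx], waveform); idx += 1
            state = State.MOVE if idx < len(points) else State.DONE

run_scan([(0, 0), (0, 5), (0, 10)], FakeBench(), FakeScope(), FakeDB())
```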

  10. Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance

    Science.gov (United States)

    Marquez, Jessica J.; Ramirez, Margarita

    2014-01-01

    A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manual controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in levels of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency suggesting failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher levels of failures, regardless of level of automation
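
    The dependent measures named above come from standard signal detection theory: D Prime (sensitivity) and the Decision Criterion both follow from the hit and false-alarm rates via the inverse normal transform. The snippet below is a generic illustration with made-up rates, not the authors' analysis code.

```python
# Generic signal-detection computation of d' (sensitivity) and c (decision criterion)
# from hit and false-alarm rates; the example rates are fabricated.
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

print(dprime_and_criterion(0.80, 0.25))   # e.g. correct failure diagnoses vs. false alarms
```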

  11. Bright-field scanning confocal electron microscopy using a double aberration-corrected transmission electron microscope

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Peng; Behan, Gavin; Kirkland, Angus I. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Nellist, Peter D., E-mail: peter.nellist@materials.ox.ac.uk [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Cosgriff, Eireann C.; D' Alfonso, Adrian J.; Morgan, Andrew J.; Allen, Leslie J. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Hashimoto, Ayako [Advanced Nano-characterization Center, National Institute for Materials Science (NIMS), 3-13 Sakura, Tsukuba 305-0003 (Japan); Takeguchi, Masaki [Advanced Nano-characterization Center, National Institute for Materials Science (NIMS), 3-13 Sakura, Tsukuba 305-0003 (Japan); High Voltage Electron Microscopy Station, NIMS, 3-13 Sakura, Tsukuba 305-0003 (Japan); Mitsuishi, Kazutaka [Advanced Nano-characterization Center, National Institute for Materials Science (NIMS), 3-13 Sakura, Tsukuba 305-0003 (Japan); Quantum Dot Research Center, NIMS, 3-13 Sakura, Tsukuba 305-0003 (Japan); Shimojo, Masayuki [High Voltage Electron Microscopy Station, NIMS, 3-13 Sakura, Tsukuba 305-0003 (Japan); Advanced Science Research Laboratory, Saitama Institute of Technology, 1690 Fusaiji, Fukaya 369-0293 (Japan)

    2011-06-15

    Scanning confocal electron microscopy (SCEM) offers a mechanism for three-dimensional imaging of materials, which makes use of the reduced depth of field in an aberration-corrected transmission electron microscope. The simplest configuration of SCEM is the bright-field mode. In this paper we present experimental data and simulations showing the form of bright-field SCEM images. We show that the depth dependence of the three-dimensional image can be explained in terms of two-dimensional images formed in the detector plane. For a crystalline sample, this so-called probe image is shown to be similar to a conventional diffraction pattern. Experimental results and simulations show how the diffracted probes in this image are elongated in thicker crystals and the use of this elongation to estimate sample thickness is explored. -- Research Highlights: → The confocal probe image in a scanning confocal electron microscopy image reveals information about the thickness and height of the crystalline layer. → The form of the contrast in a three-dimensional bright-field scanning confocal electron microscopy image can be explained in terms of the confocal probe image. → Despite the complicated form of the contrast in bright-field scanning confocal electron microscopy, we see that depth information is transferred on a 10 nm scale.

  12. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry.

    Science.gov (United States)

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Kairn, T; Crowe, S B; Pedrazzini, G; Aland, T; Kenny, J; Langton, C M; Trapp, J V

    2014-10-01

    Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (ORDet (fclin) ) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to ORDet (fclin) measured using an IBA stereotactic field diode (SFD). kQclin,Qmsr (fclin,fmsr) was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that kQclin,Qmsr (fclin,fmsr) was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring kQclin,Qmsr (fclin,fmsr) values from the SFD to unknown diodes was tested by comparing the experimentally transferred kQclin,Qmsr (fclin,fmsr) values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWeair) produced output factors equivalent to those in water at all field sizes (5-50 mm
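
    For context, the experimental transfer of correction factors from the SFD to the candidate diodes relies on the field output factor of the small-field formalism being detector independent; equating the SFD-based and diode-based determinations then gives the transferred factor directly from the two measured output ratios. The relation below uses generic notation and is added only as a reader's aid, not quoted from the paper.

```latex
\Omega_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}
  = OR^{f_{\mathrm{clin}}}_{\mathrm{SFD}}\, k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}(\mathrm{SFD})
  = OR^{f_{\mathrm{clin}}}_{\mathrm{det}}\, k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}(\mathrm{det})
\;\Longrightarrow\;
k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}(\mathrm{det})
  = \frac{OR^{f_{\mathrm{clin}}}_{\mathrm{SFD}}}{OR^{f_{\mathrm{clin}}}_{\mathrm{det}}}\,
    k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}(\mathrm{SFD}) .
```

    The air-cap thickness is then chosen so that the transferred k equals 1.00 at every field size, i.e. the modified diode measures relative output as a water-equivalent detector would.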

  13. Fast Automated Decoupling at RHIC

    CERN Document Server

    Beebe-Wang, Joanne

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated decoupling application has been developed at RHIC for coupling correction during routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program provides options of automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (Phase Lock Loop), the high frequency Schottky system, and the tune meter. It also supplies tune and skew quadrupole scans, finding the minimum tune separation, display the real time results and interface with the RHIC control system. We summarize the capabilities of the decoupling application...

  14. Classical Electron Model with QED Corrections

    OpenAIRE

    Lenk, Ron

    2010-01-01

    In this article we build a metric for a classical general relativistic electron model with QED corrections. We calculate the stress-energy tensor for the radiative corrections to the Coulomb potential in both the near-field and far-field approximations. We solve the three field equations in both cases by using a perturbative expansion to first order in alpha (the fine-structure constant) while insisting that the usual (+, +, -, -) structure of the stress-energy tensor is maintained. The resul...

  15. Reduction of density-modification bias by β correction

    International Nuclear Information System (INIS)

    Skubák, Pavol; Pannu, Navraj S.

    2011-01-01

    A cross-validation-based method for bias reduction in ‘classical’ iterative density modification of experimental X-ray crystallography maps provides significantly more accurate phase-quality estimates and leads to improved automated model building. Density modification often suffers from an overestimation of phase quality, as seen by escalated figures of merit. A new cross-validation-based method to address this estimation bias by applying a bias-correction parameter ‘β’ to maximum-likelihood phase-combination functions is proposed. In tests on over 100 single-wavelength anomalous diffraction data sets, the method is shown to produce much more reliable figures of merit and improved electron-density maps. Furthermore, significantly better results are obtained in automated model building iterated with phased refinement using the more accurate phase probability parameters from density modification

  16. Calibration and application of an automated seepage meter for monitoring water flow across the sediment-water interface.

    Science.gov (United States)

    Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T

    2015-04-01

    The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependence calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed.
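
    The temperature-dependent calibration described above amounts to relating the meter response to known Darcy velocities at several ambient temperatures and then interpolating the calibration coefficients at the water temperature measured in the field. The sketch below shows one plausible way to organise that correction; the coefficients, data and linear functional form are invented for illustration and are not the paper's calibration equations.

```python
# Illustrative temperature-dependent calibration for a heat-pulse seepage meter.
# Calibration data (raw response vs. known Darcy velocity at three bath temperatures)
# are fabricated; the linear-in-response, interpolated-in-temperature form is an assumption.
import numpy as np

calib = {   # water temp (deg C) -> (raw responses, known Darcy velocities in cm/day)
    7.5:  (np.array([0.2, 0.5, 0.9]), np.array([5.0, 14.0, 27.0])),
    14.0: (np.array([0.2, 0.5, 0.9]), np.array([6.0, 16.0, 30.0])),
    21.2: (np.array([0.2, 0.5, 0.9]), np.array([7.2, 18.5, 34.0])),
}

temps = np.array(sorted(calib))
coeffs = np.array([np.polyfit(calib[t][0], calib[t][1], 1) for t in temps])
slopes, intercepts = coeffs[:, 0], coeffs[:, 1]

def darcy_velocity(raw_response, water_temp_c):
    """Interpolate calibration coefficients at the ambient water temperature."""
    a = np.interp(water_temp_c, temps, slopes)
    b = np.interp(water_temp_c, temps, intercepts)
    return a * raw_response + b

print(darcy_velocity(0.6, 12.3))   # field reading corrected for 12.3 deg C water
```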

  17. Correction magnet power supplies for APS machine

    International Nuclear Information System (INIS)

    Kang, Y.G.

    1991-04-01

    A number of correction magnets are required for the advanced photon source (APS) machine to correct the beam. There are five kinds of correction magnets for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring (PAR). Table I shoes a summary of the correction magnet power supplies for the APS machine. For the storage ring, the displacement of the quadrupole magnets due to the low frequency vibration below 25 Hz has the most significant effect on the stability of the positron closed orbit. The primary external source of the low frequency vibration is the ground motion of approximately 20 μm amplitude, with frequency components concentrated below 10 Hz. These low frequency vibrations can be corrected by using the correction magnets, whose field strengths are controlled individually through the feedback loop comprising the beam position monitoring system. The correction field require could be either positive or negative. Thus for all the correction magnets, bipolar power supplies (BPSs) are required to produce both polarities of correction fields. Three different types of BPS are used for all the correction magnets. Type I BPSs cover all the correction magnets for the storage ring, except for the trim dipoles. The maximum output current of the Type I BPS is 140 Adc. A Type II BPS powers a trim dipole, and its maximum output current is 60 Adc. The injector synchrotron and PAR correction magnets are powered form Type III BPSs, whose maximum output current is 25 Adc

  18. Towards an Automated Acoustic Detection System for Free Ranging Elephants.

    Science.gov (United States)

    Zeppelzauer, Matthias; Hensman, Sean; Stoeger, Angela S

    The human-elephant conflict is one of the most serious conservation problems in Asia and Africa today. The involuntary confrontation of humans and elephants claims the lives of many animals and humans every year. A promising approach to alleviate this conflict is the development of an acoustic early warning system. Such a system requires the robust automated detection of elephant vocalizations under unconstrained field conditions. Today, no system exists that fulfills these requirements. In this paper, we present a method for the automated detection of elephant vocalizations that is robust to the diverse noise sources present in the field. We evaluate the method on a dataset recorded under natural field conditions to simulate a real-world scenario. The proposed method outperformed existing approaches and robustly and accurately detected elephants. It thus can form the basis for a future automated early warning system for elephants. Furthermore, the method may be a useful tool for scientists in bioacoustics for the study of wildlife recordings.
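
    Although the abstract does not spell out the detector internals, the core task - flagging low-frequency rumbles in long, noisy field recordings - can be pictured with a simple spectrogram-energy baseline. The snippet below is only a hedged strawman for orientation: the band limits, threshold and synthetic test signal are all invented, and the published method is substantially more robust than this.

```python
# Naive baseline for flagging candidate elephant rumbles: compare energy in a
# low-frequency band (roughly 10-35 Hz, where rumbles sit) against broadband energy.
# All parameters and the synthetic test signal are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                        # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
signal = 0.05 * np.random.randn(t.size)          # background noise
signal[10 * fs:13 * fs] += 0.5 * np.sin(2 * np.pi * 20 * t[10 * fs:13 * fs])  # fake rumble

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
low_band = (f >= 10) & (f <= 35)
ratio = Sxx[low_band].sum(axis=0) / (Sxx.sum(axis=0) + 1e-12)

candidates = times[ratio > 0.5]                  # frames dominated by low-frequency energy
print("candidate rumble frames (s):", np.round(candidates, 1))
```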

  19. Automation and Robotics for Human Mars Exploration (AROMA)

    Science.gov (United States)

    Hofmann, Peter; von Richter, Andreas

    2003-01-01

    Automation and Robotics (A&R) systems are a key technology for Mars exploration. All over the world initiatives in this field aim at developing new A&R systems and technologies for planetary surface exploration. From December 2000 to February 2002 Kayser-Threde GmbH, Munich, Germany lead a study called AROMA (Automation and Robotics for Human Mars Exploration) under ESA contract in order to define a reference architecture of A&R elements in support of a human Mars exploration program. One of the goals of this effort is to initiate new developments and to maintain the competitiveness of European industry within this field. c2003 Published by Elsevier Science Ltd.

  20. Industrial automation in floating production vessels for deep water oil and gas fields

    International Nuclear Information System (INIS)

    de Garcia, A.L.; Ferrante, A.J.

    1990-01-01

    Process supervision on offshore platforms was performed in the past through the use of local pneumatic instrumentation, based on relays, semi-graphic panels and button-operated control panels. Considering the advanced technology used in the new floating production projects for deep water, it became mandatory to develop supervision systems capable of integrating different control panels, increasing the level of monitoring and reducing the number of operators and control rooms. From the point of view of field integration, a standardized architecture makes possible the communication between different production platforms and the regional headquarters, where all the equipment and support infrastructure for the computerized network is installed. This paper describes the characteristics of the initial systems, the main problems observed, the studies performed and the results obtained in relation to the design and implementation of computational systems with open architecture for automation of process control in floating production systems for deep water in Brazil.

  1. Altering user' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints imbedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed after exposure to a task with automation assistance, user acceptance of high(er) levels of automation ('tipping point') decreased; suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. 'tipping point') is constant or can be manipulated. The results revealed after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased; suggesting it is possible to alter automation acceptance.

  2. Distribution automation at BC Hydro : a case study

    Energy Technology Data Exchange (ETDEWEB)

    Siew, C. [BC Hydro, Vancouver, BC (Canada). Smart Grid Development Program

    2009-07-01

    This presentation discussed a distribution automation study conducted by BC Hydro to determine methods of improving grid performance by supporting intelligent transmission and distribution systems. The utility's smart grid program includes a number of utility-side and customer-side applications, including enabled demand response, microgrid, and operational efficiency applications. The smart grid program will improve reliability and power quality by 40 per cent, improve conservation and energy efficiency throughout the province, and provide enhanced customer service. Programs and initiatives currently underway at the utility include distribution management, smart metering, distribution automation, and substation automation programs. The utility's automation functionality will include fault interruption and locating, restoration capability, and restoration success. A decision support system has also been established to assist control room and field operating personnel with monitoring and control of the electric distribution system. Protection, control and monitoring (PCM) and volt VAR optimization upgrades are also planned. Reclosers are also being automated, and an automation guide has been developed for switches. tabs., figs.

  3. Mixed model phase evolution for correction of magnetic field inhomogeneity effects in 3D quantitative gradient echo-based MRI

    DEFF Research Database (Denmark)

    Fatnassi, Chemseddine; Boucenna, Rachid; Zaidi, Habib

    2017-01-01

    PURPOSE: In 3D gradient echo magnetic resonance imaging (MRI), strong macroscopic field gradients (B0,macro) are visually observed at air/tissue interfaces. At low spatial resolution in particular, these field gradients lead to an apparent increase in intravoxel dephasing and, subsequently, to signal loss or inaccurate R2* estimates. If the strong field gradients are measured, their influence can be removed by postprocessing. METHODS: Conventional corrections usually assume a linear phase evolution with time. For high macroscopic gradient inhomogeneities near the edge of the brain...
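
    For orientation, the "linear phase evolution" assumption mentioned above corresponds to the standard single-voxel signal model in which a constant through-slice background gradient G_z across a voxel of size Δz multiplies the mono-exponential decay by a sinc envelope; corrections then divide the measured signal by this envelope before fitting R2*. This is the textbook form, added only as context for the abstract:

```latex
S(t_E) \;=\; S_0\, e^{-R_2^{*} t_E}\,
\operatorname{sinc}\!\left(\frac{\gamma\, G_z\, \Delta z\, t_E}{2}\right),
\qquad \operatorname{sinc}(x)=\frac{\sin x}{x},
```

    so an uncorrected mono-exponential fit of S(t_E) overestimates R2* wherever G_z is large, which is the signal loss / bias the mixed-model correction targets.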

  4. Advancements, prospects, and impacts of automated driving systems

    Directory of Open Access Journals (Sweden)

    Ching-Yao Chan

    2017-09-01

    Full Text Available Over the last decade, significant progress has been made in automated driving systems (ADS. Given the current momentum and progress, ADS can be expected to continue to advance and a variety of ADS products will become commercially available within a decade. It is envisioned that automated driving technology will lead to a paradigm shift in transportation systems in terms of user experience, mode choices, and business models. In this paper, we start with a review of the state-of-the-art in the field of ADS and their deployment paths. It is followed by a discussion of the future prospects of ADS and their effects on various aspects of the transportation field. We then identify two specific use cases of ADS where the impacts can be significant – personal mobility services and vehicle automation for aging society. A survey of impact assessment studies and the associated methodologies for evaluating ADS is given, which is followed by concluding remarks at the end of the paper.

  5. Dijkstra's interpretation of the approach to solving a problem of program correctness

    Directory of Open Access Journals (Sweden)

    Markoski Branko

    2010-01-01

    Full Text Available Proving program correctness and designing correct programs are two connected theoretical problems of great practical importance. The first is solved within program analysis and the second within program synthesis, although the two processes often intertwine owing to the connection between the analysis and synthesis of programs. Nevertheless, bearing in mind the automated methods of proving correctness and the methods of automatic program synthesis, the difference is easy to tell. This paper presents a denotational interpretation of the programming calculus, explaining the semantics by formulae φ and ψ in such a way that they can be used for defining state sets for a program P.
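
    As a compact reminder of the calculus being interpreted: a Hoare triple {φ} P {ψ} asserts that execution of P from any state satisfying the precondition φ terminates in a state satisfying the postcondition ψ, and Dijkstra's weakest-precondition transformer characterises correctness as φ ⇒ wp(P, ψ). The standard rules for assignment and sequencing (quoted from the classical calculus, not from this paper) read:

```latex
wp(x := e,\ \psi) \;=\; \psi[e/x], \qquad
wp(P_1; P_2,\ \psi) \;=\; wp\bigl(P_1,\ wp(P_2, \psi)\bigr),
```

    so a program P is correct with respect to the specification (φ, ψ) exactly when φ implies wp(P, ψ).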

  6. “Booster” training: Evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest

    Science.gov (United States)

    Sutton, Robert M.; Niles, Dana; Meaney, Peter A.; Aplenc, Richard; French, Benjamin; Abella, Benjamin S.; Lengetti, Evelyn L.; Berg, Robert A.; Helfaer, Mark A.; Nadkarni, Vinay

    2013-01-01

    Objective: To investigate the effectiveness of brief bedside "booster" cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Design: Prospective, randomized trial. Setting: General pediatric wards at Children's Hospital of Philadelphia. Subjects: Sixty-nine Basic Life Support-certified hospital-based providers. Intervention: CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Measurements and Main Results: Before instruction, 57% of the care providers performed compressions within guideline recommendations (rate >90 min⁻¹ and depth >38 mm), and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved in all three arms (instructor-only training: rate compliance 52% to 87%, overall CPR compliance 43% to 78%; automated feedback only: overall CPR compliance 35% to 96%; instructor training combined with automated feedback: rate compliance 48% to 100%, overall CPR compliance 30% to 100%). Conclusions: Before CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests. PMID:20625336

  7. Technology Transfer Opportunities: Automated Ground-Water Monitoring

    Science.gov (United States)

    Smith, Kirk P.; Granato, Gregory E.

    1997-01-01

    Introduction: A new automated ground-water monitoring system developed by the U.S. Geological Survey (USGS) measures and records values of selected water-quality properties and constituents using protocols approved for manual sampling. Prototypes using the automated process have demonstrated the ability to increase the quantity and quality of data collected and have shown the potential for reducing labor and material costs for ground-water quality data collection. Automation of water-quality monitoring systems in the field, in laboratories, and in industry has increased data density and utility while reducing operating costs. Uses for an automated ground-water monitoring system include (but are not limited to) monitoring ground-water quality for research, monitoring known or potential contaminant sites, such as near landfills, underground storage tanks, or other facilities where potential contaminants are stored, and as an early warning system monitoring ground-water quality near public water-supply wells.

  8. Analysis of the Failures and Corrective Actions for the LHC Cryogenics Radiation Tolerant Electronics and its Field Instruments

    CERN Document Server

    Balle, Ch; Vauthier, N

    2014-01-01

    The LHC cryogenic system radiation tolerant electronics and their associated field instruments have been in nominal conditions since before the commissioning of the first LHC beams in September 2008. This system is made of about 15’000 field instruments (thermometers, pressure sensors, liquid helium level gauges, electrical heaters and position switches), 7’500 electronic cards and 853 electronic crates. Since mid-2008 a software tool has been deployed, this allows an operator to report a problem and then lists the corrective actions. The tool is a great help in detecting recurrent problems that may be tackled by a hardware or software consolidation. The corrective actions range from simple resets, exchange of defective equipment, repair of electrical connectors, etc. However a recurrent problem that heals by itself is present on some channels. This type of fault is extremely difficult to diagnose and it appears as a temporary opening of an electrical circuit; its duration can range from a few minutes to ...

  9. Automated calibration of laser spectrometer measurements of δ¹⁸O and δ²H values in water vapour using a Dew Point Generator.

    Science.gov (United States)

    Munksgaard, Niels C; Cheesman, Alexander W; Gray-Spence, Andrew; Cernusak, Lucas A; Bird, Michael I

    2018-06-30

    Continuous measurement of stable O and H isotope compositions in water vapour requires automated calibration for remote field deployments. We developed a new low-cost device for calibration of both water vapour mole fraction and isotope composition. We coupled a commercially available dew point generator (DPG) to a laser spectrometer and developed hardware for water and air handling along with software for automated operation and data processing. We characterised isotopic fractionation in the DPG, conducted a field test and assessed the influence of critical parameters on the performance of the device. An analysis time of 1 hour was sufficient to achieve memory-free analysis of two water vapour standards and the δ¹⁸O and δ²H values were found to be independent of water vapour concentration over a range of ≈20,000-33,000 ppm. The reproducibility of the standard vapours over a 10-day period was better than 0.14 ‰ and 0.75 ‰ for δ¹⁸O and δ²H values, respectively (1σ, n = 11), prior to drift correction and calibration. The analytical accuracy was confirmed by the analysis of a third independent vapour standard. The DPG distillation process requires that isotope calibration takes account of DPG temperature, analysis time, injected water volume and air flow rate. The automated calibration system provides high accuracy and precision and is a robust, cost-effective option for long-term field measurements of water vapour isotopes. The necessary modifications to the DPG are minor and easily reversible. Copyright © 2018 John Wiley & Sons, Ltd.
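
    The calibration workflow described above (two vapour standards of known composition bracketing the samples, drift correction, then scaling to the reference scale) is, in its final step, a two-point linear normalisation. The sketch below shows that step only; the standard values and measured numbers are placeholders, and the DPG-specific corrections for temperature, injection volume and flow rate discussed in the abstract are deliberately omitted.

```python
# Two-point normalisation of measured delta values to the reference scale using
# two vapour standards of known isotopic composition. All numbers are placeholders.
import numpy as np

# accepted (true) and measured delta-18O values of the two working standards (per mil)
true_std = np.array([-10.0, -30.0])
meas_std = np.array([-11.2, -31.9])

slope, intercept = np.polyfit(meas_std, true_std, 1)

def calibrate(measured_delta):
    """Map a drift-corrected measured value onto the reference scale."""
    return slope * measured_delta + intercept

print(calibrate(-20.5))   # sample vapour measurement, per mil
```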

  10. Automated landmark-guided deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  11. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra-fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data from six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. (paper)

  12. Automated reported system using structured data entry: Application to prostate US

    International Nuclear Information System (INIS)

    Kim, Bo Hyun; Paik, Chul Hwa; Lee, Won Yong

    2001-01-01

    To improve efficacy in producing and searching the radiological reports of prostate US in daily practice and clinical research by developing an automated reporting system using a structured data entry system. The report database was established with appropriate fields. A structured data entry form for prostate US was created. The rules for automated transformation of the entered data into a text report were decided. Two programmers coded the programs according to the rules. We successfully developed an automated reporting system for prostate US using structured data entry. Patients' demographic information, the order information, and the contents of the main body and conclusion of the radiological report were included as individual fields in the database. The report contents were input by selecting corresponding fields in a structured data entry form, which was then transformed into a text report. The automated reporting system using structured data entry is an efficient way to establish a radiological report database and could be successfully applied to prostate US. If its utility can be extended to other US examinations, it will become a useful tool for both radiological reporting and database management.

  13. Comparison of Threshold Saccadic Vector Optokinetic Perimetry (SVOP) and Standard Automated Perimetry (SAP) in Glaucoma. Part II: Patterns of Visual Field Loss and Acceptability.

    Science.gov (United States)

    McTrusty, Alice D; Cameron, Lorraine A; Perperidis, Antonios; Brash, Harry M; Tatham, Andrew J; Agarwal, Pankaj K; Murray, Ian C; Fleck, Brian W; Minns, Robert A

    2017-09-01

    We compared patterns of visual field loss detected by standard automated perimetry (SAP) to saccadic vector optokinetic perimetry (SVOP) and examined patient perceptions of each test. A cross-sectional study was done of 58 healthy subjects and 103 with glaucoma who were tested using SAP and two versions of SVOP (v1 and v2). Visual fields from both devices were categorized by masked graders as: 0, normal; 1, paracentral defect; 2, nasal step; 3, arcuate defect; 4, altitudinal; 5, biarcuate; and 6, end-stage field loss. SVOP and SAP classifications were cross-tabulated. Subjects completed a questionnaire on their opinions of each test. We analyzed 142 (v1) and 111 (v2) SVOP and SAP test pairs. SVOP v2 had a sensitivity of 97.7% and specificity of 77.9% for identifying normal versus abnormal visual fields. SAP and SVOP v2 classifications showed complete agreement in 54% of glaucoma patients, with a further 23% disagreeing by one category. On repeat testing, 86% of SVOP v2 classifications agreed with the previous test, compared to 91% of SAP classifications; 71% of subjects preferred SVOP compared to 20% who preferred SAP. Eye-tracking perimetry can be used to obtain threshold visual field sensitivity values in patients with glaucoma and produce maps of visual field defects, with patterns exhibiting close agreement to SAP. Patients preferred eye-tracking perimetry compared to SAP. This first report of threshold eye tracking perimetry shows good agreement with conventional automated perimetry and provides a benchmark for future iterations.
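
    For orientation, the reported 97.7% sensitivity and 77.9% specificity follow directly from a 2×2 cross-tabulation of normal versus abnormal classifications. The short Python sketch below shows the calculation with illustrative counts that merely reproduce those percentages; the actual study counts are not given here.

        def sensitivity_specificity(tp, fn, tn, fp):
            """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Illustrative counts only: 86/88 abnormal fields flagged, 53/68 normal fields passed.
        sens, spec = sensitivity_specificity(tp=86, fn=2, tn=53, fp=15)
        print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")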

  14. Application of the iterative probe correction technique for a high-order probe in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Pivnenko, Sergey; Breinbjerg, Olav

    2006-01-01

    An iterative probe-correction technique for spherical near-field antenna measurements is examined. This technique has previously been shown to be well-suited for non-ideal first-order probes. In this paper, its performance in the case of a high-order probe (a dual-ridged horn) is examined....

  15. Analysis of the thoracic aorta using a semi-automated post processing tool

    International Nuclear Information System (INIS)

    Entezari, Pegah; Kino, Aya; Honarmand, Amir R.; Galizia, Mauricio S.; Yang, Yan; Collins, Jeremy; Yaghmai, Vahid; Carr, James C.

    2013-01-01

    Objective: To evaluate a semi-automated method for Thoracic Aortic Aneurysm (TAA) measurement using ECG-gated Dual Source CT Angiogram (DSCTA). Methods: This retrospective HIPAA-compliant study was approved by our IRB. Transaxial maximum diameters of outer wall to outer wall were studied in fifty patients at seven anatomic locations of the thoracic aorta: annulus, sinus, sinotubular junction (STJ), mid ascending aorta (MAA) at the level of the right pulmonary artery, proximal aortic arch (PROX) immediately proximal to the innominate artery, distal aortic arch (DIST) immediately distal to the left subclavian artery, and descending aorta (DESC) at the level of the diaphragm. Measurements were performed using a manual method and semi-automated software. All readers repeated their measurements. Inter-method, intra-observer and inter-observer agreements were evaluated according to the intraclass correlation coefficient (ICC) and Bland–Altman plots. The number of cases requiring manual contouring or center-line adjustment for the semi-automated method, and the post-processing time for each method, were recorded. Results: The mean difference between the semi-automated and manual methods was less than 1.3 mm at all seven points. Strong inter-method, inter-observer and intra-observer agreement was recorded at all levels (ICC ≥ 0.9). The maximum rate of manual adjustment of center line and contour was at the level of the annulus. The average time for manual post-processing of the aorta was 19 ± 0.3 min, while it took 8.26 ± 2.1 min to do the measurements with the semi-automated tool (Vitrea version 6.0.0.1 software). The center line was edited manually at all levels, with most corrections at the level of the annulus (60%), while the contour was adjusted at all levels with the highest and lowest number of corrections at the levels of the annulus and DESC (75% and 0.07% of the cases), respectively. Conclusion: Compared to the commonly used manual method, semi-automated measurement of vessel dimensions is
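
    The agreement statistics mentioned above can be illustrated with a minimal Bland–Altman calculation: the bias is the mean of the paired differences and the 95% limits of agreement are the bias ± 1.96 standard deviations. The Python sketch below uses invented diameters purely for illustration; it is not the study's data or software.

        import numpy as np

        def bland_altman(a, b):
            """Return the bias (mean difference) and the 95% limits of agreement."""
            diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)

        # Illustrative diameters (mm) at one aortic level: manual vs. semi-automated.
        manual = [31.2, 28.7, 35.1, 30.4, 33.0]
        semi_auto = [31.0, 29.1, 34.6, 30.9, 32.5]
        print(bland_altman(manual, semi_auto))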

  16. EddyOne automated analysis of PWR/WWER steam generator tubes eddy current data

    International Nuclear Information System (INIS)

    Nadinic, B.; Vanjak, Z.

    2004-01-01

    INETEC Institute for Nuclear Technology developed a software package called EddyOne, which has an option for automated analysis of bobbin coil eddy current data. During its development and on-site use, many valuable lessons were learned, which are described in this article. Accordingly, the following topics are covered: General requirements for automated analysis of bobbin coil eddy current data; Main approaches to automated analysis; Multi-rule algorithms for data screening; Landmark detection algorithms as a prerequisite for automated analysis (threshold algorithms and algorithms based on neural network principles); Field experience with the EddyOne software; Development directions (use of artificial intelligence with self-learning abilities for indication detection and sizing); Automated analysis software qualification; Conclusions. Special emphasis is given to results obtained on different types of steam generators, condensers and heat exchangers. Such results are then compared with results obtained by other automated software vendors, giving a clear advantage to the INETEC approach. It has to be pointed out that INETEC field experience was also collected on WWER steam generators, which is so far a unique experience. (author)

  17. Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain

    International Nuclear Information System (INIS)

    Fiedler, E.; Platsch, G.; Schwarz, A.; Schmiedehausen, K.; Kuwert, T.; Tomandl, B.; Huk, W.; Rupprecht, Th.; Rahn, N.

    2003-01-01

    Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance and time needed for fusing MRI and SPECT images using dedicated semiautomated software. Patients, material and method: In 32 patients regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D T1-weighted MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s. It was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use. (orig.)

  18. Automated ultrasonic testing--capabilities, limitations and methods

    International Nuclear Information System (INIS)

    Beller, L.S.; Mikesell, C.R.

    1977-01-01

    The requirements for precision and reproducibility of ultrasonic testing during inservice inspection of nuclear reactors are both quantitatively and qualitatively more severe than most current practice in the field can provide. An automated ultrasonic testing (AUT) system, which provides a significant advancement in field examination capabilities, is described. Properties of the system, its application, and typical results are discussed

  19. Software complex AS (automation of spectrometry). User interface of experiment automation system implementation

    International Nuclear Information System (INIS)

    Astakhova, N.V.; Beskrovnyj, A.I.; Bogdzel', A.A.; Butorin, P.E.; Vasilovskij, S.G.; Gundorin, N.A.; Zlokazov, V.B.; Kutuzov, S.A.; Salamatin, I.M.; Shvetsov, V.N.

    2003-01-01

    An instrumental software complex for automation of spectrometry (AS) has been developed that enables prompt realization of experiment automation systems for spectrometers which use data buffering. New methods of programming and of building automation systems, together with novel network technologies, were employed in the development. It is suggested that programs to schedule and conduct experiments should be based on a parametric model of the spectrometer, an approach that makes it possible to write programs suitable for any FLNP (Frank Laboratory of Neutron Physics) spectrometer and experimental technique and to use different hardware interfaces for introducing the spectrometric data into the data acquisition system. The article describes the possibilities provided to the user for scheduling and controlling the experiment, viewing data, and controlling the spectrometer parameters. The current spectrometer state, programs and experimental data can be presented on the Internet in the form of dynamically generated protocols and graphs, and the experiment can be controlled via the Internet. No application programs are needed on the client side to use these Internet facilities; it suffices to know how to use the two programs to carry out experiments in automated mode. The package is designed for experiments in condensed matter and nuclear physics and is ready for use. (author)

  20. A guide to the automation body of knowledge

    CERN Document Server

    2006-01-01

    Edited by Vernon L. Trevathan, with contributions from more than 35 leading experts from all aspects of automation, this book is a key resource for anyone who is studying for the ISA Certified Automation Professional® (CAP®), ISA Certified Control Systems Technician® (CCST®), and/or Control Systems Engineer (CSE) exams. The book defines the most important automation concepts and processes, while also describing the technical skills required to implement them in today's industrial environment. This edition provides comprehensive information about all major topics in the broad field of automation including: Process and analytical instrumentation ; Continuous and batch control ; Control valves and final control elements ; Basic discrete, sequencing, and manufacturing control ; Advanced control ; Digital and analog communications ; Data management and system software ; Networking and security ; Safety and reliability ; System checkout, testing, startup, and troubleshooting ; Project management. Whether you ar...

  1. New Trends in Agent-Based Complex Automated Negotiations

    CERN Document Server

    Zhang, Minjie; Robu, Valentin; Fatima, Shaheen; Matsuo, Tokuro

    2012-01-01

    Complex Automated Negotiations represent an important, emerging area in the field of Autonomous Agents and Multi-Agent Systems. Automated negotiations can be complex, since there are many factors that characterize such negotiations. These factors include the number of issues, dependencies between these issues, representation of utilities, the negotiation protocol, the number of parties in the negotiation (bilateral or multi-party), time constraints, etc. Software agents can support automation or simulation of such complex negotiations on behalf of their owners, and can provide them with efficient bargaining strategies. To realize such complex automated negotiation, we have to incorporate advanced Artificial Intelligence technologies, including search, CSP, graphical utility models, Bayes nets, auctions, utility graphs, and predicting and learning methods. Applications could include e-commerce tools, decision-making support tools, negotiation support tools, collaboration tools, etc. This book aims to pro...

  2. Photogrammetric approach to automated checking of DTMs

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2005-01-01

    Geometrically accurate digital terrain models (DTMs) are essential for orthoimage production and many other applications. Collecting reference data or visual inspection are reliable but time consuming and therefore expensive methods for finding errors in DTMs. In this paper, a photogrammetric...... approach to automated checking and improving of DTMs is evaluated. Corresponding points in two overlapping orthoimages are found by means of area based matching. Provided the image orientation is correct, discovered displacements correspond to DTM errors. Improvements of the method regarding its...

  3. Models of Automation surprise : results of a field survey in aviation

    NARCIS (Netherlands)

    De Boer, Robert; Dekker, Sidney

    2017-01-01

    Automation surprises in aviation continue to be a significant safety concern and the community's search for effective strategies to mitigate them is ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration

  4. MARC and the Library Service Center: Automation at Bargain Rates.

    Science.gov (United States)

    Pearson, Karl M.

    Despite recent research and development in the field of library automation, libraries have been unable to reap the benefits promised by technology due to the high cost of building and maintaining their own computer-based systems. Time-sharing and disc mass storage devices will bring automation costs, if spread over a number of users, within the…

  5. A Conceptual Design Study for the Error Field Correction Coil Power Supply in JT-60SA

    International Nuclear Information System (INIS)

    Matsukawa, M.; Shimada, K.; Yamauchi, K.; Gaio, E.; Ferro, A.; Novello, L.

    2013-01-01

    This paper describes a conceptual design study of the circuit configuration of the Error Field Correction Coil (EFCC) power supply (PS) to maximize the expected performance at reasonable cost in JT-60SA. The EFCC consists of eighteen sector coils installed inside the vacuum vessel, six in the toroidal direction and three in the poloidal direction, each rated for 30 kA-turn. As a result, a star-point connection is proposed for each group of six EFCC coils installed cyclically in the toroidal direction, to decouple them from the poloidal field coils. In addition, a six-phase inverter capable of controlling each phase current was chosen as the PS topology to ensure higher operational flexibility at reasonable cost.

  6. Improved automated perimetry performance in elderly subjects after listening to Mozart.

    Science.gov (United States)

    Marques, Junia Cabral; Vanessa, Adriana Chaves Oliveira; Fiorelli, Macedo Batista; Kasahara, Niro

    2009-01-01

    To evaluate the automated perimetry performance of elderly subjects naïve to AP after listening to a Mozart sonata. Automated perimetry (AP) is a psychophysical test used to assess visual fields in patients with neurological disorders and glaucoma. In a previous study, Fiorelli et al. showed that young subjects who listened to a Mozart sonata prior to undergoing AP performed better in terms of reliability than those who did not listen to the sonata. Fifty-two AP-naïve, normal subjects underwent automated perimetry (SITA 24-2). The study group (25 subjects) underwent AP after listening to Mozart's Sonata for Two Pianos in D Major, and the control group (27 subjects) underwent AP without prior exposure to the music. The study group had significantly lower false negative rates and a lower visual field reliability score than the controls (P=0.04 and P=0.04, respectively). The test time was shorter for the study group (P=0.03). This study shows that elderly subjects, when exposed to the Mozart sonata immediately before AP testing, have lower false negative rates and lower visual field reliability scores when compared with an age- and gender-matched control group. Our results differ from those of Fiorelli et al., who found lower false positive rates and less fixation loss in addition to lower false negative rates. Listening to a Mozart sonata seems to improve automated perimetry reliability in elderly subjects.

  7. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    Science.gov (United States)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts; however, residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system driven by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and to an fMRI finger-tapping experiment with overt head motion to induce dynamic field non-uniformity. The experiments suggest that including volume-by-volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  8. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela Iorio

    2013-12-01

    White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of fluid-attenuated inversion recovery images. Thirty patients with Mild Cognitive Impairment with variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of the WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t tests and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not statistically differ (t-value = -1.79, DF = 29, p = 0.839 for rater 1; t-value = 1.113, DF = 29, p = 0.2749 for rater 2) and were highly correlated (R² = 0.921, F(1,29) = 155.54, p
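
    The Dice Similarity Coefficient used above is simply twice the overlap of the two binary masks divided by the sum of their sizes. The Python sketch below is a minimal illustration with made-up 1-D label arrays standing in for voxel masks; it is not the study's code.

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Illustrative labels only:
        semi_auto = [0, 1, 1, 1, 0, 1]
        manual = [0, 1, 1, 0, 0, 1]
        print(dice(semi_auto, manual))  # 0.857...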

  9. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  10. Levels of automation and user control - evaluation of a turbine automation interface

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jonas (Chalmers Univ. of Technology (Sweden))

    2008-10-15

    The study was performed during the annual operator training at the Studsvik nuclear power plant simulator facility in Nykoeping, Sweden. The participating operators came from the Oskarshamn 3 nuclear power plant. In the study, seven nuclear power plant turbine operators were interviewed concerning their use of the automatic turbine system. A field study approach together with a heuristic usability evaluation was used to assess how the operators are affected by the use of automation in the control room setting. The purpose of the study was to examine how operator performance is affected by varying levels of automation in nuclear power plant turbine operation. The Automatic Turbine System (ATS) was evaluated to clarify how the ATS interface design supports the operators' work. The results show that during manual control the operators experience loss of speed and accuracy in performing actions, together with difficulty in dividing attention between performing a task and overall monitoring, as the major problems. The positive aspects of manual operations lie in an increased feeling of being in control when performing actions by hand. With higher levels of automation the problems shift to issues concerning difficulty in following the automatic sequences and losing track of procedures. As the level of automation gets higher, the need for feedback increases, which means that information presentation also becomes more important. The semiautomatic step mode is often preferred by the operators since it combines the speed and accuracy of the automation with the ability to maintain the feeling of being in control. Further, a number of usability-related concerns were found in the ATS interface. The operators especially find the presentation of the conditions that manage the automatic sequences difficult to perceive. (author)

  11. Levels of automation and user control - evaluation of a turbine automation interface

    International Nuclear Information System (INIS)

    Andersson, Jonas

    2008-10-01

    The study was performed during the annual operator training at the Studsvik nuclear power plant simulator facility in Nykoeping, Sweden. The participating operators came from the Oskarshamn 3 nuclear power plant. In the study, seven nuclear power plant turbine operators were interviewed concerning their use of the automatic turbine system. A field study approach together with a heuristic usability evaluation was used to assess how the operators are affected by the use of automation in the control room setting. The purpose of the study was to examine how operator performance is affected by varying levels of automation in nuclear power plant turbine operation. The Automatic Turbine System (ATS) was evaluated to clarify how the ATS interface design supports the operators' work. The results show that during manual control the operators experience loss of speed and accuracy in performing actions, together with difficulty in dividing attention between performing a task and overall monitoring, as the major problems. The positive aspects of manual operations lie in an increased feeling of being in control when performing actions by hand. With higher levels of automation the problems shift to issues concerning difficulty in following the automatic sequences and losing track of procedures. As the level of automation gets higher, the need for feedback increases, which means that information presentation also becomes more important. The semiautomatic step mode is often preferred by the operators since it combines the speed and accuracy of the automation with the ability to maintain the feeling of being in control. Further, a number of usability-related concerns were found in the ATS interface. The operators especially find the presentation of the conditions that manage the automatic sequences difficult to perceive. (au)

  12. Automation in control laboratory and related information management system

    International Nuclear Information System (INIS)

    Gopalan, B.; Syamsundar, S.

    1997-01-01

    In the field of technology, the word automation is often employed to indicate many types of mechanized operations, though in the strict sense it means those operations which involve the application of an element of knowledge or decision making without the intervention of the human mind. In laboratory practice, for example, the use of a multi-sample array turret and a millivolt recorder connected to a spectrophotometer represents mechanized operation, as these gadgets help eliminate human muscle power. If a microprocessor or a computer is connected to the above equipment for interpreting the measured parameters and establishing calibration graphs or displaying concentration results, then a truly automated situation results, in which the application of the human mind is eliminated. The state of the art of modern laboratory analysis abounds in the employment of automatic analytical equipment, thanks to developments in the fields of VLSI, computers, software, etc., and this has given rise to the concept of laboratory automation

  13. Evaluation of an automated karyotyping system for chromosome aberration analysis

    International Nuclear Information System (INIS)

    Prichard, H.M.

    1987-01-01

    Chromosome aberration analysis is a promising complement to conventional radiation dosimetry, particularly in the complex radiation fields encountered in the space environment. A recently developed automated karyotyping system was evaluated both to determine its current capabilities and limitations and to suggest areas where future development should be emphasized. Cells exposed to radiomimetic chemicals and to photon and particulate radiation were evaluated by manual inspection and by automated karyotyping. It was demonstrated that the evaluated programs were appropriate for image digitization, storage, and transmission. However, automated and semi-automated scoring techniques must be advanced significantly if in-flight chromosome aberration analysis is to be practical. A degree of artificial intelligence may be necessary to realize this goal

  14. Enhancing Cooperative Loan Scheme Through Automated Loan ...

    African Journals Online (AJOL)

    The concept of automation has been variously applied in most computing fields. ... competent capabilities to eliminate data inconsistency and redundancy as well as ensuring data integrity and security, ...

  15. Study of automated segmentation of the cerebellum and brainstem on brain MR images

    International Nuclear Information System (INIS)

    Hayashi, Norio; Matsuura, Yukihiro; Sanada, Shigeru; Suzuki, Masayuki

    2005-01-01

    MR imaging is an important method for diagnosing abnormalities of the brain. This paper presents an automated method to segment the cerebellum and brainstem on brain MR images. MR images were obtained from 10 normal subjects (male 4, female 6; 22-75 years old, average 31.0 years) and 15 patients with brain atrophy (male 3, female 12; 62-85 years of age, average 76.0 years). The automated method consisted of the following four steps: segmentation of the brain on the original images, detection of an upper plane of the cerebellum using the Hough transform, correction of the plane using three-dimensional (3D) information, and segmentation of the cerebellum and brainstem using the plane. The results indicated that the regions obtained by the automated method were visually similar to those obtained by a manual method. The average rates of coincidence between the automated method and the manual method were 83.0±9.0% in normal subjects and 86.4±3.6% in patients. (author)

  16. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S [University of Nebraska Medical Center, Omaha, NE (United States)

    2016-06-15

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, the streamlining of which is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using the data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the logged spreadsheet data and to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes with the manual approach. The automation avoided the redundant Linac status checks between fields required in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from the data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6 MV photons with a 60-degree wedge. Conclusion: Automated output factor measurements can save time by 40% compared with the conventional manual approach. This work laid the groundwork for further automation of Linac commissioning.
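
    As a rough illustration of the kind of post-processing such a program performs, the Python sketch below splits a continuous 0.5 s electrometer log into beam-on segments, integrates each one, and normalizes to a reference field. The threshold, the log values and the assumption of one segment per field are illustrative; this is not the authors' Matlab code.

        import numpy as np

        def output_factors(readings, beam_on_fraction=0.05, ref_index=0):
            """Integrate each beam-on segment of a continuous log and normalize
            the results to the segment at `ref_index` (the reference field)."""
            r = np.asarray(readings, dtype=float)
            on = r > beam_on_fraction * r.max()           # crude beam-on detection
            edges = np.flatnonzero(np.diff(on.astype(int))) + 1
            integrals = [seg.sum() for seg, flag in zip(np.split(r, edges), np.split(on, edges)) if flag[0]]
            integrals = np.asarray(integrals)
            return integrals / integrals[ref_index]

        # Illustrative log: background, field 1, background, field 2, background.
        log = [0.001] * 5 + [1.00] * 20 + [0.001] * 5 + [0.93] * 20 + [0.001] * 5
        print(output_factors(log))   # -> [1.  , 0.93]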

  17. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    International Nuclear Information System (INIS)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S

    2016-01-01

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, the streamlining of which is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using the data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the logged spreadsheet data and to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes with the manual approach. The automation avoided the redundant Linac status checks between fields required in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from the data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6 MV photons with a 60-degree wedge. Conclusion: Automated output factor measurements can save time by 40% compared with the conventional manual approach. This work laid the groundwork for further automation of Linac commissioning.

  18. Using microwave Doppler radar in automated manufacturing applications

    Science.gov (United States)

    Smith, Gregory C.

    Since the beginning of the Industrial Revolution, manufacturers worldwide have used automation to improve productivity, gain market share, and meet growing or changing consumer demand for manufactured products. To stimulate further industrial productivity, manufacturers need more advanced automation technologies: "smart" part handling systems, automated assembly machines, CNC machine tools, and industrial robots that use new sensor technologies, advanced control systems, and intelligent decision-making algorithms to "see," "hear," "feel," and "think" at the levels needed to handle complex manufacturing tasks without human intervention. The investigator's dissertation offers three methods that could help make "smart" CNC machine tools and industrial robots possible: (1) A method for detecting acoustic emission using a microwave Doppler radar detector, (2) A method for detecting tool wear on a CNC lathe using a Doppler radar detector, and (3) An online non-contact method for detecting industrial robot position errors using a microwave Doppler radar motion detector. The dissertation studies indicate that microwave Doppler radar could be quite useful in automated manufacturing applications. In particular, the methods developed may help solve two difficult problems that hinder further progress in automating manufacturing processes: (1) Automating metal-cutting operations on CNC machine tools by providing a reliable non-contact method for detecting tool wear, and (2) Fully automating robotic manufacturing tasks by providing a reliable low-cost non-contact method for detecting on-line position errors. In addition, the studies offer a general non-contact method for detecting acoustic emission that may be useful in many other manufacturing and non-manufacturing areas, as well (e.g., monitoring and nondestructively testing structures, materials, manufacturing processes, and devices). By advancing the state of the art in manufacturing automation, the studies may help

  19. Process development for automated solar cell and module production. Task 4: automated array assembly. Quarterly report No. 5

    Energy Technology Data Exchange (ETDEWEB)

    Hagerty, J.J.

    1980-01-31

    Construction of an automated solar cell layup and interconnect system is now complete. This system incorporates a Unimate 2000 B industrial robot with an end effector consisting of a vacuum pick-up and an induction heating coil. The robot interfaces with a smart cell preparation station which correctly orients the cell, applies solder paste, and forms and positions the correct lengths of interconnect lead. The system is controlled and monitored by a TRS-80 microcomputer. The first operational tests of the fully integrated station have been run. These tests proved the soundness of the basic design concept but also pointed to areas in which modifications are necessary. These modifications are nearly complete and the improved parts are being integrated. Development of the controlling computer program is progressing both to reflect these changes and to reduce operating time.

  20. American social work, corrections and restorative justice: an appraisal.

    Science.gov (United States)

    Gumz, Edward J

    2004-08-01

    Social work played an active role in American corrections until the 1980s when the ethic of rehabilitation began to give way to a more conservative doctrine of retribution. Changes in the field of social work, characterized by preference of social workers to work only with certain populations, contributed to social work's diminishment in corrections. Although efforts at rehabilitation continue in corrections, the concept of restorative justice that emphasizes assisting victims, communities, and offenders in dealing with the consequences of crime is gaining acceptance in the field of corrections in the United States and in other countries. This study explored social work's presence in corrections, the decline of that presence, and how the concept of restorative justice can invigorate social work within the field of corrections. Several examples of social work's contemporary efforts to use the concept of restorative justice in the United Kingdom are presented.

  1. Automated generation of lattice QCD Feynman rules

    Energy Technology Data Exchange (ETDEWEB)

    Hart, A.; Mueller, E.H. [Edinburgh Univ. (United Kingdom). SUPA School of Physics and Astronomy; von Hippel, G.M. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Horgan, R.R. [Cambridge Univ. (United Kingdom). DAMTP, CMS

    2009-04-15

    The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate also the derivatives of Feynman diagrams. (orig.)

  2. Automated generation of lattice QCD Feynman rules

    International Nuclear Information System (INIS)

    Hart, A.; Mueller, E.H.; Horgan, R.R.

    2009-04-01

    The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate also the derivatives of Feynman diagrams. (orig.)

  3. Cubic Dresselhaus interaction parameter from quantum corrections to the conductivity in the presence of an in-plane magnetic field

    Science.gov (United States)

    Marinescu, D. C.

    2017-09-01

    We evaluate the quantum corrections to the conductivity of a two-dimensional electron system with competing Rashba (R) and linear and cubic Dresselhaus (D) spin-orbit interactions in the presence of an in-plane magnetic field B. Within a perturbative approximation, we investigate the interplay between the spin-orbit coupling and the magnetic field in determining the transport regime in two different limiting scenarios: when only one of the linear terms, either Rashba or Dresselhaus, dominates, and at equal linear couplings, when the cubic Dresselhaus term breaks the spin symmetry. In each instance, we find that for B higher than a critical value, the antilocalization correction is suppressed and the effective dephasing time saturates to a constant value determined only by the spin-orbit interaction. At equal R-D linear couplings, this value is directly proportional to the cubic Dresselhaus contribution. In the same regime, the magnetoconductivity is expressed as a simple logarithmic function that depends only on the cubic Dresselhaus constant.

  4. Multi-layer universal correction magnet

    International Nuclear Information System (INIS)

    Parzen, G.

    1981-08-01

    This paper presents an approach for constructing a universal correction magnet in which the return currents play an active role in determining the field. The return currents are not hidden by the iron shield. The coil is wound in many layers, instead of just one layer. Each layer has a particular symmetry, and generates a particular class of field multipoles such that the location of the return current for each independently excited current block is clear. Three layers may be sufficient in many cases. This approach is applied to the ISABELLE storage accelerator correction system

  5. Automated rapid chemistry in heavy element research

    International Nuclear Information System (INIS)

    Schaedel, M.

    1994-01-01

    With the increasingly short half-lives of the heavy element isotopes in the transition region from the heaviest actinides to the transactinide elements, the demand for automated rapid chemistry techniques is also increasing. Separation times of significantly less than one minute, high chemical yields, high repetition rates, and an adequate detection system are prerequisites for many successful experiments in this field. The development of techniques for separations in the gas phase and in the aqueous phase for chemical and nuclear studies of the heaviest elements is briefly outlined. Typical examples of results obtained with automated techniques are presented for studies up to element 105, especially those obtained with the Automated Rapid Chemistry Apparatus, ARCA. The prospects of investigating the properties of even heavier elements with chemical techniques are discussed

  6. Automated fault extraction and classification using 3-D seismic data for the Ekofisk field development

    Energy Technology Data Exchange (ETDEWEB)

    Signer, C.; Nickel, M.; Randen, T.; Saeter, T.; Soenneland, H.H.

    1998-12-31

    Mapping of fractures is important for the prediction of fluid flow in many reservoir types. The fluid flow depends mainly on the efficiency of the reservoir seals. Improved spatial mapping of the open and closed fracture systems will allow a better prediction of the fluid flow pattern. The primary objective of this paper is to present fracture characterization at the reservoir scale combined with seismic facies mapping. The complexity of the giant Ekofisk field on the Norwegian continental shelf provides an ideal framework for testing the validity and the applicability of an automated seismic fault and fracture detection and mapping tool. The mapping of the faults can be based on seismic attribute grids, which means that attribute responses related to faults are extracted along key horizons which were interpreted in the reservoir interval. 3 refs., 3 figs.

  7. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT, however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density, and field size on dose to the
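
    For clarity, the two factors can be written as ratios. The forms below are a hedged reconstruction consistent with the abstract's wording (DPF as the ratio of the dose with the inhomogeneity present to the dose in homogeneous water, and DCF as the factor that scales an algorithm's dose to the Monte Carlo dose); they are not expressions quoted from the thesis itself:

        % Hedged reconstruction, not quoted from the source
        \mathrm{DPF}(d) = \frac{D_{\mathrm{inhom}}(d)}{D_{\mathrm{water}}(d)},
        \qquad
        \mathrm{DCF}(d) = \frac{D_{\mathrm{Monte\ Carlo}}(d)}{D_{\mathrm{algorithm}}(d)}

    so that, under this reading, multiplying an algorithm's depth-dose value at depth d by the corresponding DCF reproduces the Monte Carlo result.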

  8. Loop quantum corrected Einstein Yang-Mills black holes

    Science.gov (United States)

    Protter, Mason; DeBenedictis, Andrew

    2018-05-01

    In this paper, we study the homogeneous interiors of black holes possessing SU(2) Yang-Mills fields subject to corrections inspired by loop quantum gravity. The systems studied possess both magnetic and induced electric Yang-Mills fields. We consider the system of equations both with and without Wilson loop corrections to the Yang-Mills potential. The structure of the Yang-Mills Hamiltonian, along with the restriction to homogeneity, allows for an anomaly-free effective quantization. In particular, we study the bounce which replaces the classical singularity and the behavior of the Yang-Mills fields in the quantum corrected interior, which possesses topology R × S². Beyond the bounce, the magnitude of the Yang-Mills electric field asymptotically grows monotonically. This results in an ever-expanding R sector even though the two-sphere volume is asymptotically constant. The results are similar with and without Wilson loop corrections on the Yang-Mills potential.

  9. Using historical wafermap data for automated yield analysis

    International Nuclear Information System (INIS)

    Tobin, K.W.; Karnowski, T.P.; Gleason, S.S.; Jensen, D.; Lakhani, F.

    1999-01-01

    To be productive and profitable in a modern semiconductor fabrication environment, large amounts of manufacturing data must be collected, analyzed, and maintained. This includes data collected from in- and off-line wafer inspection systems and from the process equipment itself. This data is increasingly being used to design new processes, control and maintain tools, and to provide the information needed for rapid yield learning and prediction. Because of increasing device complexity, the amount of data being generated is outstripping the yield engineer's ability to effectively monitor and correct unexpected trends and excursions. The 1997 SIA National Technology Roadmap for Semiconductors highlights a need to address these issues through "automated data reduction algorithms to source defects from multiple data sources and to reduce defect sourcing time." SEMATECH and the Oak Ridge National Laboratory have been developing new strategies and technologies for providing the yield engineer with higher levels of assisted data reduction for the purpose of automated yield analysis. In this article, we will discuss the current state of the art and trends in yield management automation. copyright 1999 American Vacuum Society

  10. The practical application of scintillation dosimetry in small-field photon-beam radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Burke, Elisa; Poppinga, Daniela; Schoenfeld, Andreas A.; Poppe, Bjoern; Looe, Hui Khee [Oldenburg Univ. (Germany). Univ. Clinic for Medical Radiation Physics; Harder, Dietrich [Goettingen Univ. (Germany). Medical Physics and Biophysics

    2017-07-01

    Plastic scintillation detectors are a new instrument of stereotactic photon-beam dosimetry. The clinical application of the plastic scintillation detector Exradin W1 at the Siemens Artiste and Elekta Synergy accelerators is a matter of current interest. In order to reduce the measurement uncertainty, precautions have to be taken with regard to the geometrical arrangement of the scintillator, the light-guide fiber and the photodiode in the radiation field. To determine the "Cerenkov light ratio" CLR with a type A uncertainty below 1%, the Cerenkov calibration procedure for small-field measurements based on the two-channel spectral method was used. Output factors were correctly measured with the W1 for field sizes down to 0.5 x 0.5 cm² with a type A uncertainty of 1.8%. Measurements of small-field dose profiles and percentage depth dose curves were carried out with the W1 using automated water phantom profile scans, and a type A uncertainty for dose maxima of 1.4% was achieved. The agreement with a synthetic diamond detector (microDiamond, PTW Freiburg) and a plane-parallel ionization chamber (Roos chamber, PTW Freiburg) in relative dose measurements was excellent. In view of all results, the suitability of the plastic scintillation detector Exradin W1 for clinical dosimetry under stereotactic conditions, and in particular the tried and tested procedures for CLR determination, output factor measurement and automated dose profile scans in water phantoms, has been confirmed.

  11. Generalized second law of thermodynamics for non-canonical scalar field model with corrected-entropy

    International Nuclear Information System (INIS)

    Das, Sudipta; Mamon, Abdulla Al; Debnath, Ujjal

    2015-01-01

    In this work, we have considered a non-canonical scalar field dark energy model in the framework of a flat FRW background. It has also been assumed that the dark matter sector interacts with the non-canonical dark energy sector through some interaction term. Using the solutions for this interacting non-canonical scalar field dark energy model, we have investigated the validity of the generalized second law (GSL) of thermodynamics in various scenarios using the first law and the area law of thermodynamics. For this purpose, we have assumed two types of horizons, viz. the apparent horizon and the event horizon, for the universe and, using the first law of thermodynamics, we have examined the validity of the GSL on both the apparent and event horizons. Next, we have considered two types of entropy corrections on the apparent and event horizons. Using the modified area law, we have examined the validity of the GSL of thermodynamics on the apparent and event horizons under some restrictions on the model parameters. (orig.)

  12. Quantum gravitational corrections to the functional Schroedinger equation

    International Nuclear Information System (INIS)

    Kiefer, C.; Singh, T.P.

    1990-10-01

    We derive corrections to the Schroedinger equation which arise from the quantization of the gravitational field. This is achieved through an expansion of the full functional Wheeler-DeWitt equation with respect to powers of the Planck mass. We demonstrate that the correction terms are independent of the factor ordering which is chosen for the gravitational kinetic term. Although the corrections are numerically extremely tiny, we show how they lead, at least in principle, to shifts in the spectral lines of hydrogen-type atoms. We discuss the significance of these corrections for quantum field theory near the Planck scale. (author). 35 refs

  13. Holographic bulk reconstruction with α' corrections

    Science.gov (United States)

    Roy, Shubho R.; Sarkar, Debajyoti

    2017-10-01

    We outline a holographic recipe to reconstruct α' corrections to anti-de Sitter (AdS) (quantum) gravity from an underlying CFT in the strictly planar limit (N → ∞). Assuming that the boundary CFT can be solved in principle to all orders of the 't Hooft coupling λ, for scalar primary operators, the λ⁻¹ expansion of the conformal dimensions can be mapped to higher curvature corrections of the dual bulk scalar field action. Furthermore, for the metric perturbations in the bulk, the AdS/CFT operator-field isomorphism forces these corrections to be of the Lovelock type. We demonstrate this by reconstructing the coefficient of the leading Lovelock correction, also known as the Gauss-Bonnet term, in a bulk AdS gravity action using the expression of the stress-tensor two-point function up to subleading order in λ⁻¹.

  14. Modern approaches to agent-based complex automated negotiation

    CERN Document Server

    Bai, Quan; Ito, Takayuki; Zhang, Minjie; Ren, Fenghui; Aydoğan, Reyhan; Hadfi, Rafik

    2017-01-01

    This book addresses several important aspects of complex automated negotiations and introduces a number of modern approaches for facilitating agents to conduct complex negotiations. It demonstrates that autonomous negotiation is one of the most important areas in the field of autonomous agents and multi-agent systems. Further, it presents complex automated negotiation scenarios that involve negotiation encounters that may have, for instance, a large number of agents, a large number of issues with strong interdependencies and/or real-time constraints.

  15. Automated cloud tracking system for the Akatsuki Venus Climate Orbiter data

    Science.gov (United States)

    Ogohara, Kazunori; Kouyama, Toru; Yamamoto, Hiroki; Sato, Naoki; Takagi, Masahiro; Imamura, Takeshi

    2012-02-01

    The Japanese Venus Climate Orbiter, Akatsuki, is cruising towards Venus again after its first Venus orbit insertion (VOI) failed. At present, we focus on the next VOI opportunity and the scientific observations that will follow. In the present study we have constructed an automated cloud tracking system for processing the data obtained by Akatsuki. In this system, correction of the pointing of the satellite is essential for improving the accuracy of the cloud motion vectors derived by cloud tracking. Attitude errors of the satellite are reduced by fitting an ellipse to the limb of the imaged Venus disk. Next, longitude-latitude distributions of brightness (cloud patterns) are calculated to make it easier to derive the cloud motion vectors. The grid points are distributed at regular intervals in the longitude-latitude coordinate system. After applying the solar zenith correction and a high-pass filter to the derived longitude-latitude distributions of brightness, the cloud features are tracked using pairs of images. As a result, we obtain cloud motion vectors on equally spaced longitude-latitude grid points. These entire processes are pipelined and automated, and are applied to all data obtained by the combinations of cameras and filters onboard Akatsuki. Several tests show that the cloud motion vectors are determined with sufficient accuracy. We expect that the longitude-latitude data sets created by the automated cloud tracking system will contribute to Venus meteorology.
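
    The tracking step can be illustrated with a minimal block-matching sketch. The function name, template size and search range below are hypothetical and not taken from the Akatsuki pipeline; the sketch only shows how a displacement (and hence a cloud motion vector) might be obtained by maximizing the normalized cross-correlation between a template in the first brightness map and candidate windows in the second.

        import numpy as np

        def track_template(img1, img2, row, col, tpl=16, search=8):
            """Estimate the displacement of a cloud feature between two
            longitude-latitude brightness maps by maximizing the normalized
            cross-correlation of a small template (illustrative parameters)."""
            t = img1[row:row + tpl, col:col + tpl].astype(float)
            t = (t - t.mean()) / (t.std() + 1e-12)
            best, best_dy, best_dx = -np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    r, c = row + dy, col + dx
                    if r < 0 or c < 0 or r + tpl > img2.shape[0] or c + tpl > img2.shape[1]:
                        continue
                    w = img2[r:r + tpl, c:c + tpl].astype(float)
                    w = (w - w.mean()) / (w.std() + 1e-12)
                    score = float(np.mean(t * w))
                    if score > best:
                        best, best_dy, best_dx = score, dy, dx
            # Convert the grid displacement to a wind vector using the grid spacing and time lag.
            return best_dy, best_dx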

  16. A GENERALIZED NON-LINEAR METHOD FOR DISTORTION CORRECTION AND TOP-DOWN VIEW CONVERSION OF FISH EYE IMAGES

    Directory of Open Access Journals (Sweden)

    Vivek Singh Bawa

    2017-06-01

    Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the immediate surroundings of the vehicle is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating the top-down view of images captured by fisheye-lens cameras mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fisheye lens images to rectilinear images, and generating the top-view perspective of the corrected images. The images captured by the fisheye lens exhibit barrel distortion, for which a non-linear and non-iterative method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The paper also targets a distortion-free, wide field of view of the vehicle surroundings and a camera-perspective-independent top-down view, with minimum computational cost, which is essential given the limited computing power available on vehicles.
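
    As an illustration of the two-step pipeline (undistort, then warp), a minimal OpenCV-based sketch follows. The paper's own non-linear, non-iterative distortion model is not reproduced here; OpenCV's standard polynomial undistortion is used as a stand-in, and the camera matrix, distortion coefficients and the four ground-plane correspondences are placeholder values that would come from a prior calibration.

        import cv2
        import numpy as np

        # Placeholder intrinsics and distortion coefficients from a prior calibration.
        K = np.array([[400.0, 0.0, 640.0],
                      [0.0, 400.0, 360.0],
                      [0.0, 0.0, 1.0]])
        dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])  # barrel distortion dominates

        def to_top_down(fisheye_img, src_pts, dst_pts, out_size=(800, 800)):
            """Step 1: remove lens distortion; step 2: warp to a top-down view
            using a homography between four ground-plane correspondences."""
            rectilinear = cv2.undistort(fisheye_img, K, dist)
            H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
            return cv2.warpPerspective(rectilinear, H, out_size)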

  17. Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.

    Science.gov (United States)

    Zardecki, A; Tam, W G

    1982-07-01

    The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of the variation of the detector field of view, the behavior of the gain factor is studied. Numerical results modeling the laser beam propagation in fog, cloud, and rain are presented.

  18. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Science.gov (United States)

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the training set size, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as
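
    For reference, the Dice coefficient used above to quantify spatial overlap between an automated and a manual segmentation can be computed from two binary label volumes as in the following sketch (array names are illustrative):

        import numpy as np

        def dice_coefficient(seg_a, seg_b):
            """Dice overlap 2|A ∩ B| / (|A| + |B|) between two binary masks."""
            a = np.asarray(seg_a, dtype=bool)
            b = np.asarray(seg_b, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0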

  19. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera-based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  20. Computer-automated tuning of semiconductor double quantum dots into the single-electron regime

    NARCIS (Netherlands)

    Baart, T.A.; Eendebak, P.T.; Reichl, C.; Wegscheider, W.; Vandersypen, L.M.K.

    2016-01-01

    We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the

  1. An automated approach to the design of decision tree classifiers

    Science.gov (United States)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
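
    The global accuracy estimate mentioned above can be illustrated with a small sketch. Assume, purely for illustration, that for each class the tree design yields the probability of taking the correct branch at every node along that class's root-to-leaf path; under the independence assumption the class-conditional accuracy is the product of those probabilities, and the global probability of correct classification is their prior-weighted sum.

        def global_correct_probability(priors, path_probs):
            """priors: prior probability of each class.
            path_probs: for each class, the per-node probabilities of taking the
            correct branch along its root-to-leaf path (assumed independent)."""
            total = 0.0
            for prior, probs in zip(priors, path_probs):
                p_class = 1.0
                for p in probs:
                    p_class *= p
                total += prior * p_class
            return total

        # Example: two classes, equal priors, two decision nodes on each path.
        print(global_correct_probability([0.5, 0.5], [[0.95, 0.90], [0.92, 0.88]]))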

  2. Improved automated perimetry performance in elderly subjects after listening to Mozart

    Directory of Open Access Journals (Sweden)

    Junia Cabral Marques

    2009-01-01

    PURPOSE: To evaluate the performance of automated perimetry in elderly subjects naïve to AP after listening to a Mozart sonata. INTRODUCTION: Automated perimetry (AP) is a psychophysical test used to assess visual fields in patients with neurological disorders and glaucoma. In a previous study, Fiorelli et al. showed that young subjects who listened to a Mozart sonata prior to undergoing AP performed better in terms of reliability than those who did not listen to the sonata. METHODS: Fifty-two AP-naïve, normal subjects underwent automated perimetry (SITA 24-2). The study group (25 subjects) underwent AP after listening to Mozart's Sonata for Two Pianos in D Major, and the control group (27 subjects) underwent automated perimetry without prior exposure to the music. RESULTS: The study group had significantly lower false negative rates and a lower visual field reliability score than the controls (P=0.04 and P=0.04, respectively). The test time was shorter for the study group (P=0.03). DISCUSSION: This study shows that elderly subjects, when exposed to the Mozart sonata immediately before AP testing, have lower false negative rates and lower visual field reliability scores when compared with an age- and gender-matched control group. Our results differ from those of Fiorelli et al., who found lower false positive rates and less fixation loss in addition to lower false negative rates. CONCLUSION: Listening to a Mozart sonata seems to improve automated perimetry reliability in elderly subjects.

  3. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large N_c QCD in light-cone quantization. In a center of mass frame unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest unitarity corrections are related to parton saturation effects and involve potential strengths A_μ ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_μ ∝ 1/g are shown to be pure gauge terms allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those energies where unitarity constraints first become important. (orig.)

  4. Automated approach to detecting behavioral states using EEG-DABS

    Directory of Open Access Journals (Sweden)

    Zachary B. Loris

    2017-07-01

    Electrocorticographic (ECoG) signals represent cortical electrical dipoles generated by synchronous local field potentials that result from the simultaneous firing of neurons at distinct frequencies (brain waves). Since different brain waves correlate with different behavioral states, ECoG signals present a novel strategy for detecting complex behaviors. We developed a program, EEG Detection Analysis for Behavioral States (EEG-DABS), that advances Fast Fourier Transforms through the ECoG time series, separating it into (user-defined) frequency bands and normalizing them to reduce variability. EEG-DABS determines events if segments of an experimental ECoG record have significantly different band powers than a selected control pattern of EEG. Events are identified at every epoch and frequency band and are then displayed as output graphs by the program. Certain patterns of events correspond to specific behaviors. Once a predetermined pattern was selected for a behavioral state, EEG-DABS correctly identified the desired behavioral event. The selection of frequency band combinations for detection of the behavior affects the accuracy of the method. All instances of certain behaviors, such as freezing, were correctly identified from the event patterns generated with EEG-DABS. Detecting behaviors is typically achieved by visually discerning unique animal phenotypes, a process that is time consuming, unreliable, and subjective. EEG-DABS removes this variability by using defined EEG/ECoG parameters for a desired behavior over chronic recordings. EEG-DABS presents a simple and automated approach to quantifying different behavioral states from ECoG signals.
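
    A minimal sketch of the core operation described above, a windowed FFT that converts an ECoG trace into per-epoch band powers compared against a control segment, is given below. The band limits, epoch length and threshold are illustrative and are not the parameters used by EEG-DABS.

        import numpy as np

        def band_powers(signal, fs, epoch_s=1.0, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
            """Return an (n_epochs x n_bands) array of spectral power per epoch."""
            n = int(epoch_s * fs)
            epochs = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            out = []
            for ep in epochs:
                psd = np.abs(np.fft.rfft(ep)) ** 2
                out.append([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
            return np.array(out)

        def detect_events(experimental, control, fs, z_thresh=3.0):
            """Flag epochs whose band power deviates strongly from the control pattern."""
            ctrl = band_powers(control, fs)
            exp = band_powers(experimental, fs)
            z = (exp - ctrl.mean(axis=0)) / (ctrl.std(axis=0) + 1e-12)
            return np.abs(z) > z_thresh  # boolean event matrix: epochs x bands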

  5. Automated Comparative Auditing of NCIT Genomic Roles Using NCBI

    Science.gov (United States)

    Cohen, Barry; Oren, Marc; Min, Hua; Perl, Yehoshua; Halper, Michael

    2008-01-01

    Biomedical research has identified many human genes and much knowledge about them. The National Cancer Institute Thesaurus (NCIT) represents such knowledge as concepts and roles (relationships). Due to the rapid advances in this field, it is to be expected that the NCIT’s Gene hierarchy will contain role errors. A comparative methodology to audit the Gene hierarchy with the use of the National Center for Biotechnology Information’s (NCBI’s) Entrez Gene database is presented. The two knowledge sources are accessed via a pair of Web crawlers to ensure up-to-date data. Our algorithms then compare the knowledge gathered from each, identify discrepancies that represent probable errors, and suggest corrective actions. The primary focus is on two kinds of gene-roles: (1) the chromosomal locations of genes, and (2) the biological processes in which genes play a role. Regarding chromosomal locations, the discrepancies revealed are striking and systematic, suggesting a structurally common origin. In regard to the biological processes, difficulties arise because genes frequently play roles in multiple processes, and processes may have many designations (such as synonymous terms). Our algorithms make use of the roles defined in the NCIT Biological Process hierarchy to uncover many probable gene-role errors in the NCIT. These results show that automated comparative auditing is a promising technique that can identify a large number of probable errors and corrections for them in a terminological genomic knowledge repository, thus facilitating its overall maintenance. PMID:18486558
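
    In its simplest form, a comparative audit of chromosomal locations reduces to comparing the location recorded for each gene in the two sources; the dictionaries below are hypothetical stand-ins for data gathered by the two Web crawlers, not actual NCIT or Entrez Gene content.

        # Hypothetical snapshots of gene -> chromosomal location from the two sources.
        ncit_locations = {"TP53": "17p13.1", "BRCA2": "13q13.1", "EGFR": "7p11.2"}
        ncbi_locations = {"TP53": "17p13.1", "BRCA2": "13q13.1", "EGFR": "7p12"}

        def audit_locations(ncit, ncbi):
            """Report genes whose chromosomal location differs between the sources;
            each discrepancy is a probable error to be reviewed and corrected."""
            return {gene: (loc, ncbi[gene])
                    for gene, loc in ncit.items()
                    if gene in ncbi and loc != ncbi[gene]}

        print(audit_locations(ncit_locations, ncbi_locations))  # {'EGFR': ('7p11.2', '7p12')}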

  6. Next frontier in agent-based complex automated negotiation

    CERN Document Server

    Ito, Takayuki; Zhang, Minjie; Robu, Valentin

    2015-01-01

    This book focuses on automated negotiations based on multi-agent systems. It is intended for researchers and students in various fields involving autonomous agents and multi-agent systems, such as e-commerce tools, decision-making and negotiation support systems, and collaboration tools. The contents will help them to understand the concept of automated negotiations, negotiation protocols, negotiating agents’ strategies, and the applications of those strategies. In this book, some negotiation protocols focusing on the multiple interdependent issues in negotiations are presented, making it possible to find high-quality solutions for the complex agents’ utility functions. This book is a compilation of the extended versions of the very best papers selected from the many that were presented at the International Workshop on Agent-Based Complex Automated Negotiations.

  7. Interacting viscous entropy-corrected holographic scalar field models of dark energy with time-varying G in modified FRW cosmology

    International Nuclear Information System (INIS)

    Adabi, Farzin; Karami, Kayoomars; Felegary, Fereshte; Azarmi, Zohre

    2012-01-01

    We study the entropy-corrected version of the holographic dark energy (HDE) model in the framework of modified Friedmann-Robertson-Walker cosmology. We consider a non-flat universe filled with an interacting viscous entropy-corrected HDE (ECHDE) with dark matter. Also included in our model is the case of the variable gravitational constant G. We obtain the equation of state and the deceleration parameters of the interacting viscous ECHDE. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting viscous ECHDE model with time-varying G. (research papers)

  8. Future of Automated Insulin Delivery Systems.

    Science.gov (United States)

    Castle, Jessica R; DeVries, J Hans; Kovatchev, Boris

    2017-06-01

    Advances in continuous glucose monitoring (CGM) have brought on a paradigm shift in the management of type 1 diabetes. These advances have enabled the automation of insulin delivery, where an algorithm determines the insulin delivery rate in response to the CGM values. There are multiple automated insulin delivery (AID) systems in development. A system that automates basal insulin delivery has already received Food and Drug Administration approval, and more systems are likely to follow. As the field of AID matures, future systems may incorporate additional hormones and/or multiple inputs, such as activity level. All AID systems are impacted by CGM accuracy and future CGM devices must be shown to be sufficiently accurate to be safely incorporated into AID. In this article, we summarize recent achievements in AID development, with a special emphasis on CGM sensor performance, and discuss the future of AID systems from the point of view of their input-output characteristics, form factor, and adaptability.

  9. Recent advances in agent-based complex automated negotiation

    CERN Document Server

    Ito, Takayuki; Zhang, Minjie; Fujita, Katsuhide; Robu, Valentin

    2016-01-01

    This book covers recent advances in Complex Automated Negotiations as a widely studied emerging area in the field of Autonomous Agents and Multi-Agent Systems. The book includes selected revised and extended papers from the 7th International Workshop on Agent-Based Complex Automated Negotiation (ACAN2014), which was held in Paris, France, in May 2014. The book also includes brief introductions about Agent-based Complex Automated Negotiation which are based on tutorials provided in the workshop, and brief summaries and descriptions about the ANAC'14 (Automated Negotiating Agents Competition) competition, where authors of selected finalist agents explain the strategies and the ideas used by them. The book is targeted to academic and industrial researchers in various communities of autonomous agents and multi-agent systems, such as agreement technology, mechanism design, electronic commerce, related areas, as well as graduate, undergraduate, and PhD students working in those areas or having interest in them.

  10. Evaluation of Machine Learning Methods for LHC Optics Measurements and Corrections Software

    CERN Document Server

    AUTHOR|(CDS)2206853; Henning, Peter

    The field of artificial intelligence is driven by the goal of providing machines with human-like intelligence. Modern science, however, currently faces problems of such complexity that they cannot be solved by humans on the same timescale as by machines, so there is a demand for the automation of complex tasks. Identifying the category of tasks that can be performed by machines in the domain of optics measurements and corrections at the Large Hadron Collider (LHC) is one of the central research subjects of this thesis. Applications of machine learning methods and concepts of artificial intelligence can be found in various industrial and scientific branches. In High Energy Physics these concepts are mostly used in the offline analysis of experimental data and to perform regression tasks. In Accelerator Physics the machine learning approach has not yet found wide application, so potential tasks for machine learning solutions can be specified in this domain. The appropriate methods and their suitability for...

  11. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book covers how to build an automation plan and design automation facilities; automation of chip-cutting processes, including the basics of cutting, NC processing machines and chip handling; automation units such as drilling, tapping, boring, milling and slide units; hydraulics (oil pressure), including its characteristics and basic hydraulic circuits; pneumatics; and the kinds of automation and their application to processing, assembly, transportation, automatic machines and factory automation.

  12. Automating Groundwater Sampling At Hanford, The Next Step

    International Nuclear Information System (INIS)

    Connell, C.W.; Conley, S.F.; Hildebrand, R.D.; Cunningham, D.E.

    2010-01-01

    Historically, the groundwater monitoring activities at the Department of Energy's Hanford Site in southeastern Washington State have been very 'people intensive.' Approximately 1500 wells are sampled each year by field personnel or 'samplers.' These individuals have been issued pre-printed forms showing information about the well(s) for a particular sampling evolution. This information is taken from two official electronic databases: the Hanford Well Information System (HWIS) and the Hanford Environmental Information System (HEIS). The samplers used these hardcopy forms to document the groundwater samples and well water-levels. After recording the entries in the field, the samplers turned the forms in at the end of the day, and other personnel posted the collected information onto a spreadsheet that was then printed and included in a log book. The log book was then used to make manual entries of the new information into the software application(s) for the HEIS and HWIS databases. A pilot project for automating this extremely tedious process was launched in 2008. Initially, the automation was focused on water-level measurements. Now, the effort is being extended to automate the meta-data associated with collecting groundwater samples. The project allowed electronic forms produced in the field by samplers to be used in a work flow process where the data is transferred to the database and the electronic form is filed in managed records, thus eliminating manually completed forms. Eliminating the manual forms and streamlining the data entry not only improved the accuracy of the information recorded, but also enhanced the efficiency and sampling capacity of field office personnel.

  13. Corrective Action Decision Document/Closure Report for Corrective Action Unit 567: Miscellaneous Soil Sites - Nevada National Security Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)

    2014-12-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 567: Miscellaneous Soil Sites, Nevada National Security Site, Nevada. The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 567 based on the implementation of the corrective actions. The corrective actions implemented at CAU 567 were developed based on an evaluation of analytical data from the CAI, the assumed presence of COCs at specific locations, and the detailed and comparative analysis of the CAAs. The CAAs were selected on technical merit focusing on performance, reliability, feasibility, safety, and cost. The implemented corrective actions meet all requirements for the technical components evaluated. The CAAs meet all applicable federal and state regulations for closure of the site. Based on the implementation of these corrective actions, the DOE, National Nuclear Security Administration Nevada Field Office provides the following recommendations: • No further corrective actions are necessary for CAU 567. • The Nevada Division of Environmental Protection issue a Notice of Completion to the DOE, National Nuclear Security Administration Nevada Field Office for closure of CAU 567. • CAU 567 be moved from Appendix III to Appendix IV of the FFACO.

  14. Automation bias: a systematic review of frequency, effect mediators, and mitigators.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2012-01-01

    Automation bias (AB)--the tendency to over-rely on automation--has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject-specific and free-text terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task-specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

  15. A Toolchain to Produce Correct-by-Construction OCaml Programs

    OpenAIRE

    Filliâtre , Jean-Christophe; Gondelman , Léon; Paskevich , Andrei; Pereira , Mário; Melo De Sousa , Simão

    2018-01-01

    This paper presents a methodology to get correct-by-construction OCaml programs using the Why3 tool. First, a formal behavioral specification is given in the form of an OCaml module signature extended with type invariants and function contracts, in the spirit of JML. Second, an implementation is written in the programming language of Why3 and then verified with respect to the specification. Finally, an OCaml program is obtained by an automated translation. Our methodology is illustrated with ...

  16. A fully automated system for ultrasonic power measurement and simulation accordingly to IEC 61161:2006

    International Nuclear Information System (INIS)

    Costa-Felix, Rodrigo P B; Alvarenga, Andre V; Hekkenberg, Rob

    2011-01-01

    The worldwide accepted standard for ultrasonic power measurement is IEC 61161, presently in its 2nd edition (2006) but under review. To fulfil its requirements, considering that a radiation force balance is to be used as the ultrasonic power detector, a large amount of raw data (mass measurements) has to be collected as a function of time in order to perform all necessary calculations and corrections. Uncertainty determination demands further calculation effort on raw and processed data. Although this can be undertaken in an old-fashioned way, using spreadsheets and manual data collection, automation software is often used in metrology to provide a virtually error-free environment for data acquisition and for repetitive calculations and corrections. Considering this, a fully automated ultrasonic power measurement system was developed and comprehensively tested. A precision balance with 0.1 mg resolution, model CP224S (Sartorius, Germany), was used as the measuring device, and a calibrated continuous-wave ultrasound check source (Precision Acoustics, UK) was the device under test. A 150 ml container filled with degassed water and containing an absorbing target at the bottom was placed on the balance pan. Besides the automation features, a routine for power measurement simulation was implemented, intended as a teaching tool for how ultrasonic power emission behaves in a radiation force balance equipped with an absorbing target. The automation software was considered an effective tool for speeding up ultrasonic power measurements, while allowing accurate calculation and attractive graphical partial and final results.
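
    As background, with a perfectly absorbing target and a vertically incident plane wave, the radiation force method converts the balance reading into ultrasonic power as P = F·c = Δm·g·c, where c is the speed of sound in water. IEC 61161 prescribes further corrections (target imperfection, attenuation, buoyancy, thermal drift) that are deliberately omitted in this simplified sketch; the sound-speed fit used below is a rough approximation.

        def ultrasonic_power(delta_mass_kg, temperature_c=22.0, g=9.81):
            """Simplified radiation-force conversion for an absorbing target:
            P = delta_m * g * c. The quadratic sound-speed fit and the omission of
            the IEC 61161 correction terms are simplifying assumptions."""
            c = 1404.3 + 4.7 * temperature_c - 0.04 * temperature_c ** 2  # m/s, rough fit for water
            return delta_mass_kg * g * c  # watts

        # Example: a 10 mg apparent mass increase corresponds to roughly 0.15 W.
        print(ultrasonic_power(10e-6))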

  17. A 0.5 Tesla Transverse-Field Alternating Magnetic Field Demagnetizer

    Science.gov (United States)

    Schillinger, W. E.; Morris, E. R.; Finn, D. R.; Coe, R. S.

    2015-12-01

    We have built an alternating field demagnetizer that can routinely achieve a maximum field of 0.5 Tesla. It uses an amorphous magnetic core with an air-cooled coil. We have started with a 0.5 T design, which satisfies most of our immediate needs, but we can certainly achieve higher fields. In our design, the magnetic field is transverse to the bore and uniform to 1% over a standard (25 mm) paleomagnetic sample. It is powered by a 1 kW power amplifier and is compatible with our existing sample handler for automated demagnetization and measurement (Morris et al., 2009). Its much higher peak field has enabled us to completely demagnetize many of the samples that we previously could not with commercial equipment. This capability is especially needed for high-coercivity sedimentary and igneous rocks that contain magnetic minerals that alter during thermal demagnetization. It will also enable detailed automated demagnetization of high-coercivity phases in extraterrestrial samples, such as native iron, iron-alloy and sulfide minerals that are common in lunar rocks and meteorites. Furthermore, it has opened the door for us to use the rock-magnetic technique of component analysis, using coercivity distributions derived from very detailed AF demagnetization of NRM and remanence produced in the laboratory to characterize the magnetic mineralogy of sedimentary rocks. In addition to the many benefits this instrument has brought to our own research, a much broader potential impact is to replace the transverse coils in automated AF demagnetization systems, which typically are limited to peak fields around 0.1 T.

  18. Drilling Automation Demonstrations in Subsurface Exploration for Astrobiology

    Science.gov (United States)

    Glass, Brian; Cannon, H.; Lee, P.; Hanagud, S.; Davis, K.

    2006-01-01

    This project proposes to study subsurface permafrost microbial habitats at a relevant Arctic Mars-analog site (Haughton Crater, Devon Island, Canada) while developing and maturing the subsurface drilling and drilling automation technologies that will be required by post-2010 missions. It builds on earlier drilling technology projects to add permafrost and ice-drilling capabilities to 5 m with a lightweight drill that will be automatically monitored and controlled in-situ. Frozen cores obtained with this drill under sterilized protocols will be used in testing three hypotheses pertaining to near-surface physical geology and ground H2O ice distribution, viewed as a habitat for microbial life in subsurface ice and ice-consolidated sediments. Automation technologies employed will demonstrate hands-off diagnostics and drill control, using novel vibrational dynamical analysis methods and model-based reasoning to monitor and identify drilling fault states before and during faults. Three field deployments, to a Mars-analog site with frozen impact crater fallback breccia, will support science goals, provide a rigorous test of drilling automation and lightweight permafrost drilling, and leverage past experience with the field site's particular logistics.

  19. Developing Formal Correctness Properties from Natural Language Requirements

    Science.gov (United States)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of a program to transform natural language specifications into formal notation; specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification and design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, which results in a high learning curve for specification languages and associated tools, while schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulating correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and the next steps.
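
    To make the target notation concrete, a natural language requirement such as "whenever a command is received, the system shall eventually send an acknowledgement" corresponds to an LTL correctness property of roughly the following form; the proposition names cmd_received and ack_sent are invented for illustration and are not taken from the presentation.

        \mathbf{G}\left(\mathit{cmd\_received} \rightarrow \mathbf{F}\,\mathit{ack\_sent}\right)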

  20. TH-CD-BRA-05: First Water Calorimetric Dw Measurement and Direct Measurement of Magnetic Field Correction Factors, KQ,B, in a 1.5 T B-Field of An MRI Linac

    Energy Technology Data Exchange (ETDEWEB)

    Prez, L de; Pooter, J de; Jansen, B [VSL, Delft (Netherlands); Wolthaus, J; Asselen, B van; Woodings, S; Soest, T; Kok, J; Raaymakers, B [University Medical Center Utrecht, Utrecht (Netherlands)

    2016-06-15

    Purpose: Reference dosimetry in MR-guided radiotherapy is performed in the presence of a B-field. As a consequence the response of ionization chambers changes considerably and depends on parameters not considered in traditional reference dosimetry. Therefore future Codes of Practice need ionization chamber correction factors that correct for both the change in beam quality and the presence of a B-field. The objective was to study the feasibility of water calorimetric absorbed-dose measurements in the 1.5 T B-field of an MRLinac and of the direct measurement of kQ,B for the calibration of ionization chambers. Methods: The calorimetric absorbed dose to water Dw was measured with a new water calorimeter in the bore of an MRLinac (TPR20,10 of 0.702). Two waterproof ionization chambers (PTW 30013, IBA FC-65G) were calibrated inside the calorimeter phantom (ND,w,Q,B). Both measurements were normalized to a monitor ionization chamber. Ionization chamber measurements were corrected for conventional influence parameters. Based on the chambers' Co-60 calibrations (ND,w,Q0), measured directly against the calorimeter, the correction factor kQ,B was determined in this study as the ratio of the calibration coefficients in the MRLinac and in Co-60. Additionally, kB was determined based on kQ values obtained with the IAEA TRS-398 Code of Practice. Results: The kQ,B factors of the ionization chambers mentioned above were respectively 0.9488(8) and 0.9445(8), with resulting kB factors of 0.961(13) and 0.952(13), with standard uncertainties on the least significant digit(s) between brackets. Conclusion: Calorimetric Dw measurements and calibration of waterproof ionization chambers were successfully carried out in the 1.5 T B-field of an MRLinac with a standard uncertainty of 0.7%. Preliminary kQ,B and kB factors were determined with standard uncertainties of respectively 0.8% and 1.3%. The kQ,B agrees with an alternative method within 0.4%. The feasibility of water calorimetry in the presence of B-fields
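
    Written out, the reported factors follow from the ratio of the two calibration coefficients; the second relation is a plausible reading of how kB was obtained from the TRS-398 kQ values and is stated here as an assumption rather than the authors' exact procedure:

        k_{Q,B} \;=\; \frac{N_{D,w,Q,B}}{N_{D,w,Q_0}}, \qquad k_{B} \;=\; \frac{k_{Q,B}}{k_{Q}}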

  1. Importance of the Decompensative Correction of the Gravity Field for Study of the Upper Crust: Application to the Arabian Plate and Surroundings

    OpenAIRE

    M. K. Kaban; Sami El Khrepy; Nassir Al-Arifi

    2017-01-01

    The isostatic correction represents one of the most useful “geological” reduction methods of the gravity field. With this correction it is possible to remove a significant part of the effect of deep density heterogeneity, which dominates in the Bouguer gravity anomalies. However, even this reduction does not show the full gravity effect of unknown anomalies in the upper crust since their impact is substantially reduced by the isostatic compensation. We analyze a so-called decompensative corre...

  2. Automated correction on X-rays calibration using transmission chamber and LabVIEWTM

    International Nuclear Information System (INIS)

    Betti, Flavio; Potiens, Maria da Penha Albuquerque

    2009-01-01

    During prolonged exposure times in X-ray calibration procedures at the instruments calibration facility at IPEN, measurements may suffer from efficiency (and therefore intensity) variations of the industrial X-ray generator used. Using a transmission chamber as an online reference chamber during the whole irradiation process is proposed in order to compensate for this error source. Temperature (and pressure) fluctuations may also arise from the performance-limited air conditioning system of the calibration room. As an open ionization chamber, the monitor chamber requires the calculation of a correction factor for the effects of temperature and pressure on air density. Sending and processing data from all related instruments (electrometer, thermometer and barometer) can be achieved more easily by interfacing them to a host computer running an especially developed algorithm in the LabVIEW TM environment, which not only applies the proper correction factors at runtime, but also determines the exact length of time needed to reach a desired condition, which can be a time period, a collected charge, or an air kerma, based on a previous calibration of the whole system against a reference chamber traceable to primary standard dosimetry laboratories. When performing such a calibration, two temperature sensors (secondary standard thermistors) are used simultaneously, one for the transmission chamber and the other for the reference chamber. As the substitution method is used during actual customer calibrations, the readings from the second thermistor can also be used for further corrections when desired. Use of the LabVIEW TM programming language allowed for a shorter development time, and it is also extremely convenient when improvements and modifications are called for. (author)
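
    The temperature-pressure correction mentioned for the open (vented) transmission chamber is the standard air-density factor applied to ionization chamber readings; a minimal sketch follows, with the 20 °C / 101.325 kPa reference conditions chosen here as an assumption, since laboratories may use different reference values.

        def k_tp(temperature_c, pressure_kpa, t_ref_c=20.0, p_ref_kpa=101.325):
            """Air-density correction factor for a vented ionization chamber:
            k_TP = ((273.15 + T) / (273.15 + T_ref)) * (P_ref / P)."""
            return ((273.15 + temperature_c) / (273.15 + t_ref_c)) * (p_ref_kpa / pressure_kpa)

        # Corrected reading = raw charge * k_TP, e.g. at 22.5 degC and 100.8 kPa:
        print(k_tp(22.5, 100.8))  # ~1.014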

  3. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Robert W. [Department of Biomedical Informatics, Uniformed Services University, 4301 Jones Bridge Road, Bethesda, MD 20815 (United States)], E-mail: bob@bob.usuhs.mil; Schluecker, Sebastian [Institute of Physical Chemistry, University of Wuerzburg, Wuerzburg (Germany); Hudson, Bruce S. [Department of Chemistry, Syracuse University, Syracuse, NY (United States)

    2008-01-22

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.

  4. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    International Nuclear Information System (INIS)

    Williams, Robert W.; Schluecker, Sebastian; Hudson, Bruce S.

    2008-01-01

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes

  5. Quantum corrections for spinning particles in de Sitter

    Energy Technology Data Exchange (ETDEWEB)

    Fröb, Markus B. [Department of Mathematics, University of York, Heslington, York, YO10 5DD (United Kingdom); Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu [Departament de Física Quàntica i Astrofísica, Institut de Ciències del Cosmos (ICC), Universitat de Barcelona (UB), C/ Martí i Franquès 1, 08028 Barcelona (Spain)

    2017-04-01

    We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.

  6. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  7. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.

  8. Automated X-ray image analysis for cargo security: Critical review and future promise.

    Science.gov (United States)

    Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.

  9. Automated image-matching technique for comparative diagnosis of the liver on CT examination

    International Nuclear Information System (INIS)

    Okumura, Eiichiro; Sanada, Shigeru; Suzuki, Masayuki; Tsushima, Yoshito; Matsui, Osamu

    2005-01-01

    When interpreting enhanced computed tomography (CT) images of the upper abdomen, radiologists visually select a set of images of the same anatomical positions from two or more CT image series (i.e., non-enhanced and contrast-enhanced CT images at arterial and delayed phase) to depict and characterize any abnormalities. The same process is also necessary to create subtraction images by computer. We have developed an automated image selection system using a template-matching technique that allows the recognition of image sets at the same anatomical position from two CT image series. Using the template-matching technique, we compared several anatomical structures in each CT image at the same anatomical position. As the position of the liver may shift according to respiratory movement, not only the shape of the liver but also the gallbladder and other prominent structures included in the CT images were compared to allow appropriate selection of a set of CT images. This novel technique was applied in 11 upper abdominal CT examinations. In CT images with a slice thickness of 7.0 or 7.5 mm, the percentage of image sets selected correctly by the automated procedure was 86.6±15.3% per case. In CT images with a slice thickness of 1.25 mm, the percentages of correct selection of image sets by the automated procedure were 79.4±12.4% (non-enhanced and arterial-phase CT images) and 86.4±10.1% (arterial- and delayed-phase CT images). This automated method is useful for assisting in interpreting CT images and in creating digital subtraction images. (author)
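
    A minimal illustration of such slice matching is given below: each slice of one series is scored against a reference slice from the other series with a normalized cross-correlation, and the best-scoring slice is selected as the anatomically matching one. The whole-slice similarity measure is a simplification of the multi-structure template matching described above.

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation between two equally sized 2-D slices."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))

        def best_matching_slice(reference_slice, other_series):
            """Return the index in `other_series` (a list of 2-D arrays) whose
            slice is most similar to `reference_slice`."""
            scores = [ncc(reference_slice, s) for s in other_series]
            return int(np.argmax(scores))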

  10. Radiative corrections to fermion matter and nontopological solitons

    International Nuclear Information System (INIS)

    Perry, R.J.

    1984-01-01

    This thesis addresses the effects of one loop radiative corrections to fermion matter and nontopological solitons. The effective action formalism is employed to explore the effects of these corrections on the ground state energy and scalar field expectation value of a system containing valence fermions, which are introduced using a chemical potential. This formalism is discussed extensively, and detailed calculations are presented for the Friedberg-Lee model. The techniques illustrated can be used in any renormalizable field theory and can be extended to include higher order quantum corrections

  11. On the truncation of the azimuthal mode spectrum of high-order probes in probe-corrected spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Laitinen, Tommi

    2011-01-01

    Azimuthal mode (m mode) truncation of a high-order probe pattern in probe-corrected spherical near-field antenna measurements is studied in this paper. The results of this paper provide rules for appropriate and sufficient m-mode truncation for non-ideal first-order probes and odd-order probes wi...

  12. Automated jitter correction for IR image processing to assess the quality of W7-X high heat flux components

    International Nuclear Information System (INIS)

    Greuner, H; De Marne, P; Herrmann, A; Boeswirth, B; Schindler, T; Smirnow, M

    2009-01-01

    An automated IR image processing method was developed to evaluate the surface temperature distribution of cyclically loaded high heat flux (HHF) plasma facing components. IPP Garching will perform the HHF testing of a high percentage of the series production of the WENDELSTEIN 7-X (W7-X) divertor targets to minimize the number of undiscovered uncertainties in the finally installed components. The HHF tests will be performed as quality assurance (QA) complementary to the non-destructive examination (NDE) methods used during the manufacturing. The IR analysis of an HHF-loaded component detects growing debonding of the plasma facing material, made of carbon fibre composite (CFC), after a few thermal cycles. In the case of the prototype testing, the IR data was processed manually. However, a QA method requires a reliable, reproducible and efficient automated procedure. Using the example of the HHF testing of W7-X pre-series target elements, the paper describes the developed automated IR image processing method. The algorithm is based on an iterative two-step correlation analysis with an individually defined reference pattern for the determination of the jitter.
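
    The abstract does not spell out the correlation analysis in detail, so the following is only a generic sketch of how a frame-to-reference jitter (an integer pixel shift) can be estimated by FFT-based phase correlation before the temperature evaluation; it is not the two-step algorithm used for the W7-X targets.

        import numpy as np

        def estimate_jitter(frame, reference):
            """Estimate the (dy, dx) pixel shift of `frame` relative to `reference`
            by phase correlation; both inputs are 2-D arrays of equal shape."""
            F = np.fft.fft2(frame)
            R = np.fft.fft2(reference)
            cross_power = F * np.conj(R)
            cross_power /= np.abs(cross_power) + 1e-12
            corr = np.fft.ifft2(cross_power).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map shifts larger than half the image size to negative displacements.
            if dy > frame.shape[0] // 2:
                dy -= frame.shape[0]
            if dx > frame.shape[1] // 2:
                dx -= frame.shape[1]
            return dy, dx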

  13. Next-to-next-to-leading order gravitational spin-orbit coupling via the effective field theory for spinning objects in the post-Newtonian scheme

    Energy Technology Data Exchange (ETDEWEB)

    Levi, Michele [Université Pierre et Marie Curie, CNRS-UMR 7095, Institut d' Astrophysique de Paris, 98 bis Boulevard Arago, 75014 Paris (France); Steinhoff, Jan, E-mail: michele.levi@upmc.fr, E-mail: jan.steinhoff@aei.mpg.de [Max-Planck-Institute for Gravitational Physics (Albert-Einstein-Institute), Am Mühlenberg 1, 14476 Potsdam-Golm (Germany)

    2016-01-01

    We implement the effective field theory for gravitating spinning objects in the post-Newtonian scheme at the next-to-next-to-leading order level to derive the gravitational spin-orbit interaction potential at the third and a half post-Newtonian order for rapidly rotating compact objects. From the next-to-next-to-leading order interaction potential, which we obtain here in a Lagrangian form for the first time, we derive straightforwardly the corresponding Hamiltonian. The spin-orbit sector constitutes the most elaborate spin dependent sector at each order, and accordingly we encounter a proliferation of the relevant Feynman diagrams, and a significant increase of the computational complexity. We present in detail the evaluation of the interaction potential, going over all contributing Feynman diagrams. The computation is carried out in terms of the ''nonrelativistic gravitational'' fields, which are advantageous also in spin dependent sectors, together with the various gauge choices included in the effective field theory for gravitating spinning objects, which also optimize the calculation. In addition, we automatize the effective field theory computations, and carry out the automated computations in parallel. Such automated effective field theory computations would be most useful to obtain higher order post-Newtonian corrections. We compare our Hamiltonian to the ADM Hamiltonian, and arrive at a complete agreement between the ADM and effective field theory results. Finally, we provide Hamiltonians in the center of mass frame, and complete gauge invariant relations among the binding energy, angular momentum, and orbital frequency of an inspiralling binary with generic compact spinning components to third and a half post-Newtonian order. The derivation presented here is essential to obtain further higher order post-Newtonian corrections, and to reach the accuracy level required for the successful detection of gravitational radiation.

  14. Flexible Automation System for Determination of Elemental Composition of Incrustations in Clogged Biliary Endoprostheses Using ICP-MS.

    Science.gov (United States)

    Fleischer, Heidi; Ramani, Kinjal; Blitti, Koffi; Roddelkopf, Thomas; Warkentin, Mareike; Behrend, Detlef; Thurow, Kerstin

    2018-02-01

    Automation systems are well established in industries and life science laboratories, especially in bioscreening and high-throughput applications. An increasing demand for automation solutions can be seen in the field of analytical measurement in chemical synthesis, quality control, and medical and pharmaceutical fields, as well as research and development. In this study, an automation solution was developed and optimized for the investigation of new biliary endoprostheses (stents), which should reduce clogging after implantation in the human body. The material inside the stents (incrustations) has to be controlled regularly and under identical conditions. The elemental composition is one criterion to be monitored in stent development. The manual procedure was transferred to an automated process including sample preparation, elemental analysis using inductively coupled plasma mass spectrometry (ICP-MS), and data evaluation. Due to safety issues, microwave-assisted acid digestion was executed outside of the automation system. The performance of the automated process was determined and validated. The measurement results and the processing times were compared for both the manual and the automated procedure. Finally, real samples of stent incrustations and pig bile were analyzed using the automation system.

  15. High-speed atmospheric correction for spectral image processing

    Science.gov (United States)

    Perkins, Timothy; Adler-Golden, Steven; Cappelaere, Patrice; Mandl, Daniel

    2012-06-01

    Land and ocean data product generation from visible-through-shortwave-infrared multispectral and hyperspectral imagery requires atmospheric correction or compensation, that is, the removal of atmospheric absorption and scattering effects that contaminate the measured spectra. We have recently developed a prototype software system for automated, low-latency, high-accuracy atmospheric correction based on a C++-language version of the Spectral Sciences, Inc. FLAASH™ code. In this system, pre-calculated look-up tables replace on-the-fly MODTRAN® radiative transfer calculations, while the portable C++ code enables parallel processing on multicore/multiprocessor computer systems. The initial software has been installed on the Sensor Web at NASA Goddard Space Flight Center, where it is currently atmospherically correcting new data from the EO-1 Hyperion and ALI sensors. Computation time is around 10 s per data cube per processor. Further development will be conducted to implement the new atmospheric correction software on board the upcoming HyspIRI mission's Intelligent Payload Module, where it would generate data products in near-real time for Direct Broadcast to the ground. The rapid turn-around of data products made possible by this software would benefit a broad range of applications in areas of emergency response, environmental monitoring and national defense.
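
    The core of a look-up-table approach can be illustrated with a minimal sketch: pre-computed path radiance, gain, and spherical albedo are interpolated for the retrieved atmospheric state, and the at-sensor radiance is inverted to surface reflectance. The grid values and the single water-vapour axis below are invented placeholders, not FLAASH or MODTRAN outputs.

```python
# Hedged sketch of LUT-based atmospheric correction for one band, assuming a
# uniform Lambertian surface; real systems interpolate radiative-transfer
# tables over several atmospheric and geometric axes.
import numpy as np

# Pre-calculated LUT over column water vapour (g/cm^2): path radiance Lp,
# combined gain A (solar irradiance, geometry, two-way transmittance), and
# spherical albedo S, for a single band and viewing geometry (toy numbers).
wv_grid = np.array([1.0, 2.0, 3.0, 4.0])
Lp_grid = np.array([12.0, 13.5, 15.2, 17.1])   # W m^-2 sr^-1 um^-1
A_grid  = np.array([130.0, 124.0, 118.0, 112.0])
S_grid  = np.array([0.10, 0.12, 0.14, 0.16])

def surface_reflectance(L_toa, wv):
    """Invert at-sensor radiance to surface reflectance by LUT interpolation."""
    Lp = np.interp(wv, wv_grid, Lp_grid)
    A  = np.interp(wv, wv_grid, A_grid)
    S  = np.interp(wv, wv_grid, S_grid)
    dL = L_toa - Lp
    return dL / (A + S * dL)              # rho = (L - Lp) / (A + S (L - Lp))

print(surface_reflectance(L_toa=45.0, wv=2.5))   # ~0.25 with these toy numbers
```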

  16. Drift correction for single-molecule imaging by molecular constraint field, a distance minimum metric

    International Nuclear Information System (INIS)

    Han, Renmin; Wang, Liansan; Xu, Fan; Zhang, Yongdeng; Zhang, Mingshu; Liu, Zhiyong; Ren, Fei; Zhang, Fa

    2015-01-01

    The recent developments of far-field optical microscopy (single-molecule imaging techniques) have overcome the diffraction barrier of light and improved image resolution by a factor of ten compared with conventional light microscopy. These techniques utilize the stochastic switching of probe molecules to overcome the diffraction limit and determine the precise localizations of molecules, which often requires a long image acquisition time. However, long acquisition times increase the risk of sample drift. In the case of high-resolution microscopy, sample drift would decrease the image resolution. In this paper, we propose a novel metric based on the distance between molecules to solve the drift-correction problem. The proposed metric directly uses the position information of molecules to estimate the frame drift. We also designed an algorithm to implement the metric for the general application of drift correction. There are two advantages of our method: first, because our method does not require spatial binning of the positions of molecules but operates directly on the positions, it is more natural for single-molecule imaging techniques; second, our method can estimate drift with a small number of positions in each temporal bin, which may extend its potential application. The effectiveness of our method has been demonstrated by both simulated data and experiments on single-molecule images.
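
    The flavour of a distance-based drift estimate can be sketched as follows: the shift of one temporal bin of localizations is chosen to minimize a nearest-neighbour distance cost against a reference bin. The brute-force grid search and the names are illustrative simplifications, not the authors' algorithm.

```python
# Hedged sketch: estimate the drift of a temporal bin of single-molecule
# localizations by minimizing the mean nearest-neighbour distance to a
# reference bin, operating directly on positions rather than binned images.
import numpy as np
from scipy.spatial import cKDTree

def estimate_drift(ref_xy, bin_xy, search=50.0, step=2.0):
    """Return the (dx, dy) drift of bin_xy relative to ref_xy (e.g. in nm)."""
    ref_xy, bin_xy = np.asarray(ref_xy, float), np.asarray(bin_xy, float)
    tree = cKDTree(ref_xy)
    best, best_cost = (0.0, 0.0), np.inf
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            dist, _ = tree.query(bin_xy - [dx, dy])   # nearest-neighbour distances
            cost = dist.mean()                        # the distance metric to minimize
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best
```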

  17. Axis Measurements, Field Quality and Quench Performance of the First LHC Short Straight Sections

    CERN Document Server

    Sanfilippo, S; Calvi, M; Chohan, V; Durante, M; Hagen, P; Pugnat, P; Smirnov, N; Schnizer, P; Sammut, N; Siemko, A; Simon, F; Stafiniak, A; Todesco, Ezio; Tortschanoff, Theodor; Walckiers, L

    2005-01-01

    The series testing at 1.9 K of the 360 Short Straight Sections (SSS) for the Large Hadron Collider started at CERN in September 2003. The SSS contain the lattice quadrupoles and correction magnets in a common cryostat. The lattice quadrupoles feature two collared coils with 56 mm bore assembled in a common yoke. The coils are wound in two layers from 15.1 mm wide NbTi cable, insulated with polyimide tape. The paper reviews the main results of the tests performed in superfluid helium. The magnetic field and magnetic center position of the quadrupoles and associated correctors were measured with two independent systems, namely an automated scanner and a single stretched wire technique. The quench training, the field quality and the magnetic alignment measurements are presented and discussed in terms of the specifications and expected performances of these magnets in the LHC. We discuss in detail the field quality in terms of multipole errors measured at injection and nominal field and decomposed into geometric an...

  18. PROCEEDINGS OF THE WORKSHOP ON LHC INTERACTION REGION CORRECTION SYSTEMS

    International Nuclear Information System (INIS)

    FISCHER, W.; WEI, J.

    1999-01-01

    The Workshop on LHC Interaction Region Correction Systems was held at Brookhaven National Laboratory, Upton, New York, on 6 and 7 May 1999. It was attended by 25 participants from 5 institutions. The performance of the Large Hadron Collider (LHC) at collision energy is limited by the field quality of the interaction region quadrupoles and dipoles. In three sessions the workshop addressed the field quality of these magnets, reviewed the principles and efficiency of global and local correction schemes and finalized a corrector layout. The session on Field Quality Issues, chaired by J. Strait (FNAL), discussed the progress made by KEK and FNAL in achieving the best possible field quality in the interaction region quadrupoles. Results of simulation studies assessing the effects of magnetic field errors were presented. Attention was given to the uncertainties in predicting and measuring field errors. The session on Global Correction, chaired by J.-P. Koutchouk (CERN), considered methods of reducing the nonlinear detuning or resonance driving terms in the accelerator one-turn map by either sorting or correcting. The session also discussed the crossing angle dependence of the dynamic aperture and operational experience from LEP. The session on Local Correction, chaired by T. Taylor (CERN), discussed the location, strength and effectiveness of multipole correctors in the interaction regions for both proton and heavy ion operation. Discussions were based on technical feasibility considerations and dynamic aperture requirements. The work on linear corrections in the interaction regions was reviewed.

  19. Repeatability of an automated Landolt C test, compared with the early treatment of diabetic retinopathy study (ETDRS) chart testing.

    Science.gov (United States)

    Ruamviboonsuk, Paisan; Tiensuwan, Montip; Kunawut, Catleya; Masayaanon, Patcharapim

    2003-10-01

    To evaluate the repeatability of visual acuity scores from the automated test and compare them with the Early Treatment of Diabetic Retinopathy Study (ETDRS) chart. Design: instrument validation study based on a model of repeatability study in two observations. Methods: a prospective, clinic-based, comparative study. A total of 206 participants without ocular diseases and refractive errors in their right eyes were randomly enrolled in the automated group, in which 107 participants performed the automated test, and the ETDRS group, in which 99 participants read the ETDRS chart. All participants were tested with only their right eyes without corrections at 4 meters and came back to have the same tests 1 week later. The automated test used the Landolt rings as optotypes and was conducted by a low-end personal computer with a 15-inch monitor and a wireless keyboard. Outcome measures were the "letter" score, calculated by counting every correct response to optotypes, and the "threshold curve" score, interpreted from the optotype size at the midpoint of a visual acuity threshold curve. The 95% confidence intervals of test-retest of visual acuity scores from the automated test are comparable to the ETDRS chart (.143 compared with .125 for letter scores, .145 compared with .122 for threshold curve scores). The score repeatabilities, calculated from the standard deviations of test-retest, from the automated test are also comparable to the ETDRS chart (.201 compared with .177 for letter scores, .206 compared with .172 for threshold curve scores). All comparisons demonstrated no statistical difference (P > .05). The automated testing system in this study enables practical measurement of visual acuity with the Landolt rings. The system's repeatability, which is comparable to the ETDRS chart, supports its role as an alternative tool for measuring outcome in new clinical research. Its ability to practically generate visual acuity threshold curves may also be useful in future clinical research studies.

  20. webPOISONCONTROL: can poison control be automated?

    Science.gov (United States)

    Litovitz, Toby; Benson, Blaine E; Smolinske, Susan

    2016-08-01

    A free webPOISONCONTROL app allows the public to determine the appropriate triage of poison ingestions without calling poison control. If accepted and safe, this alternative expands access to reliable poison control services to those who prefer the Internet over the telephone. This study assesses feasibility, safety, and user-acceptance of automated online triage of asymptomatic, nonsuicidal poison ingestion cases. The user provides substance name, amount, age, and weight in an automated online tool or downloadable app, and is given a specific triage recommendation to stay home, go to the emergency department, or call poison control for further guidance. Safety was determined by assessing outcomes of consecutive home-triaged cases with follow-up and by confirming the correct application of algorithms. Case completion times and user perceptions of speed and ease of use were measures of user-acceptance. Of 9256 cases, 73.3% were triaged to home, 2.1% to an emergency department, and 24.5% directed to call poison control. Children younger than 6 years were involved in 75.2% of cases. Automated follow-up was done in 31.2% of home-triaged cases; 82.3% of these had no effect. No major or fatal outcomes were reported. More than 91% of survey respondents found the tool quick and easy to use. Median case completion time was 4.1 minutes. webPOISONCONTROL augments traditional poison control services by providing automated, accurate online access to case-specific triage and first aid guidance for poison ingestions. It is safe, quick, and easy to use. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Automated drumlin shape and volume estimation using high resolution LiDAR imagery (Curvature Based Relief Separation): A test from the Wadena Drumlin Field, Minnesota

    Science.gov (United States)

    Yu, Peter; Eyles, Nick; Sookhan, Shane

    2015-10-01

    Resolving the origin(s) of drumlins and related megaridges in areas of megascale glacial lineations (MSGL) left by paleo-ice sheets is critical to understanding how ancient ice sheets interacted with their sediment beds. MSGL is now linked with fast-flowing ice streams but there is a broad range of erosional and depositional models. Further progress is reliant on constraining fluxes of subglacial sediment at the ice sheet base, which in turn is dependent on morphological data such as landform shape and elongation and, most importantly, landform volume. Past practice in determining shape has employed a broad range of geomorphological methods, from strictly visualisation techniques to more complex semi-automated and automated drumlin extraction methods. This paper reviews and builds on currently available visualisation, semi-automated and automated extraction methods and presents a new Curvature Based Relief Separation (CBRS) technique for drumlin mapping. This uses curvature analysis to generate a base level from which topography can be normalized and drumlin volume can be derived. This methodology is tested using a high resolution (3 m) LiDAR elevation dataset from the Wadena Drumlin Field, Minnesota, USA, which was constructed by the Wadena Lobe of the Laurentide Ice Sheet ca. 20,000 years ago and which as a whole contains 2000 drumlins across an area of 7500 km2. This analysis demonstrates that CBRS provides an objective and robust procedure for automated drumlin extraction. There is strong agreement with manually selected landforms, but the method is also capable of resolving features that were not detectable manually, thereby considerably expanding the known population of streamlined landforms. CBRS provides an effective automatic method for visualisation of large areas of the streamlined beds of former ice sheets and for modelling sediment fluxes below ice sheets.
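
    The general flavour of a curvature-based relief separation can be sketched as follows: a curvature proxy flags concave or flat cells as the regional base level, a base surface is interpolated through them, and landform volume is the positive residual above that surface. The curvature definition, threshold, and interpolation below are illustrative choices, not the published CBRS parameters.

```python
# Hedged sketch of curvature-based base-level separation and volume estimation
# from a DEM; thresholds and the Laplacian curvature proxy are assumptions.
import numpy as np
from scipy.interpolate import griddata

def drumlin_volume(dem, cell=3.0, curv_thresh=0.0):
    # Simple Laplacian as a curvature proxy.
    gy, gx = np.gradient(dem, cell)
    gyy, _ = np.gradient(gy, cell)
    _, gxx = np.gradient(gx, cell)
    curvature = gxx + gyy
    rows, cols = np.indices(dem.shape)
    base_pts = curvature <= curv_thresh          # concave / flat cells -> base level
    base = griddata((rows[base_pts], cols[base_pts]), dem[base_pts],
                    (rows, cols), method='linear')
    residual = np.clip(dem - base, 0, None)      # normalized (de-trended) topography
    return np.nansum(residual) * cell * cell     # volume in DEM units cubed
```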

  2. Early Validation of Automation Plant Control Software using Simulation Based on Assumption Modeling and Validation Use Cases

    Directory of Open Access Journals (Sweden)

    Veronika Brandstetter

    2015-10-01

    Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality and with minimal strain on resources. A key driver in conducting these processes is the automation plant’s control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes, and from the physical plant design. Currently, the validation of the control software often starts late in the engineering process – once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects is, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on the example of an actual plant project from the automation industry and present its technical implementation.

  3. Semi-automated Robust Quantification of Lesions (SRQL Toolbox

    Directory of Open Access Journals (Sweden)

    Kaori L Ito

    2017-05-01

    Full Text Available Quantifying lesions in a reliable manner is fundamental for studying the effects of neuroanatomical changes related to recovery in the post-stroke brain. However, the wide variability in lesion characteristics across individuals makes manual lesion segmentation a challenging and often subjective process. This often makes it difficult to combine stroke lesion data across multiple research sites, due to subjective differences in how lesions may be defined. Thus, we developed the Semi-automated Robust Quantification of Lesions (SRQL; https://github.com/npnl/SRQL; DOI: 10.5281/zenodo.557114) Toolbox that performs several analysis steps: (1) a white matter intensity correction that removes healthy white matter voxels from the lesion mask, thereby making lesions slightly more robust to subjective errors; (2) an automated report of descriptive statistics on lesions for simplified comparison between or across groups; and (3) an option to perform analyses in both native and standard space to facilitate analyses in either space. Here, we describe the methods implemented in the toolbox.

  4. Automated discovery systems and the inductivist controversy

    Science.gov (United States)

    Giza, Piotr

    2017-09-01

    The paper explores possible influences that some developments in the branches of AI called automated discovery and machine learning systems might have upon some aspects of the old debate between Francis Bacon's inductivism and Karl Popper's falsificationism. Donald Gillies facetiously calls this controversy 'the duel of two English knights', and claims, after some analysis of historical cases of discovery, that Baconian induction had been used in science very rarely, or not at all, although he argues that the situation has changed with the advent of machine learning systems. (Some clarification of the terms machine learning and automated discovery is required here. The key idea of machine learning is that, given data with associated outcomes, software can be trained to make those associations in future cases, which typically amounts to inducing some rules from individual cases classified by the experts. Automated discovery (also called machine discovery) deals with uncovering new knowledge that is valuable for human beings, and its key idea is that discovery is like other intellectual tasks and that the general idea of heuristic search in problem spaces applies also to discovery tasks. However, since machine learning systems discover (very low-level) regularities in data, throughout this paper I use the generic term automated discovery for both kinds of systems. I will elaborate on this later on). Gillies's line of argument can be generalised: thanks to automated discovery systems, philosophers of science have at their disposal a new tool for empirically testing their philosophical hypotheses. Accordingly, in the paper, I will address the question of which of the two philosophical conceptions of scientific method is better vindicated in view of the successes and failures of systems developed within three major research programmes in the field: machine learning systems in the Turing tradition, normative theory of scientific discovery formulated by Herbert Simon

  5. Discussion of a Possible Corrected Black Hole Entropy

    Directory of Open Access Journals (Sweden)

    Miao He

    2018-01-01

    Full Text Available Einstein’s equation can be interpreted as the first law of thermodynamics near a spherically symmetric horizon. By revisiting Einstein gravity with a more general static spherically symmetric metric, we find that the entropy acquires a correction in Einstein gravity. Using this method, we investigate the Eddington-inspired Born-Infeld (EiBI) gravity. Without a matter field, we can also derive the first law in EiBI gravity. With an electromagnetic field, as the field equations have a more general spherically symmetric solution in EiBI gravity, we find that the correction of the entropy can be generalized to EiBI gravity. Furthermore, we point out that Einstein gravity and EiBI gravity might be equivalent on the event horizon. Finally, under EiBI gravity with an electromagnetic field, a specific corrected black hole entropy is given.

  6. Corrected body surface potential mapping.

    Science.gov (United States)

    Krenzke, Gerhard; Kindt, Carsten; Hetzer, Roland

    2007-02-01

    In the method for body surface potential mapping described here, the influence of thorax shape on measured ECG values is corrected. The distances of the ECG electrodes from the electrical heart midpoint are determined using a special device for ECG recording. These distances are used to correct the ECG values as if they had been measured on the surface of a sphere with a radius of 10 cm with its midpoint localized at the electrical heart midpoint. The equipotential lines of the electrical heart field are represented on the virtual surface of such a sphere. It is demonstrated that the character of a dipole field is better represented if the influence of the thorax shape is reduced. The site of the virtual reference electrode is also important for the dipole character of the representation of the electrical heart field.

  7. Corrective Action Investigation Plan for Corrective Action Unit 554: Area 23 Release Site, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert F.

    2004-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information for conducting site investigation activities at Corrective Action Unit (CAU) 554: Area 23 Release Site, Nevada Test Site, Nevada. Information presented in this CAIP includes facility descriptions, environmental sample collection objectives, and criteria for the selection and evaluation of environmental samples. Corrective Action Unit 554 is located in Area 23 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 554 is comprised of one Corrective Action Site (CAS), which is: 23-02-08, USTs 23-115-1, 2, 3/Spill 530-90-002. This site consists of soil contamination resulting from a fuel release from underground storage tanks (USTs). Corrective Action Site 23-02-08 is being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation prior to evaluating corrective action alternatives and selecting the appropriate corrective action for this CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document for CAU 554. Corrective Action Site 23-02-08 will be investigated based on the data quality objectives (DQOs) developed on July 15, 2004, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; and contractor personnel. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 554. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to CAS 23-02-08. The scope of the corrective action investigation

  8. Corrective Action Investigation Plan for Corrective Action Unit 542: Disposal Holes, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Laura Pastor

    2006-01-01

    Corrective Action Unit (CAU) 542 is located in Areas 3, 8, 9, and 20 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 542 is comprised of eight corrective action sites (CASs): (1) 03-20-07, ''UD-3a Disposal Hole''; (2) 03-20-09, ''UD-3b Disposal Hole''; (3) 03-20-10, ''UD-3c Disposal Hole''; (4) 03-20-11, ''UD-3d Disposal Hole''; (5) 06-20-03, ''UD-6 and UD-6s Disposal Holes''; (6) 08-20-01, ''U-8d PS No.1A Injection Well Surface Release''; (7) 09-20-03, ''U-9itsy30 PS No.1A Injection Well Surface Release''; and (8) 20-20-02, ''U-20av PS No.1A Injection Well Surface Release''. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on January 30, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 542. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 542 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Conduct geophysical surveys to

  9. Findings from Seven Years of Field Performance Data for Automated Demand Response in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Kiliccote, Sila; Piette, Mary Ann; Mathieu, Johanna; Parrish, Kristen

    2010-05-14

    California is a leader in automating demand response (DR) to promote low-cost, consistent, and predictable electric grid management tools. Over 250 commercial and industrial facilities in California participate in fully-automated programs providing over 60 MW of peak DR savings. This paper presents a summary of Open Automated DR (OpenADR) implementation by each of the investor-owned utilities in California. It provides a summary of participation, DR strategies and incentives. Commercial buildings can reduce peak demand from 5 to 15 percent, with an average of 13 percent. Industrial facilities shed much higher loads. For buildings with multi-year savings we evaluate their load variability and shed variability. We provide a summary of control strategies deployed, along with costs to install automation. We report on how the electric DR control strategies perform over many years of events. We benchmark the peak demand of this sample of buildings against their past baselines to understand the differences in building performance over the years. This is done with peak demand intensities and load factors. The paper also describes the importance of these data in helping to understand possible techniques to reach net zero energy using peak day dynamic control capabilities in commercial buildings. We present an example in which the electric load shape changed as a result of a lighting retrofit.

  10. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time house automation research project, designed and implemented in both software and hardware, capable of automating a house's electrical systems and providing a security system to detect the presence of unexpected behavior.

  11. Gypsy moth (Lepidoptera: Lymantriidae) flight behavior and phenology based on field-deployed automated pheromone-baited traps

    Science.gov (United States)

    Patrick C. Tobin; Kenneth T. Klein; Donna S. Leonard

    2009-01-01

    Populations of the gypsy moth, Lymantria dispar (L.), are extensively monitored in the United States through the use of pheromone-baited traps. We report on the use of automated pheromone-baited traps that use a recording sensor and data logger to record the unique date-time stamp of males as they enter the trap. We deployed a total of 352 automated traps...

  12. Automation of the CHARMM General Force Field (CGenFF) I: bond perception and atom typing.

    Science.gov (United States)

    Vanommeslaeghe, K; MacKerell, A D

    2012-12-21

    Molecular mechanics force fields are widely used in computer-aided drug design for the study of drug-like molecules alone or interacting with biological systems. In simulations involving biological macromolecules, the biological part is typically represented by a specialized biomolecular force field, while the drug is represented by a matching general (organic) force field. In order to apply these general force fields to an arbitrary drug-like molecule, functionality for assignment of atom types, parameters, and charges is required. In the present article, which is part I of a series of two, we present the algorithms for bond perception and atom typing for the CHARMM General Force Field (CGenFF). The CGenFF atom typer first associates attributes to the atoms and bonds in a molecule, such as valence, bond order, and ring membership among others. Of note are a number of features that are specifically required for CGenFF. This information is then used by the atom typing routine to assign CGenFF atom types based on a programmable decision tree. This allows for straightforward implementation of CGenFF's complicated atom typing rules and for equally straightforward updating of the atom typing scheme as the force field grows. The presented atom typer was validated by assigning correct atom types on 477 model compounds including in the training set as well as 126 test-set molecules that were constructed to specifically verify its different components. The program may be utilized via an online implementation at https://www.paramchem.org/ .
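
    A toy illustration of the decision-tree idea is sketched below: each rule inspects pre-computed atom attributes (element, aromaticity, ring membership, hydrogen count) and returns a type. The rules are invented simplifications and the returned strings are only meant to look like CGenFF-style names; this is not the force field's actual typing logic.

```python
# Hedged, toy sketch of rule-based atom typing via a programmable decision tree.
from dataclasses import dataclass

@dataclass
class Atom:
    element: str
    aromatic: bool = False
    in_ring: bool = False
    n_hydrogens: int = 0
    formal_charge: int = 0

def assign_type(atom: Atom) -> str:
    # Each branch encodes one typing rule; real rules are far more detailed.
    if atom.element == "C":
        if atom.aromatic:
            return "CG2R61"          # aromatic ring carbon (illustrative)
        if atom.in_ring:
            return "CG3C52"          # aliphatic ring carbon (illustrative)
        if atom.n_hydrogens == 3:
            return "CG331"           # methyl carbon (illustrative)
        return "CG301"               # other aliphatic carbon (illustrative)
    if atom.element == "N":
        return "NG2S1" if atom.n_hydrogens == 1 else "NG301"
    if atom.element == "O":
        return "OG2D1" if atom.n_hydrogens == 0 else "OG311"
    return "UNTYPED"

print(assign_type(Atom("C", aromatic=True, in_ring=True, n_hydrogens=1)))
```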

  13. Automated high resolution full-field spatial coherence tomography for quantitative phase imaging of human red blood cells

    Science.gov (United States)

    Singla, Neeru; Dubey, Kavita; Srivastava, Vishal; Ahmad, Azeem; Mehta, D. S.

    2018-02-01

    We developed an automated high-resolution full-field spatial coherence tomography (FF-SCT) microscope for quantitative phase imaging that is based on spatial, rather than temporal, coherence gating. Red and green laser light was used to obtain quantitative phase images of unstained human red blood cells (RBCs). This study uses morphological parameters of unstained RBC phase images to distinguish between normal and infected cells. We recorded a single interferogram with the FF-SCT microscope at the red and green wavelengths and averaged the two phase images to further reduce noise artifacts. In order to distinguish anemia-infected from normal cells, different morphological features were extracted, and these features were used to train a machine-learning ensemble model to classify RBCs with high accuracy.

  14. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  15. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    Science.gov (United States)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  16. A two-dimensional matrix correction for off-axis portal dose prediction errors

    International Nuclear Information System (INIS)

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-01-01

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As
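
    A minimal sketch of what a 2D matrix correction amounts to in practice is given below: a per-pixel factor is derived from calibration fields that span the detecting surface and is then multiplied into each subsequent image. The names and the simple mean-ratio estimator are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch: derive a per-pixel 2D correction matrix from calibration
# fields and apply it to predicted (calibrated) portal dose images.
import numpy as np

def build_correction_matrix(measured_stack, predicted_stack, eps=1e-6):
    """measured_stack, predicted_stack: arrays of shape (n_fields, ny, nx)."""
    ratios = measured_stack / np.clip(predicted_stack, eps, None)
    return ratios.mean(axis=0)            # one correction factor per pixel

def apply_correction(predicted_image, correction_matrix):
    """Multiply the prediction by the pixel-wise correction factors."""
    return predicted_image * correction_matrix
```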

  17. Automated evaluation of ultrasonic indications. State of the art -development trends. Pt. 1

    International Nuclear Information System (INIS)

    Hansch, M.K.T.; Stegemann, D.

    1994-01-01

    Future requirements of reliability and reproducibility in quality assurance demand computer evaluation of defect indications. The ultrasonic method with its large field of applications and a high potential for automation provides all preconditions for fully automated inspection. The survey proposes several desirable hardware improvements, data acquisition requirements and software configurations. (orig.) [de

  18. Air Force construction automation/robotics

    Science.gov (United States)

    Nease, AL; Dusseault, Christopher

    1994-01-01

    The Air Force has several unique requirements that are being met through the development of construction robotic technology. The missions associated with these requirements place construction/repair equipment operators in potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible and that more stringent construction repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL to develop robotic teleoperation, telerobotics, robotic vehicle communications, automated damage assessment, vehicle navigation, mission/vehicle task control architecture, and associated computing environment. The ultimate goal is the fielding of robotic repair capability operating at the level of supervised autonomy. The authors of this paper will discuss current and planned efforts in construction/repair, explosive ordnance disposal, hazardous waste cleanup, fire fighting, and space construction.

  19. Analysis of the failures and corrective actions for the LHC cryogenics radiation tolerant electronics and its field instruments

    Energy Technology Data Exchange (ETDEWEB)

    Balle, Christoph; Casas, Juan; Vauthier, Nicolas [CERN, TE Department, 1211 Geneva (Switzerland)

    2014-01-29

    The LHC cryogenic system radiation tolerant electronics and their associated field instruments have been in nominal conditions since before the commissioning of the first LHC beams in September 2008. This system is made of about 15’000 field instruments (thermometers, pressure sensors, liquid helium level gauges, electrical heaters and position switches), 7’500 electronic cards and 853 electronic crates. Since mid-2008 a software tool has been deployed that allows an operator to report a problem and then lists the corrective actions. The tool is a great help in detecting recurrent problems that may be tackled by a hardware or software consolidation. The corrective actions range from simple resets, exchange of defective equipment, repair of electrical connectors, etc. However, a recurrent problem that heals by itself is present on some channels. This type of fault is extremely difficult to diagnose and it appears as a temporary opening of an electrical circuit; its duration can range from a few minutes to several months. This paper presents the main types of problems encountered during the last four years, their evolution over time, the various hardware or software consolidations that have resulted, and whether they have had an impact on the availability of the LHC beam.

  20. ICT: isotope correction toolbox.

    Science.gov (United States)

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences--in particular for complex isotopic systems--is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present isotope correction toolbox, a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. Isotope correction toolbox is written in the multi-platform programming language Perl and, therefore, can be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from: https://github.com/jungreuc/isotope_correction_toolbox/ {christian.jungreuthmayer@boku.ac.at,juergen.zanghellini@boku.ac.at} Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
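
    The generic textbook form of such a correction can be sketched for a simple carbon-labelling case: the measured mass-isotopomer distribution (MID) is modelled as a natural-abundance correction matrix times the true distribution, which is recovered by non-negative least squares. This is only the single-MS case, not ICT's tandem-MS algorithm, and all numbers below are invented.

```python
# Hedged sketch of natural-abundance correction for a labelled fragment.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import binom

def correction_matrix(n_label_c, n_natural_c, p13c=0.0107):
    """Columns give the mass-shift pattern each true M+i produces."""
    size = n_label_c + 1
    cm = np.zeros((size, size))
    shift_dist = binom.pmf(np.arange(size), n_natural_c, p13c)
    for i in range(size):                  # true isotopomer M+i
        for j in range(i, size):           # observed at M+j
            cm[j, i] = shift_dist[j - i]
    return cm

measured = np.array([0.70, 0.18, 0.08, 0.04])   # example MID, 3 labelled carbons
cm = correction_matrix(n_label_c=3, n_natural_c=6)
corrected, _ = nnls(cm, measured)
print(corrected / corrected.sum())              # corrected, renormalized MID
```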

  1. Electromagnetic fields with vanishing quantum corrections

    Science.gov (United States)

    Ortaggio, Marcello; Pravda, Vojtěch

    2018-04-01

    We show that a large class of null electromagnetic fields are immune to any modifications of Maxwell's equations in the form of arbitrary powers and derivatives of the field strength. These are thus exact solutions to virtually any generalized classical electrodynamics containing both non-linear terms and higher derivatives, including, e.g., non-linear electrodynamics as well as QED- and string-motivated effective theories. This result holds not only in a flat or (anti-)de Sitter background, but also in a larger subset of Kundt spacetimes, which allow for the presence of aligned gravitational waves and pure radiation.

  2. System approach to automation and robotization of drivage

    Science.gov (United States)

    Zinov’ev, VV; Mayorov, AE; Starodubov, AN; Nikolaev, PI

    2018-03-01

    The authors consider a system approach to finding ways of unmanned drilling and blasting in the face area by means of automation and robotization of operations, with a view to reducing injuries in mines. The analysis is carried out in terms of the drilling and blasting technology applied in the Makarevskoe Coal Field, Kuznetsk Coal Basin. Within the system-functional approach and using the INDEFO procedure, the processes of drilling and blasthole charging are decomposed into related elementary operations. Automation and robotization methods that avoid the presence of miners in the face are identified for each operation.

  3. A comparison of semi-automated volumetric vs linear measurement of small vestibular schwannomas.

    Science.gov (United States)

    MacKeith, Samuel; Das, Tilak; Graves, Martin; Patterson, Andrew; Donnelly, Neil; Mannion, Richard; Axon, Patrick; Tysome, James

    2018-04-01

    Accurate and precise measurement of vestibular schwannoma (VS) size is key to clinical management decisions. Linear measurements are used in routine clinical practice but are prone to measurement error. This study aims to compare a semi-automated volume segmentation tool against standard linear method for measuring small VS. This study also examines whether oblique tumour orientation can contribute to linear measurement error. Experimental comparison of observer agreement using two measurement techniques. Tertiary skull base unit. Twenty-four patients with unilateral sporadic small (linear dimension following reformatting to correct for oblique orientation of VS. Intra-observer ICC was higher for semi-automated volumetric when compared with linear measurements, 0.998 (95% CI 0.994-0.999) vs 0.936 (95% CI 0.856-0.972), p linear measurements, 0.989 (95% CI 0.975-0.995) vs 0.946 (95% CI 0.880-0.976), p = 0.0045. The intra-observer %SDD was similar for volumetric and linear measurements, 9.9% vs 11.8%. However, the inter-observer %SDD was greater for volumetric than linear measurements, 20.1% vs 10.6%. Following oblique reformatting to correct tumour angulation, the mean increase in size was 1.14 mm (p = 0.04). Semi-automated volumetric measurements are more repeatable than linear measurements when measuring small VS and should be considered for use in clinical practice. Oblique orientation of VS may contribute to linear measurement error.

  4. The trajectory control in the SLC linac

    International Nuclear Information System (INIS)

    Hsu, I.C.; Adolphsen, C.E.; Himel, T.M.; Seeman, J.T.

    1991-05-01

    Due to wake field effects, the trajectories of accelerated beams in the Linac should be well maintained to avoid severe beam breakup. In order to maintain a small emittance at the end of the Linac, the tolerance on trajectory deviations becomes tighter as the beam intensities increase. The existing two-beam trajectory correction method works well when the theoretical model agrees with the real machine lattice. Unknown energy deviations along the Linac as well as wake field effects can cause the real lattice to deviate from the model. This makes the trajectory correction difficult. Several automated procedures have been developed to solve these problems. They are: an automated procedure to frequently steer the whole Linac by dividing it into several small regions; an automated procedure to empirically correct the model to fit the real lattice; eight trajectory-correcting feedback loops along the Linac; and steering through the collimator region with restricted corrector strengths and a restricted number of correctors. 6 refs., 2 figs

  5. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  6. Completely automated measurement facility (PAVICOM) for track-detector data processing

    CERN Document Server

    Aleksandrov, A B; Feinberg, E L; Goncharova, L A; Konovalova, N S; Martynov, A G; Polukhina, N G; Roussetski, A S; Starkov, NI; Tsarev, V A

    2004-01-01

    A review of the technical capabilities of, and investigations performed using, the completely automated measuring facility (PAVICOM) is presented. This very efficient facility for track-detector data processing in the field of nuclear and high-energy particle physics has been constructed at the Lebedev Physical Institute. PAVICOM is widely used in Russia for the treatment of experimental data from track detectors (emulsion and solid-state trackers) in high- and low-energy physics, cosmic ray physics, etc. PAVICOM provides an essential improvement in the efficiency of experimental studies. In contrast to the semi-automated microscopes widely used until now, PAVICOM is capable of performing completely automated measurements of charged particle tracks in nuclear emulsions and track detectors without employing hard visual work. In this case, track images are recorded by CCD cameras and then digitized and converted into files. Thus, experimental data processing is accelerated by approximately a thousand times. Completely autom...

  7. Study and optimal correction of a systematic skew quadrupole field in the Tevatron

    International Nuclear Information System (INIS)

    Snopok, Pavel; Johnstone, Carol; Berz, Martin; Ovsyannikov, Dmitry A.; Ovsyannikov, Alexander D.

    2006-01-01

    Increasing demands for luminosity in existing and future colliders have made lattice design and error tolerance and correction critical to achieving performance goals. The current state of the Tevatron collider is an example, with a strong skew quadrupole error present in the operational lattice. This work studies the high-order performance of the Tevatron and the strong nonlinear behavior introduced when a significant skew quadrupole error is combined with conventional sextupole correction, a behavior still clearly evident after optimal tuning of available skew quadrupole circuits. An optimization study is performed using different skew quadrupole families, and, importantly, local and global correction of the linear skew terms in maps generated by the code COSY INFINITY [M. Berz, COSY INFINITY version 8.1 user's guide and reference manual, Department of Physics and Astronomy MSUHEP-20704, Michigan State University (2002). URL http://cosy.pa.msu.edu/cosymanu/index.html]. Two correction schemes with one family locally correcting each arc and eight independent correctors in the straight sections for global correction are proposed and shown to dramatically improve linearity and performance of the baseline Tevatron lattice

  8. Method for determining correction factors induced by irradiation of ionization chamber cables in large radiation field

    International Nuclear Information System (INIS)

    Rodrigues, L.L.C.

    1988-01-01

    A simple method was developed to be suggested to hospital physicists for use during large radiation field dosimetry, to evaluate the effects of the irradiation of cables, connectors and extension cables and to determine correction factors for each system or geometry. All quality control tests were performed according to the International Electrotechnical Commission for three clinical dosimeters. Photon and electron irradiation effects for cables, connectors and extension cables were investigated under different experimental conditions by means of measurements of chamber sensitivity to a standard radiation source of 90Sr. The radiation-induced leakage current was also measured for cables, connectors and extension cables irradiated by photons and electrons. All measurements were performed at standard dosimetry conditions. Finally, measurements were performed in large fields. Cable factors and leakage factors were determined by the relation between chamber responses for irradiated and unirradiated cables. (author) [pt

  9. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    OpenAIRE

    Fiona A. Desland; Aqeela Afzal; Zuha Warraich; J Mocco

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garc...

  10. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

    Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard, and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must do. Therefore, the evaluation of the correct navigation of web applications results in the assessment of the specified functional requirements. The proposed method performs the automation at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
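
    As a loose illustration of navigation-driven functional testing (not the Automatic Testing Platform itself), a hand-written Selenium script can assert a functional requirement by traversing the application; the URL, element locators, and expected text below are invented placeholders for a hypothetical invoice management system.

```python
# Hedged sketch: evaluate a functional requirement ("an invoice can be created")
# by navigating the application and checking where the navigation ends up.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8080/invoices")         # hypothetical app URL
    driver.find_element(By.ID, "new-invoice").click()    # navigation step
    driver.find_element(By.NAME, "customer").send_keys("ACME Ltd")
    driver.find_element(By.ID, "save").click()
    # Correct navigation to the confirmation page implies the requirement holds.
    assert "Invoice created" in driver.page_source
finally:
    driver.quit()
```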

  11. Supplementing the neurosurgical virtuoso: evolution of automation from mythology to operating room adjunct.

    Science.gov (United States)

    Attenello, Frank J; Lee, Brian; Yu, Cheng; Liu, Charles Y; Apuzzo, Michael L J

    2014-01-01

    A central concept of scientific advancement in the medical and surgical fields is the incorporation of successful emerging ideas and technologies throughout the scope of human endeavors. The field of automation and robotics is a pivotal representation of this concept. Arising in the mythology of Homer, the concept of automation and robotics grew exponentially over the millennia to provide the substrate for a paradigm shift in the current and future practice of neurosurgery. We trace the growth of this field from the seminal concepts of Homer and Aristotle to early incorporation into neurosurgical practice. Resulting changes provide drastic and welcome advances in areas of visualization, haptics, acoustics, dexterity, tremor reduction, motion scaling, and surgical precision. Published by Elsevier Inc.

  12. A precise technique for manufacturing correction coil

    International Nuclear Information System (INIS)

    Schieber, L.

    1992-01-01

    An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed, while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC.

  13. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    Science.gov (United States)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    It is important for the diagnosis of pulmonary diseases to measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images quantitatively. However, extracting pleural effusion automatically and correctly is difficult. A conventional extraction algorithm using a gray-level based threshold cannot extract pleural effusion from the thoracic wall or mediastinum correctly, because the density of pleural effusion in CT images is similar to those of the thoracic wall and mediastinum. We have therefore developed an automated extraction method for pleural effusion based on extracting the lung area together with the pleural effusion. Our method used a template of a lung obtained from a normal lung for the segmentation of lungs with pleural effusions. The registration process consisted of two steps. The first step was a global matching, between normal and abnormal lungs, of organs such as bronchi, bones (ribs, sternum and vertebrae) and the upper surfaces of the livers, which were extracted using a region-growing algorithm. The second step was a local matching between the normal and abnormal lungs, which were deformed by the parameter obtained from the global matching. Finally, we segmented a lung with pleural effusion by use of the template deformed by the two parameters obtained from the global matching and the local matching. We compared our method with a conventional extraction method using a gray-level based threshold and with two published methods. The extraction rates of pleural effusions obtained with our method were much higher than those obtained with the other methods. Automated extraction of pleural effusion based on extracting the lung area together with the effusion is promising for the diagnosis of pulmonary diseases, as it provides the quantitative volume of the accumulating pleural effusion.

  14. A facile and rapid automated synthesis of 3'-deoxy-3'-[18F]fluorothymidine

    International Nuclear Information System (INIS)

    Tang Ganghua; Tang Xiaolan; Wen Fuhua; Wang Mingfang; Li Baoyuan

    2010-01-01

    Aim: To develop a simplified and fully automated synthesis procedure for 3'-deoxy-3'-[18F]fluorothymidine ([18F]FLT) using the PET-MF-2V-IT-I synthesis module. Methods: Synthesis of [18F]FLT was performed on the PET-MF-2V-IT-I synthesis module by a one-pot, two-step reaction procedure, comprising nucleophilic fluorination of 3-N-t-butoxycarbonyl-1-[5'-O-(4,4'-dimethoxytriphenylmethyl)-2'-deoxy-3'-O-(4-nitrobenzenesulfonyl)-β-D-threopentofuranosyl]thymine (15 mg) as the precursor with [18F]fluoride, followed by hydrolysis of the protecting group with 1.0 M HCl in the same reaction vessel and purification with SEP PAK cartridges instead of an HPLC system. Results: The automated synthesis of [18F]FLT with SEP PAK purification gave a corrected radiochemical yield of 23.2±2.6% (n=6; uncorrected yield: 16-22%) and a radiochemical purity of >97% within a total synthesis time of 35 min. Conclusion: The fully automated one-pot synthesis procedure with SEP PAK purification can be applied to the fully automated synthesis of [18F]FLT on a commercial [18F]FDG synthesis module.

  15. Weighted divergence correction scheme and its fast implementation

    Science.gov (United States)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing experimental volumetric velocity fields to satisfy mass conservation has been proven beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, which shows that WDCS achieves better performance than DCS in improving some flow statistics.
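
    A minimal sketch of the underlying divergence-removal idea is shown below for a velocity field sampled on a uniform, periodic grid: the divergent part is computed in Fourier space and subtracted as the gradient of a potential. This is plain (unweighted) DCS written in the simplest possible way, not the authors' fast weighted algorithm; in WDCS the three components would additionally be weighted (for example, down-weighting the noisy depth component) before the correction, with the weights chosen by GCV.

```python
import numpy as np

def divergence_free_projection(u, v, w, dx=1.0):
    """Remove the divergent part of a periodic velocity field (basic DCS idea)."""
    nx, ny, nz = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=dx)
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing='ij')
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = 1.0                              # avoid dividing the mean mode by zero
    uh, vh, wh = np.fft.fftn(u), np.fft.fftn(v), np.fft.fftn(w)
    div_h = 1j * (KX * uh + KY * vh + KZ * wh)     # divergence in Fourier space
    phi_h = -div_h / k2                            # potential whose gradient cancels it
    uh -= 1j * KX * phi_h
    vh -= 1j * KY * phi_h
    wh -= 1j * KZ * phi_h
    return (np.fft.ifftn(uh).real,
            np.fft.ifftn(vh).real,
            np.fft.ifftn(wh).real)
```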

  16. Evaluating the potential of automated telephony systems in rural communities: Field assessment for project Lwazi of HLT Meraka

    CSIR Research Space (South Africa)

    Gumede, T

    2008-11-01

    Full Text Available This field assessment evaluated the potential role of automated telephony services in improving access to important government information and services. Our interviews, focus groups and surveys revealed that an automated telephony service could greatly support current government efforts...

  17. Formal verification of automated teller machine systems using SPIN

    Science.gov (United States)

    Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam

    2017-08-01

    Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of an Automated Teller Machine (ATM) system against a set of specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in the Process Meta Language (PROMELA), with specifications given as LTL formulas.

  18. Corrective Action Investigation Plan for Corrective Action Unit 409: Other Waste Sites, Tonopah Test Range, Nevada (Rev. 0)

    International Nuclear Information System (INIS)

    2000-01-01

    undisturbed locations near the area of the disposal pits; field screening samples for radiological constituents; analysis for geotechnical/hydrologic parameters of samples beneath the disposal pits; and bioassessment samples, if VOC or TPH contamination concentrations exceed field-screening levels. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document

  19. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y; Li, T; Heron, D; Huq, M [University of Pittsburgh Cancer Institute and UPMC CancerCenter, Pittsburgh, PA (United States)

    2015-06-15

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model of a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ multileaf collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015 cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm, and 60 mm cones of a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded for every 2 mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose obtained from a Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated for MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were spatially smooth. After correction with the mathematical model, the chamber readings matched the calculations for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix depends on the detector, the treatment unit, and the setup geometry. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.
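
    The penalized least-squares fit described in the Methods can be sketched as a ridge-style linear problem: each row of a design matrix holds the calculated dose pixels around the chamber for one shifted measurement position, the unknowns are the spatial response factors, and a roughness penalty keeps the fitted factors smooth. The snippet below is only an illustration of that formulation; the actual penalty, the parameterization by radial distance and angle, and the authors' fitting procedure are not reproduced.

```python
import numpy as np

def fit_response_factors(A, m, lam=1e-2):
    """Minimise ||A r - m||^2 + lam * ||D r||^2 for response factors r.

    A   : (n_measurements, n_pixels) calculated dose seen at each shifted position
    m   : (n_measurements,) chamber readings
    lam : penalty weight enforcing spatially smooth response factors
    """
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)              # first-difference roughness operator
    A_aug = np.vstack([A, np.sqrt(lam) * D])
    m_aug = np.concatenate([m, np.zeros(D.shape[0])])
    r, *_ = np.linalg.lstsq(A_aug, m_aug, rcond=None)
    return r
```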

  20. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    International Nuclear Information System (INIS)

    Zhang, Y; Li, T; Heron, D; Huq, M

    2015-01-01

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model of a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ multileaf collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015 cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm, and 60 mm cones of a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded for every 2 mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose obtained from a Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated for MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were spatially smooth. After correction with the mathematical model, the chamber readings matched the calculations for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix depends on the detector, the treatment unit, and the setup geometry. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.

  1. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada, Revision 0

    International Nuclear Information System (INIS)

    1999-01-01

    The Corrective Action Investigation Plan for Corrective Action Unit 232, Area 25 Sewage Lagoons, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 232 consists of Corrective Action Site 25-03-01, Sewage Lagoon. Corrective Action Unit 232, Area 25 Sewage Lagoons, received sanitary effluent from four buildings within the Test Cell "C" Facility from the mid-1960s through approximately 1996. The Test Cell "C" Facility was used to develop nuclear propulsion technology by conducting nuclear test reactor studies. Based on the site history collected to support the Data Quality Objectives process, contaminants of potential concern include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, herbicides, gamma emitting radionuclides, isotopic plutonium, isotopic uranium, and strontium-90. A detailed conceptual site model is presented in Section 3.0 and Appendix A of this Corrective Action Investigation Plan. The conceptual model serves as the basis for the sampling strategy. Under the Federal Facility Agreement and Consent Order, the Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan. The results of the field investigation will support a defensible evaluation of corrective action alternatives in the Corrective Action Decision Document

  2. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    USDOE/NV

    1999-05-01

    The Corrective Action Investigation Plan for Corrective Action Unit 232, Area 25 Sewage Lagoons, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 232 consists of Corrective Action Site 25-03-01, Sewage Lagoon. Corrective Action Unit 232, Area 25 Sewage Lagoons, received sanitary effluent from four buildings within the Test Cell "C" Facility from the mid-1960s through approximately 1996. The Test Cell "C" Facility was used to develop nuclear propulsion technology by conducting nuclear test reactor studies. Based on the site history collected to support the Data Quality Objectives process, contaminants of potential concern include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, herbicides, gamma emitting radionuclides, isotopic plutonium, isotopic uranium, and strontium-90. A detailed conceptual site model is presented in Section 3.0 and Appendix A of this Corrective Action Investigation Plan. The conceptual model serves as the basis for the sampling strategy. Under the Federal Facility Agreement and Consent Order, the Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan. The results of the field investigation will support a defensible evaluation of corrective action alternatives in the Corrective Action Decision Document.

  3. Automated lattice perturbation theory in the Schroedinger functional. Implementation and applications in HQET

    International Nuclear Information System (INIS)

    Hesse, Dirk

    2012-01-01

    The author developed the pastor software package for automated lattice perturbation theory calculations in the Schroedinger functional scheme. The pastor code consists of two building blocks, dealing with the generation of Feynman rules and Feynman diagrams respectively. Accepting a rather generic class of lattice gauge and fermion actions, passed to the code in symbolic form as input, the low-level part of pastor will generate Feynman rules to an arbitrary order in the bare coupling with a trivial or an Abelian background field. The second, high-level part of pastor is a code generator whose output relies on the vertex generator. It writes programs that automatically evaluate Feynman diagrams for a class of Schroedinger functional observables up to one-loop order; the relevant O(a) improvement terms are taken into account. We describe the algorithms used for the implementation of both parts of the code in detail, and provide cross checks with perturbative and non-perturbative data to demonstrate the correctness of our code. We demonstrate the usefulness of the pastor package through various applications taken from the matching of heavy quark effective theory with quantum chromodynamics. We have, for example, completed a one-loop analysis for new candidate matching observables in a timely manner and with rather small effort, highlighting two advantages of an automated software setup. The results obtained so far will be useful as a guideline for further non-perturbative studies.

  4. Automated lattice perturbation theory in the Schroedinger functional. Implementation and applications in HQET

    Energy Technology Data Exchange (ETDEWEB)

    Hesse, Dirk

    2012-07-13

    The author developed the pastor software package for automated lattice perturbation theory calculations in the Schroedinger functional scheme. The pastor code consists of two building blocks, dealing with the generation of Feynman rules and Feynman diagrams respectively. Accepting a rather generic class of lattice gauge and fermion actions, passed to the code in symbolic form as input, the low-level part of pastor will generate Feynman rules to an arbitrary order in the bare coupling with a trivial or an Abelian background field. The second, high-level part of pastor is a code generator whose output relies on the vertex generator. It writes programs that automatically evaluate Feynman diagrams for a class of Schroedinger functional observables up to one-loop order; the relevant O(a) improvement terms are taken into account. We describe the algorithms used for the implementation of both parts of the code in detail, and provide cross checks with perturbative and non-perturbative data to demonstrate the correctness of our code. We demonstrate the usefulness of the pastor package through various applications taken from the matching of heavy quark effective theory with quantum chromodynamics. We have, for example, completed a one-loop analysis for new candidate matching observables in a timely manner and with rather small effort, highlighting two advantages of an automated software setup. The results obtained so far will be useful as a guideline for further non-perturbative studies.

  5. Applications of the soft computing in the automated history matching

    Energy Technology Data Exchange (ETDEWEB)

    Silva, P.C.; Maschio, C.; Schiozer, D.J. [Unicamp (Brazil)

    2006-07-01

    Reservoir management is a research field in petroleum engineering that optimizes reservoir performance based on environmental, political, economic and technological criteria. Reservoir simulation is based on geological models that simulate fluid flow. Models must be constantly corrected to reproduce the observed production behaviour. The history matching process is controlled by the comparison of production data, well test data and measured data from simulations. Parametrization, objective function analysis, sensitivity analysis and uncertainty analysis are important steps in history matching. One of the main challenges facing automated history matching is to develop algorithms that find the optimal solution in multidimensional search spaces. Optimization algorithms can be either global optimizers, which work with noisy multi-modal functions, or local optimizers, which cannot. The problem with global optimizers is the very large number of function calls, which is inconvenient because of the long reservoir simulation times. For that reason, techniques such as least squares, thin-plate splines, kriging and artificial neural networks (ANN) have been used as substitutes for reservoir simulators. This paper described the use of optimization algorithms to find optimal solutions in automated history matching. Several ANNs were used, including the generalized regression neural network, a fuzzy system with subtractive clustering and a radial basis network. The UNIPAR soft computing method was used along with a modified Hooke-Jeeves optimization method. Two case studies with synthetic and real reservoirs are examined. It was concluded that the combination of global and local optimization has the potential to improve the history matching process and that the use of substitute models can reduce computational efforts. 15 refs., 11 figs.
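
    The surrogate-plus-local-search idea can be sketched in a few lines: a handful of expensive simulator runs is used to fit a cheap proxy, which a local optimizer then searches. The sketch below uses a radial-basis-function proxy and Nelder-Mead as simple stand-ins for the ANN surrogates and the modified Hooke-Jeeves method mentioned in the abstract; the function names and data layout are assumptions.

```python
import numpy as np
from scipy.interpolate import Rbf
from scipy.optimize import minimize

def history_match(simulator, x_samples, x0):
    """Surrogate-assisted matching: fit a proxy to a few expensive simulator
    runs, then minimise the proxy (mismatch to observed production) locally."""
    x_samples = np.asarray(x_samples, dtype=float)            # (n_runs, n_params)
    y_samples = np.array([simulator(x) for x in x_samples])   # expensive objective values
    surrogate = Rbf(*x_samples.T, y_samples)                  # radial basis proxy
    result = minimize(lambda x: float(surrogate(*x)), x0, method='Nelder-Mead')
    return result.x

# Usage (all names hypothetical):
# best = history_match(run_reservoir_simulator, sampled_parameter_sets, initial_guess)
```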

  6. Corrective Action Investigation Plan for Corrective Action Unit 166: Storage Yards and Contaminated Materials, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    David Strand

    2006-01-01

    Corrective Action Unit 166 is located in Areas 2, 3, 5, and 18 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit (CAU) 166 is comprised of the seven Corrective Action Sites (CASs) listed below: (1) 02-42-01, Cond. Release Storage Yd - North; (2) 02-42-02, Cond. Release Storage Yd - South; (3) 02-99-10, D-38 Storage Area; (4) 03-42-01, Conditional Release Storage Yard; (5) 05-19-02, Contaminated Soil and Drum; (6) 18-01-01, Aboveground Storage Tank; and (7) 18-99-03, Wax Piles/Oil Stain. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on February 28, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 166. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 166 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Perform field screening. (4) Collect and submit environmental samples for laboratory analysis to determine if

  7. Demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data for the majority of United States harvested cropland

    Science.gov (United States)

    Yan, L.; Roy, D. P.

    2014-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes, and the location and extent of fields are important for establishing the area of land utilized for agricultural yield prediction, resource allocation, and economic planning, and may be indicative of the degree of agricultural capital investment, mechanization, and labor intensity. To date, field objects have not been extracted from satellite data over large areas because of computational constraints, the complexity of the extraction task, and because consistently processed data at appropriate resolution have not been available or affordable. A recently published automated methodology to extract agricultural crop fields from weekly 30 m Web Enabled Landsat Data (WELD) time series was refined and applied to 14 states that cover 70% of harvested U.S. cropland (USDA 2012 Census). The methodology was applied to 2010 combined weekly Landsat 5 and 7 WELD data. The field extraction and quantitative validation results are presented for the following 14 states: Iowa, North Dakota, Illinois, Kansas, Minnesota, Nebraska, Texas, South Dakota, Missouri, Indiana, Ohio, Wisconsin, Oklahoma and Michigan (sorted by area of harvested cropland). These states include the top 11 U.S. states by harvested cropland area. Implications and recommendations for systematic application to global-coverage Landsat data are discussed.

  8. Automated SmartPrep tracker positioning in liver MRI scans

    International Nuclear Information System (INIS)

    Goto, Takao; Kabasawa, Hiroyuki

    2013-01-01

    This paper presents a new method for automated SmartPrep tracker positioning in liver MRI scans. SmartPrep is used to monitor the contrast bolus signal in order to detect the arrival time of the bolus. Accurately placing the tracker in the aorta while viewing three planar scout images is a difficult task for the operator and is an important problem from the workflow standpoint. Automated SmartPrep tracker positioning would therefore help to improve workflow in liver MRI scans. In our proposed method, the aorta is detected using AdaBoost (a machine learning technique) by searching around the cerebrospinal fluid (CSF) in the spinal cord. Analysis of scout scan images showed that our detection method functioned properly for a variety of axial MR images without intensity correction. A total of 234 images reconstructed from the datasets of 64 volunteers were analyzed, and the results showed that the detection error for the aorta was approximately 3 mm. (author)
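
    The core of the detection step, scoring candidate locations with a boosted classifier trained on labelled image patches, can be sketched with scikit-learn's AdaBoost implementation. The feature extraction, file names and the CSF-guided candidate search are not reproduced here and are purely illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical training data: feature vectors from axial-image patches,
# labelled 1 where the patch is centred on the aorta and 0 elsewhere.
X_train = np.load('patch_features.npy')
y_train = np.load('patch_labels.npy')

clf = AdaBoostClassifier(n_estimators=200)
clf.fit(X_train, y_train)

def most_aorta_like(candidate_features):
    """Score candidate patches (e.g. sampled around the CSF in the spinal cord)
    and return the index of the highest-scoring one."""
    scores = clf.decision_function(np.asarray(candidate_features))
    return int(np.argmax(scores))
```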

  9. Automated Execution and Tracking of the LHC Commissioning Tests

    CERN Document Server

    Fuchsberger, K; Galetzka, M; Gorbonosov, R; Pojer, M; Solfaroli Camillocci, M; Zerlauth, M

    2012-01-01

    To ensure the correct operation and prevent system failures, which can lead to equipment damage in the worst case, all critical systems in the Large Hadron Collider (LHC), among them the superconducting circuits, have to be tested thoroughly during dedicated commissioning phases after each intervention. In view of the around 7,000 individual tests to be performed each year after a Christmas stop, a lot of effort was already put into the automation of these tests at the beginning of LHC hardware commissioning in 2005, to assure the dependable execution and analysis of these tests. To further increase the productivity during the commissioning campaigns and to enforce a more consistent workflow, the development of a dedicated testing framework was launched. This new framework is designed to schedule and track the automated tests for all systems of the LHC and will also be extendable, e.g., to beam commissioning tests. This is achieved by re-using different, already existing execution frameworks. In this paper, w...

  10. Space station automation and robotics study. Operator-systems interface

    Science.gov (United States)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  11. An automated instrument for controlled-potential coulometry: System documentation

    Energy Technology Data Exchange (ETDEWEB)

    Holland, M K; Cordaro, J V

    1988-06-01

    An automated controlled-potential coulometer has been developed at the Savannah River Plant for the determination of plutonium. Two such coulometers have been assembled, evaluated, and applied. The software is based upon the methodology used at the Savannah River Plant; however, the system is applicable, with minimal software modifications, to any of the methodologies used throughout the nuclear industry. These state-of-the-art coulometers feature electrical calibration of the integration system, background current corrections, and control-potential adjustment capabilities. Measurement precision within 0.1% has been demonstrated. The systems have also been successfully applied to the determination of pure neptunium solutions. The design and documentation of the automated instrument are described herein. Each individual module's operation, wiring layout, and alignment are described. Interconnection of the modules and system calibration are discussed. A complete set of system prints and a list of associated parts are included. 9 refs., 10 figs., 6 tabs.
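
    Conceptually, controlled-potential coulometry converts an integrated, background-corrected charge into an element mass through Faraday's law. The sketch below shows only that conversion for a single-electron plutonium oxidation; the isotopic molar mass, electron count and the instrument's actual calibration and correction procedure are assumptions, not the documented algorithm.

```python
F = 96485.332        # Faraday constant, C/mol
M_PU239 = 239.052    # molar mass of 239Pu, g/mol (assumed isotope)
N_ELECTRONS = 1      # assumed one-electron Pu(III) -> Pu(IV) oxidation

def plutonium_mass(q_total, q_background):
    """Mass (g) from integrated charge (C) after background-current correction."""
    q_net = q_total - q_background
    return q_net * M_PU239 / (N_ELECTRONS * F)

# Example: about 50 mC of net charge corresponds to roughly 0.12 mg of 239Pu.
print(plutonium_mass(0.0505, 0.0005) * 1e3, 'mg')
```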

  12. Automation bias and verification complexity: a systematic review.

    Science.gov (United States)

    Lyell, David; Coiera, Enrico

    2017-03-01

    While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears to not be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  13. Development of the RSAC Automation System for Reload Core of WH NPP

    International Nuclear Information System (INIS)

    Choi, Yu Sun; Bae, Sung Man; Koh, Byung Marn; Hong, Sun Kwan

    2006-01-01

    The nuclear design for a reload core of a Westinghouse nuclear power plant consists of the 'Reload Core Model Search', 'Safety Analysis (RSAC)', and 'NDR (Nuclear Design Report) and OCAP (Operational Core Analysis Package) Generation' phases. Since scores of calculations for various accidents are required to confirm that the safety analysis assumptions are valid, the Safety Analysis (RSAC) is the most important and the most time- and effort-consuming phase of the reload core design sequence. The Safety Analysis Automation System supports the core designer by automating the safety analysis calculations in the 'Safety Analysis' phase (about 20 calculations). More than 10 kinds of codes, APA (ALPHA/PHOENIX/ANC), APOLLO, VENUS, PHIRE XEFIT, INCORE, etc., are being used for the safety analysis calculations. The Westinghouse code system needs numerous inputs and outputs, so the possibility of human error during the safety analysis calculations cannot be ignored. To remove these inefficiencies, all input files for the safety analysis calculations are automatically generated and executed by the Safety Analysis Automation System. All calculation notes are generated and the calculation results are summarized in the RSAC (Reload Safety Analysis Checklist) by this system. Therefore, the Safety Analysis Automation System helps the reload core designer to perform the safety analysis of the reload core model instantly and correctly.

  14. Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.

    Science.gov (United States)

    Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella

    2011-08-01

    We present a method for calibrating low-cost digital dermoscopes that corrects for color and inconsistent lighting and also corrects for chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, the distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration are reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can thus be quickly and substantially mitigated with a suitable calibration procedure. © 2011 John Wiley & Sons A/S.
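
    A second-order radial model of the kind mentioned above resamples each colour channel so that the red and blue halos collapse back onto a common reference. The sketch below assumes per-channel coefficients k1 and k2 and a distortion centre obtained from a calibration target; it illustrates the model form only, not the authors' calibration procedure.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_channel(channel, k1, k2, centre=None):
    """Resample one colour channel with a second-order radial model:
    source radius = r * (1 + k1*r^2 + k2*r^4), r measured from the centre."""
    h, w = channel.shape
    cy, cx = centre if centre is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r2 = (dx**2 + dy**2) / float(max(cx, cy))**2   # normalised squared radius
    scale = 1.0 + k1 * r2 + k2 * r2**2
    src = [cy + dy * scale, cx + dx * scale]       # where each output pixel samples from
    return map_coordinates(channel, src, order=1, mode='nearest')
```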

  15. Corrective Action Plan for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    K. Campbell

    2000-04-01

    This Corrective Action Plan provides methods for implementing the approved corrective action alternative as provided in the Corrective Action Decision Document for the Central Nevada Test Area (CNTA), Corrective Action Unit (CAU) 417 (DOE/NV, 1999). The CNTA is located in the Hot Creek Valley in Nye County, Nevada, approximately 137 kilometers (85 miles) northeast of Tonopah, Nevada. The CNTA consists of three separate land withdrawal areas commonly referred to as UC-1, UC-3, and UC-4, all of which are accessible to the public. CAU 417 consists of 34 Corrective Action Sites (CASs). Results of the investigation activities completed in 1998 are presented in Appendix D of the Corrective Action Decision Document (DOE/NV, 1999). According to the results, the only Constituent of Concern at the CNTA is total petroleum hydrocarbons (TPH). Of the 34 CASs, corrective action was proposed for 16 sites in 13 CASs. In fiscal year 1999, a Phase I Work Plan was prepared for the construction of a cover on the UC-4 Mud Pit C to gather information on cover constructibility and to perform site management activities. With Nevada Division of Environmental Protection concurrence, the Phase I field activities began in August 1999. A multi-layered cover using a Geosynthetic Clay Liner as an infiltration barrier was constructed over the UC-4 Mud Pit. Some TPH impacted material was relocated, concrete monuments were installed at nine sites, signs warning of site conditions were posted at seven sites, and subsidence markers were installed on the UC-4 Mud Pit C cover. Results from the field activities indicated that the UC-4 Mud Pit C cover design was constructable and could be used at the UC-1 Central Mud Pit (CMP). However, because of the size of the UC-1 CMP this design would be extremely costly. An alternative cover design, a vegetated cover, is proposed for the UC-1 CMP.

  16. AUTOMATION FOR THE SYNTHESIS AND APPLICATION OF PET RADIOPHARMACEUTICALS

    International Nuclear Information System (INIS)

    Alexoff, D.L.

    2001-01-01

    The development of automated systems supporting the production and application of PET radiopharmaceuticals has been an important focus of researchers since the first successes of using carbon-11 (Comar et al., 1979) and fluorine-18 (Reivich et al., 1979) labeled compounds to visualize functional activity of the human brain. These initial successes of imaging the human brain soon led to applications in the human heart (Schelbert et al., 1980), and quickly radiochemists began to see the importance of automation to support PET studies in humans (Lambrecht, 1982; Langstrom et al., 1983). Driven by the necessity of controlling processes emitting high fluxes of 511 keV photons, and by the tedium of repetitive syntheses for carrying out these human PET investigations, academic and government scientists have designed, developed and tested many useful and novel automated systems in the past twenty years. These systems, originally designed primarily by radiochemists, not only effectively carry out the tasks they were designed for, but also demonstrate significant engineering innovation in the field of laboratory automation

  17. Passive correction of persistent current multipoles in superconducting accelerator dipoles

    International Nuclear Information System (INIS)

    Fisk, H.E.; Hanft, R.A.; Kuchnir, M.; McInturff, A.D.

    1986-07-01

    Correction of the magnetization sextupole and decapole fields with strips of superconductor placed just inside the coil winding is discussed. Calculations have been carried out for such a scheme, and tests have been conducted on a 4 cm aperture magnet. The calculated sextupole correction at the injection excitation of 330 A (5% of full field) was expected to be 77% effective, while the measured correction was 83%, suggesting that the scheme may be useful for future accelerators such as the SSC and LHC

  18. Fator de correção para indivíduos com capacidade acomodativa baseado no uso do refrator automático Correction factor for individuals with accommodative capacity based on automated refractor

    Directory of Open Access Journals (Sweden)

    Rodrigo Ueno Takahagi

    2009-12-01

    Full Text Available OBJETIVO: Pesquisar um fator de correção para avaliação do erro refrativo sem a utilização da cicloplegia. MÉTODOS: Foram estudados 623 pacientes (1.246 olhos, de ambos os sexos, com idade entre 3 e 40 anos. As refratometrias estática e dinâmica foram obtidas usando-se o refrator automático Shin-Nippon Accuref-K 9001. A cicloplegia foi obtida com a instilação de uma gota de colírio ciclopentolato a 1%, com refratometria estática 30 minutos após. Os dados foram submetidos à análise estatística usando a técnica do modelo de regressão linear e modelo de regressão múltipla do valor dióptrico com e sem cicloplegia, em função da idade. RESULTADOS: A correlação entre valores dióptricos sem e com cicloplegia quanto ao erro astigmático variou de 81,52% a 92,27%. Quanto ao valor dióptrico esférico, a correlação foi menor (53,57% a 87,78%. O mesmo se observou em relação ao eixo do astigmatismo (28,86% a 58,80%. O modelo de regressão múltipla em função da idade mostrou coeficiente de determinação múltiplo maior para a miopia (86,38% e astigmatismo (79,79%. O menor coeficiente foi observado para o eixo do astigmatismo (17,70%. CONCLUSÃO: Avaliando-se os erros refrativos com e sem cicloplegia, observou-se alta correlação nas ametropias cilíndricas. Foram desenvolvidas equações matemáticas como fator de correção para refratometrias dos pacientes sem cicloplegia, portadores de ametropias cilíndricas e esféricas.PURPOSE: To determine a correction factor for refractive errors evaluated without cycloplegy effect. METHODS: A study was made with 623 patients (1,246 eyes of both sexes, aging between 3 and 40 years old. The dynamic and static refractometries were obtained using the automated refractor Shin-Nippon Accuref-K 9001. 1% Cyclopentolate was dropped and the static refractometry was performed in 30 minutes. Data were analyzed using the linear regression model and the multiple regression model of the diopter
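
    The correction factor described above amounts to a regression that predicts the cycloplegic refraction from the non-cycloplegic automated-refractor reading and the patient's age. A hedged sketch of such a multiple regression is shown below; the array names and data files are hypothetical, and the fitted equations reported in the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data files: spherical values from the automated refractor
# without cycloplegia, patient ages, and the cycloplegic reference values.
sph_dynamic = np.load('sph_without_cycloplegia.npy')
age_years = np.load('age_years.npy')
sph_cyclo = np.load('sph_with_cycloplegia.npy')

X = np.column_stack([sph_dynamic, age_years])   # regress on reading and age
model = LinearRegression().fit(X, sph_cyclo)

def corrected_sphere(reading, age):
    """Estimate the cycloplegic spherical value from a non-cycloplegic reading."""
    return float(model.predict([[reading, age]])[0])
```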

  19. Automated thermochemolysis reactor for detection of Bacillus anthracis endospores by gas chromatography–mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dan [Department of Chemistry and Biochemistry, Brigham Young University, Provo, UT 84602 (United States); Rands, Anthony D.; Losee, Scott C. [Torion Technologies, American Fork, UT 84003 (United States); Holt, Brian C. [Department of Statistics, Brigham Young University, Provo, UT 84602 (United States); Williams, John R. [Department of Chemistry and Biochemistry, Brigham Young University, Provo, UT 84602 (United States); Lammert, Stephen A. [Torion Technologies, American Fork, UT 84003 (United States); Robison, Richard A. [Department of Microbiology and Molecular Biology, Brigham Young University, Provo, UT 84602 (United States); Tolley, H. Dennis [Department of Statistics, Brigham Young University, Provo, UT 84602 (United States); Lee, Milton L., E-mail: milton_lee@byu.edu [Department of Chemistry and Biochemistry, Brigham Young University, Provo, UT 84602 (United States)

    2013-05-02

    Highlights: • An automated sample preparation system for Bacillus anthracis endospores was developed. • A thermochemolysis method was applied to produce and derivatize biomarkers for Bacillus anthracis detection. • The autoreactor controlled the precise delivery of reagents, and TCM reaction times and temperatures. • Solid phase microextraction was used to extract biomarkers, and GC–MS was used for final identification. • This autoreactor was successfully applied to the identification of Bacillus anthracis endospores. Abstract: An automated sample preparation system was developed and tested for the rapid detection of Bacillus anthracis endospores by gas chromatography–mass spectrometry (GC–MS) for eventual use in the field. This reactor is capable of automatically processing suspected bio-threat agents to release and derivatize unique chemical biomarkers by thermochemolysis (TCM). The system automatically controls the movement of sample vials from one position to another, crimping of septum caps onto the vials, precise delivery of reagents, and TCM reaction times and temperatures. The specific operations of introduction of sample vials, solid phase microextraction (SPME) sampling, injection into the GC–MS system, and ejection of used vials from the system were performed manually in this study, although they can be integrated into the automated system. Manual SPME sampling is performed by following visual and audible signal prompts for inserting the fiber into and retracting it from the sampling port. A rotating carousel design allows for simultaneous sample collection, reaction, biomarker extraction and analysis of sequential samples. Dipicolinic acid methyl ester (DPAME), 3-methyl-2-butenoic acid methyl ester (a fragment of anthrose) and two methylated sugars were used to compare the performance of the autoreactor with manual TCM. Statistical algorithms were used to construct reliable bacterial endospore signatures, and 24

  20. Automated thermochemolysis reactor for detection of Bacillus anthracis endospores by gas chromatography–mass spectrometry

    International Nuclear Information System (INIS)

    Li, Dan; Rands, Anthony D.; Losee, Scott C.; Holt, Brian C.; Williams, John R.; Lammert, Stephen A.; Robison, Richard A.; Tolley, H. Dennis; Lee, Milton L.

    2013-01-01

    Highlights: • An automated sample preparation system for Bacillus anthracis endospores was developed. • A thermochemolysis method was applied to produce and derivatize biomarkers for Bacillus anthracis detection. • The autoreactor controlled the precise delivery of reagents, and TCM reaction times and temperatures. • Solid phase microextraction was used to extract biomarkers, and GC–MS was used for final identification. • This autoreactor was successfully applied to the identification of Bacillus anthracis endospores. Abstract: An automated sample preparation system was developed and tested for the rapid detection of Bacillus anthracis endospores by gas chromatography–mass spectrometry (GC–MS) for eventual use in the field. This reactor is capable of automatically processing suspected bio-threat agents to release and derivatize unique chemical biomarkers by thermochemolysis (TCM). The system automatically controls the movement of sample vials from one position to another, crimping of septum caps onto the vials, precise delivery of reagents, and TCM reaction times and temperatures. The specific operations of introduction of sample vials, solid phase microextraction (SPME) sampling, injection into the GC–MS system, and ejection of used vials from the system were performed manually in this study, although they can be integrated into the automated system. Manual SPME sampling is performed by following visual and audible signal prompts for inserting the fiber into and retracting it from the sampling port. A rotating carousel design allows for simultaneous sample collection, reaction, biomarker extraction and analysis of sequential samples. Dipicolinic acid methyl ester (DPAME), 3-methyl-2-butenoic acid methyl ester (a fragment of anthrose) and two methylated sugars were used to compare the performance of the autoreactor with manual TCM. Statistical algorithms were used to construct reliable bacterial endospore signatures, and 24

  1. UAS imaging for automated crop lodging detection: a case study over an experimental maize field

    Science.gov (United States)

    Chu, Tianxing; Starek, Michael J.; Brewer, Michael J.; Masiane, Tiisetso; Murray, Seth C.

    2017-05-01

    Lodging has been recognized as one of the major destructive factors for crop quality and yield, particularly in corn. A variety of contributing causes, e.g. disease and/or pests, weather conditions, excessive nitrogen, and high plant density, may lead to lodging before the harvesting season. Traditional lodging detection strategies mainly rely on ground data collection, which is insufficient in efficiency and accuracy. To address this problem, this research focuses on the use of unmanned aircraft systems (UAS) for automated detection of crop lodging. The study was conducted over an experimental corn field at the Texas A&M AgriLife Research and Extension Center at Corpus Christi, Texas, during the growing season of 2016. Nadir-view images of the corn field were taken by small UAS platforms equipped with consumer-grade RGB and NIR cameras on a weekly basis, enabling timely observation of plant growth. 3D structural information of the plants was reconstructed using structure-from-motion photogrammetry. The structural information was then used to calculate crop height and rates of growth. A lodging index for detecting corn lodging was proposed. Ground truth data on lodging were collected on a per-row basis and used for fair assessment and tuning of the detection algorithm. Results show that the UAS-measured height correlates well with the ground-measured height. More importantly, the lodging index can effectively reflect the severity of corn lodging and the yield after harvesting.
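
    One simple way to turn UAS canopy height models into a lodging indicator is to compare per-row heights between consecutive flights: a sudden drop flags lodged plants. The index below is an assumed, simplified formulation for illustration only; the paper's actual lodging index is not reproduced here.

```python
import numpy as np

def lodging_index(height_now, height_prev, expected_growth=0.0):
    """Fractional canopy-height drop per row between two flights
    (0 = intact canopy, values approaching 1 = canopy collapsed)."""
    drop = (height_prev + expected_growth) - height_now
    return np.clip(drop / np.maximum(height_prev, 1e-6), 0.0, 1.0)

# Example: a row whose mean canopy height fell from 2.1 m to 0.9 m.
print(lodging_index(np.array([0.9]), np.array([2.1])))   # about 0.57
```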

  2. Leading quantum gravitational corrections to scalar QED

    International Nuclear Information System (INIS)

    Bjerrum-Bohr, N.E.J.

    2002-01-01

    We consider the leading post-Newtonian and quantum corrections to the non-relativistic scattering amplitude of charged scalars in the combined theory of general relativity and scalar QED. The combined theory is treated as an effective field theory. This allows for a consistent quantization of the gravitational field. The appropriate vertex rules are extracted from the action, and the non-analytic contributions to the 1-loop scattering matrix are calculated in the non-relativistic limit. The non-analytical parts of the scattering amplitude, which are known to give the long range, low energy, leading quantum corrections, are used to construct the leading post-Newtonian and quantum corrections to the two-particle non-relativistic scattering matrix potential for two charged scalars. The result is discussed in relation to experimental verifications

  3. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Charles, P. H., E-mail: paulcharles111@gmail.com [Department of Radiation Oncology, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland 4102, Australia and School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Cranmer-Sargison, G. [Department of Medical Physics, Saskatchewan Cancer Agency, 20 Campus Drive, Saskatoon, Saskatchewan S7L 3P6, Canada and College of Medicine, University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan S7N 5E5 (Canada); Thwaites, D. I. [Institute of Medical Physics, School of Physics, University of Sydney, New South Wales 2006 (Australia); Kairn, T. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001, Australia and Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Crowe, S. B.; Langton, C. M.; Trapp, J. V. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Pedrazzini, G. [Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Aland, T.; Kenny, J. [Epworth Radiation Oncology, 89 Bridge Road, Richmond, Melbourne, Victoria 3121 (Australia)

    2014-10-15

    Purpose: Two diodes which do not require correction factors for small field relative output measurements are designed and validated using an experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable “air cap”. A set of output ratios (OR_Det^(f_clin)) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_Det^(f_clin) measured using an IBA stereotactic field diode (SFD). The small-field correction factor k_(Q_clin,Q_msr)^(f_clin,f_msr) was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_(Q_clin,Q_msr)^(f_clin,f_msr) was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is “correction-free” in small field relative dosimetry. In addition, the feasibility of experimentally transferring k_(Q_clin,Q_msr)^(f_clin,f_msr

  4. Correction of the horizontal closed orbit at all energies

    International Nuclear Information System (INIS)

    Degueurce, L.; Nakach, A.

    The method is accomplished in two steps. At average energy, the closed orbit is corrected by a remote realignment of the focusing quadrupoles by a known amount. The closed orbit created by this position adjustment of the quadrupoles is valid during the whole cycle; at low energy, however, an additional closed-orbit distortion arises from constant currents or parasitic fields whose effects decrease as the energy increases. This residual orbit is corrected during injection by dipolar correction fields located inside the quadrupoles and fed by direct currents. The closed orbit resulting from the superposition of the two types of corrections and defects is thereby brought back to within ±2.5 mm of the center of the quadrupoles

  5. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of a brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
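
    As a rough point of reference, the simplest relative of this approach is an ordinary Gaussian-mixture EM clustering of voxel intensities. The sketch below is only that simplified stand-in: it has no stomped-normal components, no rough lower/boundary regions, no bias-field estimation and no hidden Markov random field prior, all of which distinguish the proposed method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_tissues(brain_voxel_intensities, n_classes=3):
    """Cluster brain-mask voxel intensities into tissue classes with plain EM."""
    x = np.asarray(brain_voxel_intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, covariance_type='full',
                          random_state=0).fit(x)
    return gmm.predict(x)
```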

  6. Corrective Action Investigation Plan for Corrective Action Unit 190: Contaminated Waste Sites Nevada Test Site, Nevada, Rev. No.: 0

    International Nuclear Information System (INIS)

    Wickline, Alfred

    2006-01-01

    Corrective Action Unit (CAU) 190 is located in Areas 11 and 14 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 190 is comprised of the four Corrective Action Sites (CASs) listed below: (1) 11-02-01, Underground Centrifuge; (2) 11-02-02, Drain Lines and Outfall; (3) 11-59-01, Tweezer Facility Septic System; and (4) 14-23-01, LTU-6 Test Area. These sites are being investigated because existing information is insufficient on the nature and extent of potential contamination to evaluate and recommend corrective action alternatives. Additional information will be obtained before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS by conducting a corrective action investigation (CAI). The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on August 24, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture, and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 190. The scope of the CAU 190 CAI includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling; (2) Conduct radiological and geophysical surveys; (3) Perform field screening; (4) Collect and submit environmental samples for laboratory analysis to determine whether contaminants of concern (COCs) are present; (5) If COCs are present, collect additional step-out samples to define the lateral and vertical extent of the contamination; (6) Collect samples of source material, if present

  7. Note: Automated optical focusing on encapsulated devices for scanning light stimulation systems

    International Nuclear Information System (INIS)

    Bitzer, L. A.; Benson, N.; Schmechel, R.

    2014-01-01

    Recently, a scanning light stimulation system with an automated, adaptive focus correction during the measurement was introduced. Here, its application on encapsulated devices is discussed. This includes the changes an encapsulating optical medium introduces to the focusing process as well as to the subsequent light stimulation measurement. Further, the focusing method is modified to compensate for the influence of refraction and to maintain a minimum beam diameter on the sample surface

  8. Attenuation correction for the collimated gamma ray assay of cylindrical samples

    International Nuclear Information System (INIS)

    Patra, Sabyasachi; Agarwal, Chhavi; Goswami, A.; Gathibandhe, M.

    2015-01-01

    The Hybrid Monte Carlo (HMC) method developed earlier for attenuation correction of non-collimated samples [Agarwal et al., 2008, Nucl. Instrum. Methods A 597, 198] has been extended to the segmented gamma-ray assay of cylindrical samples. The method has been validated both experimentally and theoretically. For experimental validation, the results of the HMC calculation have been compared with experimentally obtained attenuation correction factors. The HMC attenuation correction factors have also been compared with the results obtained from the near-field and far-field formulae available in the literature at two sample-to-detector distances (10.3 cm and 20.4 cm). The method has been found to be valid at all sample-to-detector distances over a wide range of transmittance. On the other hand, the near-field and far-field formulae available in the literature have been found to work only over a limited range of sample-to-detector distances and transmittances. The HMC method has been further extended to circular collimated geometries, where no analytical formula for attenuation correction exists. - Highlights: • Hybrid Monte Carlo method for attenuation correction developed for an SGA system. • The method is found to work for all sample-detector geometries and all transmittances. • The near-field formula is applicable only beyond a certain sample-detector distance. • The far-field formula is applicable only for higher transmittances (>18%). • The Hybrid Monte Carlo method is further extended to circular collimated geometry
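
    For orientation, the far-field self-attenuation correction for a uniform cylinder viewed side-on can be estimated with a few lines of Monte Carlo: sample emission points over the circular cross-section, attenuate each along a parallel ray to the surface, and invert the mean transmission. This is the simple far-field picture only, not the hybrid near-field treatment developed by the authors.

```python
import numpy as np

def far_field_attenuation_correction(mu, radius, n=200_000, seed=0):
    """CF such that corrected = measured * CF, for linear attenuation
    coefficient mu (1/cm) and sample radius (cm), parallel rays along +x."""
    rng = np.random.default_rng(seed)
    r = radius * np.sqrt(rng.random(n))        # uniform points over the disc
    theta = 2 * np.pi * rng.random(n)
    x, y = r * np.cos(theta), r * np.sin(theta)
    path = np.sqrt(radius**2 - y**2) - x       # chord length to the exit surface
    return 1.0 / np.exp(-mu * path).mean()

print(far_field_attenuation_correction(mu=0.2, radius=3.0))
```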

  9. Semi-automated Robust Quantification of Lesions (SRQL) Toolbox

    Directory of Open Access Journals (Sweden)

    Kaori Ito

    2017-02-01

    Full Text Available Quantifying lesions in a robust manner is fundamental for studying the effects of neuroanatomical changes in the post-stroke brain on recovery. However, the wide variability in lesion characteristics across individuals makes manual lesion segmentation a challenging and often subjective process. This makes it difficult to combine stroke lesion data across multiple research sites, due to subjective differences in how lesions may be defined. We developed the Semi-automated Robust Quantification of Lesions (SRQL; https://github.com/npnl/SRQL; DOI: 10.5281/zenodo.267213) Toolbox, which performs several analysis steps: (1) a white matter intensity correction that removes healthy white matter voxels from the lesion mask, making lesion masks more robust to subjective errors; (2) an automated report of descriptive statistics on lesions for simplified comparison between or across groups; and (3) an option to perform analyses in both native and standard space to facilitate analyses in either space, or comparisons between spaces. Here, we describe the methods implemented in the toolbox and demonstrate the outputs of the SRQL toolbox.
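
    For readers who want a feel for the white matter intensity correction step, the hedged sketch below removes bright, white-matter-like voxels from a binary lesion mask using a simple intensity threshold; the thresholding rule, file names, and function are assumptions for illustration and not the SRQL implementation.

```python
import nibabel as nib
import numpy as np

def white_matter_intensity_correction(t1_path: str, lesion_path: str,
                                      out_path: str, n_sd: float = 2.0) -> None:
    """Drop voxels from a binary lesion mask whose T1 intensity looks like
    healthy white matter (here: brighter than mean + n_sd * SD of the
    intensities inside the mask).  The thresholding rule is an assumption
    for illustration, not the SRQL implementation."""
    t1_img = nib.load(t1_path)
    lesion_img = nib.load(lesion_path)
    t1 = t1_img.get_fdata()
    mask = lesion_img.get_fdata() > 0

    vals = t1[mask]
    threshold = vals.mean() + n_sd * vals.std()
    cleaned = mask & (t1 <= threshold)

    nib.save(nib.Nifti1Image(cleaned.astype(np.uint8), lesion_img.affine),
             out_path)

# Hypothetical usage (file names are placeholders):
# white_matter_intensity_correction("sub01_T1.nii.gz", "sub01_lesion.nii.gz",
#                                   "sub01_lesion_wmcorr.nii.gz")
```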

  10. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results and is suitable for graduate students and researchers in mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  11. Automized squark-neutralino production to next-to-leading order

    International Nuclear Information System (INIS)

    Binoth, Thomas; Wigmore, Ioan; Netto, Dorival Goncalves; Lopez-Val, David; Plehn, Tilman; Mawatari, Kentarou

    2011-01-01

    The production of one hard jet in association with missing transverse energy is a major LHC search channel motivated by many scenarios for physics beyond the standard model. In scenarios with a weakly interacting dark matter candidate, like supersymmetry, it arises from the associated production of a quark partner with the dark matter agent. We present the next-to-leading-order cross section calculation as the first application of the fully automized MadGolem package. We find moderate corrections to the production rate with a strongly reduced theory uncertainty.

  12. Importance of the Decompensative Correction of the Gravity Field for Study of the Upper Crust: Application to the Arabian Plate and Surroundings

    Science.gov (United States)

    Kaban, Mikhail K.; El Khrepy, Sami; Al-Arifi, Nassir

    2017-01-01

    The isostatic correction represents one of the most useful "geological" reduction methods of the gravity field. With this correction it is possible to remove a significant part of the effect of deep density heterogeneity, which dominates the Bouguer gravity anomalies. However, even this reduction does not show the full gravity effect of unknown anomalies in the upper crust, since their impact is substantially reduced by the isostatic compensation. We analyze a so-called decompensative correction of the isostatic anomalies, which makes it possible to separate these effects. It was demonstrated that this correction is very significant at mid-range wavelengths and may exceed 100 × 10⁻⁵ m/s² (mGal); ignoring this effect would therefore lead to wrong conclusions about the structure of the upper crust. At the same time, the decompensative correction is very sensitive to the compensation depth and the effective elastic thickness of the lithosphere, so these parameters should be properly determined from other studies. Based on this technique, we estimate the decompensative correction for the Arabian plate and surrounding regions. The amplitude of the decompensative anomalies reaches ±250 × 10⁻⁵ m/s² (mGal), evidencing both large density anomalies in the upper crust (including sediments) and strong isostatic disturbances of the lithosphere. These results improve the knowledge of the crustal structure in the Middle East.

  13. A Monte Carlo procedure for Hamiltonians with small nonlocal correction terms

    International Nuclear Information System (INIS)

    Mack, G.; Pinn, K.

    1986-03-01

    We consider lattice field theories whose Hamiltonians contain small nonlocal correction terms. We propose to do simulations for an auxiliary polymer system with field-dependent activities. If a nonlocal correction term to the Hamiltonian is small, it needs to be evaluated only rarely. (orig.)
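
    The paper's polymer-representation scheme is not reproduced here, but the idea that a small, expensive nonlocal correction need only be evaluated rarely can be illustrated with a generic two-stage (delayed-acceptance) Metropolis update: the cheap local action filters proposals first, and the nonlocal term is evaluated only for moves that survive that filter. The toy actions and parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_action(phi):
    """Cheap local part: free scalar field with a mass term on a 1D lattice."""
    kinetic = 0.5 * np.sum((np.roll(phi, -1) - phi) ** 2)
    mass = 0.5 * 0.25 * np.sum(phi ** 2)
    return kinetic + mass

def nonlocal_correction(phi, eps=1e-3):
    """Small but expensive nonlocal term (toy all-to-all coupling)."""
    return eps * np.sum(np.outer(phi, phi)) / phi.size

def two_stage_sweep(phi, step=0.5):
    """Delayed-acceptance Metropolis: the nonlocal correction is evaluated
    only for proposals that already pass the local accept/reject test, so
    the expensive term is computed rarely.  The two-stage acceptance keeps
    detailed balance with respect to exp(-S_local - S_nonlocal)."""
    for i in range(phi.size):
        prop = phi.copy()
        prop[i] += step * rng.normal()
        d_loc = local_action(prop) - local_action(phi)
        if rng.random() < np.exp(-d_loc):            # stage 1: local filter
            d_nl = nonlocal_correction(prop) - nonlocal_correction(phi)
            if rng.random() < np.exp(-d_nl):         # stage 2: rare correction
                phi = prop
    return phi

phi = np.zeros(32)
for _ in range(200):
    phi = two_stage_sweep(phi)
print(np.mean(phi ** 2))
```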

  14. Lean automation development : applying lean principles to the automation development process

    OpenAIRE

    Granlund, Anna; Wiktorsson, Magnus; Grahn, Sten; Friedler, Niklas

    2014-01-01

    A broad empirical study indicates that automation development shows potential for improvement. In the paper, 13 lean product development principles are contrasted with the automation development process, and it is suggested why and how these principles can facilitate, support and improve the automation development process. The paper summarises what characterises a lean automation development process and what consequences it entails. Main differences compared to current pr...

  15. Automation of Raw sugar Crystallizer tacho of Central Julio Antonio Mella

    Directory of Open Access Journals (Sweden)

    Mónica Mulet-Hing

    2016-04-01

    Full Text Available This paper analyses the current situation of, and prospective solutions for, economical and efficient automation in the tacho (vacuum pan) area of the Julio Antonio Mella sugar mill, as part of the "Supervisory Control Systems" for the first level of automation in this industry, with the proposed automation of the mill's tacho area implemented with PLCs. The proposal arises from the need to improve the crystallization process in tacho 5 of that area, which currently has no automation (everything is done manually); automating it will undoubtedly improve the quality of the final product. The structure and the control system variables are defined, demonstrating the feasibility of the proposed solution. The main result of the work is an automation proposal structured as a control algorithm that takes into account the requirements, the technical resources for implementation, the variables to be observed and processed, and the final control elements; the corresponding field instrumentation is proposed so that control can be performed satisfactorily with the minimum possible investment.

  16. Automated borehole gravity meter system

    International Nuclear Information System (INIS)

    Lautzenhiser, Th.V.; Wirtz, J.D.

    1984-01-01

    An automated borehole gravity meter system for measuring gravity within a wellbore. The gravity meter includes leveling devices for leveling the borehole gravity meter and displacement devices for applying forces to a gravity sensing device within the gravity meter to bring the gravity sensing device to a predetermined or null position. Electronic sensing and control devices are provided for (i) activating the displacement devices, (ii) sensing the forces applied to the gravity sensing device, (iii) electronically converting the values of the forces into a representation of the gravity at the location in the wellbore, and (iv) outputting such representation. The system further includes electronic control devices capable of correcting the representation of gravity for tidal effects, as well as calculating and outputting the formation bulk density and/or porosity.
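
    For context, the textbook borehole-gravimetry relation behind a bulk-density output is rho = (F - dg/dz) / (4*pi*G), where dg/dz is the measured interval gravity gradient and F is the free-air gradient. The sketch below applies this relation; it is illustrative and not necessarily the algorithm used in the patented system.

```python
import math

# Illustrative textbook relation, not necessarily the patented system's
# algorithm: inside a borehole the vertical gravity gradient is
# dg/dz = F - 4*pi*G*rho, so rho = (F - dg/dz) / (4*pi*G).

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
FREE_AIR = 0.3086e-5   # free-air gradient, s^-2 (0.3086 mGal/m)
MGAL = 1e-5            # 1 mGal = 1e-5 m/s^2

def bulk_density(delta_g_mgal: float, delta_z_m: float) -> float:
    """Formation bulk density in kg/m^3 from the gravity difference (mGal)
    between two stations separated vertically by delta_z_m metres."""
    gradient = (delta_g_mgal * MGAL) / delta_z_m   # measured dg/dz in s^-2
    return (FREE_AIR - gradient) / (4.0 * math.pi * G)

# Example: a 1.7 mGal difference over a 10 m interval gives ~1650 kg/m^3.
print(f"{bulk_density(1.7, 10.0):.0f} kg/m^3")
```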

  17. A conceptual model of the automated credibility assessment of the volunteered geographic information

    International Nuclear Information System (INIS)

    Idris, N H; Jackson, M J; Ishak, M H I

    2014-01-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to contribute to realizing the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. Two main components are proposed to be assessed in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting (websites) and the sources of data/information. The data component comprises indicators to assess absolute and relative data positioning, and attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess both components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality of data contributed by web citizen providers.
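
    The supervised text-categorization step proposed for the metadata component could, for example, look like the following sketch (a generic TF-IDF plus logistic-regression classifier; the training texts, labels, and scoring are purely illustrative and not taken from the paper).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: text describing the hosting website / data source,
# labelled as credible (1) or not (0).  Examples and labels are illustrative.
texts = [
    "official municipal open data portal, updated daily, contact listed",
    "anonymous blog post, no sources, last edited in 2009",
    "university research group publishing survey-grade GPS tracks",
    "mirror site with broken links and scraped advertisements",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# Score a new hosting description; the probability could feed the overall
# credibility indicator for the metadata component.
print(classifier.predict_proba(["volunteer mapping community wiki page"])[0, 1])
```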

  18. Hitchhiker's Guide to Voxel Segmentation for Partial Volume Correction of in Vivo Magnetic Resonance Spectroscopy

    Directory of Open Access Journals (Sweden)

    Scott Quadrelli

    2016-01-01

    Full Text Available Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to its different tissue contents. This article aims to detail how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton magnetic resonance spectroscopy quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages.
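
    The cerebrospinal fluid correction that motivates the segmentation is commonly applied by rescaling the apparent metabolite concentration by the non-CSF fraction of the voxel, i.e. dividing by (1 - f_CSF). The short sketch below shows this standard correction with hypothetical tissue fractions; it is not code from the article's framework.

```python
def csf_corrected_concentration(raw_conc: float, f_gm: float, f_wm: float,
                                f_csf: float) -> float:
    """Commonly used partial-volume correction for MRS: metabolites are
    assumed absent from CSF, so the apparent concentration is rescaled by
    the non-CSF fraction of the voxel.  Tissue fractions come from the
    segmented voxel mask and should sum to ~1."""
    assert abs(f_gm + f_wm + f_csf - 1.0) < 1e-3
    return raw_conc / (1.0 - f_csf)

# Example: a voxel that is 20% CSF inflates a raw estimate of 8.0 mM
# to 10.0 mM after correction (tissue fractions are hypothetical).
print(csf_corrected_concentration(8.0, 0.45, 0.35, 0.20))
```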

  19. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  20. Automated identification of animal species in camera trap images

    NARCIS (Netherlands)

    Yu, X.; Wang, J.; Kays, R.; Jansen, P.A.; Wang, T.; Huang, T.

    2013-01-01

    Image sensors are increasingly being used in biodiversity monitoring, with each study generating many thousands or millions of pictures. Efficiently identifying the species captured by each image is a critical challenge for the advancement of this field. Here, we present an automated species