WorldWideScience

Sample records for field correction automated

  1. Space Weather Magnetometer Set with Automated AC Spacecraft Field Correction for GEO-KOMPSAT-2A

    Science.gov (United States)

    Auster, U.; Magnes, W.; Delva, M.; Valavanoglou, A.; Leitner, S.; Hillenmaier, O.; Strauch, C.; Brown, P.; Whiteside, B.; Bendyk, M.; Hilgers, A.; Kraft, S.; Luntama, J. P.; Seon, J.

    2016-05-01

    Monitoring solar wind conditions, in particular the interplanetary magnetic field ahead of the Earth, is essential for accurate and reliable space weather forecasting. The magnetic condition of the spacecraft itself is a key parameter for the successful performance of the onboard magnetometer. In practice, a negligible spacecraft magnetic field cannot always be guaranteed, and magnetic sources on the spacecraft interfere with the natural magnetic field measured by the space magnetometer. The presented "ready-to-use" Service Oriented Spacecraft Magnetometer (SOSMAG) is developed for use on any satellite built without a magnetic cleanliness programme. It enables detection of AC variations of the spacecraft field on a time scale suitable for distinguishing the magnetic field variations relevant to space weather phenomena, such as a sudden increase in the interplanetary field or a southward turning. This is achieved through the use of dual fluxgate magnetometers on a short boom (1 m) and two additional AMR sensors on the spacecraft body, which monitor potential AC disturbers. The measurements of the latter sensors enable an automated correction of the AC signal contributions from the spacecraft in the final magnetic field vector. After successful development and testing of the EQM prototype, a flight model (FM) is being built for the Korean satellite GEO-KOMPSAT-2A, with launch foreseen in 2018.
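    A minimal sketch of the correction principle described above, assuming calibrated linear coupling matrices between each body-mounted AMR sensor and the boom fluxgate; the function name, filter settings, and the linear-coupling model are illustrative assumptions, not the published SOSMAG algorithm.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    # Hypothetical sketch: two AMR sensors on the spacecraft body monitor AC
    # disturbers; assumed coupling matrices map their high-passed signals onto
    # the boom fluxgate measurement, where the disturbance is subtracted.
    def correct_ac_disturbance(b_boom, b_amr1, b_amr2, c1, c2, fs, f_cut=0.01):
        """b_boom, b_amr1, b_amr2: (N, 3) field time series; c1, c2: assumed
        (3, 3) coupling matrices; fs: sampling rate in Hz (must exceed 2*f_cut)."""
        # Keep only the AC part of the AMR signals; static spacecraft fields
        # are handled by the usual offset calibration, not by this step.
        b, a = butter(2, f_cut / (fs / 2.0), btype="highpass")
        ac1 = filtfilt(b, a, b_amr1, axis=0)
        ac2 = filtfilt(b, a, b_amr2, axis=0)
        return b_boom - ac1 @ c1.T - ac2 @ c2.T
    ```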

  2. Peripheral refractive correction and automated perimetric profiles.

    Science.gov (United States)

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated in a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0, 20 and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then determined at these eccentricities (stimulus sizes 0 and III), with and without the appropriate peripheral refractive correction, using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  3. Automation of one-loop QCD corrections

    CERN Document Server

    Hirschi, Valentin; Frixione, Stefano; Garzelli, Maria Vittoria; Maltoni, Fabio; Pittau, Roberto

    2011-01-01

    We present the complete automation of the computation of one-loop QCD corrections, including UV renormalization, to an arbitrary scattering process in the Standard Model. This is achieved by embedding the OPP integrand reduction technique, as implemented in CutTools, into the MadGraph framework. By interfacing the tool so constructed, which we dub MadLoop, with MadFKS, the fully automatic computation of any infrared-safe observable at the next-to-leading order in QCD is attained. We demonstrate the flexibility and the reach of our method by calculating the production rates for a variety of processes at the 7 TeV LHC.

  4. Automated NLO QCD corrections with WHIZARD

    International Nuclear Information System (INIS)

    Weiss, Christian [Siegen Univ.]; Chokoufe Nejad, Bijan; Reuter, Juergen; Kilian, Wolfgang

    2015-10-01

    We briefly discuss the current status of NLO QCD automation in the Monte Carlo event generator WHIZARD. The functionality is presented for the explicit study of off-shell top quark production with associated backgrounds at a lepton collider.

  5. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks rely extensively on highly temperature-sensitive dielectric sensors because of their low cost, ease of use, and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective in soil moisture monitoring networks with different sensor setups and in those covering diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The removal of rainy-day effects from the SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia, together covering four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements, even without on-site rainfall data. Furthermore, it was found that the actual daily average of SWC is altered by the temperature effects of dielectric sensors…
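    The abstract does not give the correction equations, so the following is only a generic illustration: readings are referred to a reference temperature through a linear sensitivity fitted from the data. The function name, the linear model, and t_ref are assumptions, not the published method.

    ```python
    import numpy as np

    # Illustrative only: correct each SWC reading to a reference temperature
    # using a linear sensitivity k = dSWC/dT fitted by least squares (the
    # published method is more elaborate and removes rainy-day effects with
    # a statistical test before fitting).
    def temperature_correct(swc, temp, t_ref=25.0):
        """swc, temp: 1-D arrays of soil water content and sensor temperature."""
        A = np.vstack([temp - t_ref, np.ones_like(temp)]).T
        k, _ = np.linalg.lstsq(A, swc, rcond=None)[0]  # slope, intercept
        return swc - k * (temp - t_ref)
    ```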

  6. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)]

    2016-07-01

    I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended to deal with general quantum field theories. Internally, Recola computes off-shell currents; for new models, the new rules for off-shell currents are derived from the Feynman rules. My work relies on the UFO format, which can be obtained from a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  7. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other effects. Baseline correction is therefore a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first adaptively determines the structuring element and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible enough to handle different kinds of baselines in various practical situations. A comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
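    A compact sketch of baseline estimation by iterated grey-scale opening, in the spirit of the method above; the paper's adaptive structuring-element selection and stopping rule are not reproduced, and the size schedule here is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import grey_opening

    # Grey-scale opening is anti-extensive, so repeatedly opening with a
    # growing structuring element peels off progressively wider peaks and
    # leaves an estimate of the slowly varying baseline.
    def morphological_baseline(spectrum, max_size=51):
        baseline = np.asarray(spectrum, dtype=float)
        for size in range(3, max_size, 2):
            baseline = grey_opening(baseline, size=size)
        return baseline

    # corrected = spectrum - morphological_baseline(spectrum)
    ```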

  8. Scatter factor corrections for elongated fields

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sohn, W.H.; Sibata, C.H.; McCarthy, W.A.

    1989-01-01

    Measurements have been made to determine scatter factor corrections for elongated fields of Cobalt-60 and for nominal linear accelerator energies of 6 MV (Siemens Mevatron 67) and 18 MV (AECL Therac 20). It was found that, for every energy, the collimator scatter factor varies by 2% or more as the field length-to-width ratio increases beyond 3:1. The phantom scatter factor is independent of which collimator pair is elongated at these energies. For 18 MV photons it was found that the collimator scatter factor is complicated by field-size-dependent backscatter into the beam monitor.

  9. Longitudinal wake field corrections in circular machines

    International Nuclear Information System (INIS)

    Symon, K.R.

    1996-01-01

    In computations of longitudinal particle motions in accelerators and storage rings, the fields produced by the interaction of the beam with the cavity in which it circulates are usually calculated by multiplying the Fourier components of the beam current by the appropriate impedances. This procedure neglects the slow variation with time of the Fourier coefficients and of the beam revolution frequency. When there are cavity elements with decay times comparable with or larger than the time over which the beam parameters change, these changes cannot be neglected. Corrections for this effect have been worked out in terms of the response functions of elements in the ring. The result is expressed as a correction to the impedance which depends on the way in which the beam parameters are changing. A method is presented for correcting a numerical simulation by keeping track of the steady-state and transient terms in the response of a cavity.
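    As a toy illustration of the bookkeeping described above, the sketch below tracks the steady-state and transient parts of a single resonant cavity mode driven by a slowly varying beam harmonic; the impedance model, filling time, and all symbols are textbook assumptions, not the paper's formulation.

    ```python
    import numpy as np

    # Single cavity mode with impedance Z(w) = R / (1 + iQ(w/wr - wr/w)).
    # The steady-state phasor tracks the slowly varying beam harmonic I_n(t);
    # a transient term, launched whenever I_n changes, decays with the mode's
    # filling time tau = 2Q/wr. V is the complex envelope at the harmonic w.
    def cavity_voltage(I_n, w, R, Q, wr, dt):
        tau = 2.0 * Q / wr
        pole = 1j * wr - 1.0 / tau
        Z = R / (1.0 + 1j * Q * (w / wr - wr / w))
        V = np.zeros(len(I_n), dtype=complex)
        transient = -Z * I_n[0]                       # enforce V(0) = 0
        for k in range(1, len(I_n)):
            transient *= np.exp((pole - 1j * w) * dt)  # decay in the beam frame
            transient += -Z * (I_n[k] - I_n[k - 1])    # kick from each step in I_n
            V[k] = Z * I_n[k] + transient
        return V
    ```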

  10. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses

    Directory of Open Access Journals (Sweden)

    Kazunori Hirasawa

    2017-10-01

    AIM: To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. METHODS: Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocols under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). RESULTS: The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB… CONCLUSION: Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicate that, when the dome-shaped visual field test is performed in eyes with large pupils wearing refractive multifocal CLs, distance correction without additional near correction is to be recommended.

  11. Mean Field Analysis of Quantum Annealing Correction.

    Science.gov (United States)

    Matsuura, Shunji; Nishimori, Hidetoshi; Albash, Tameem; Lidar, Daniel A

    2016-06-03

    Quantum annealing correction (QAC) is a method that combines encoding with energy penalties and decoding to suppress and correct errors that degrade the performance of quantum annealers in solving optimization problems. While QAC has been experimentally demonstrated to successfully error correct a range of optimization problems, a clear understanding of its operating mechanism has been lacking. Here we bridge this gap using tools from quantum statistical mechanics. We study analytically tractable models using a mean-field analysis, specifically the p-body ferromagnetic infinite-range transverse-field Ising model as well as the quantum Hopfield model. We demonstrate that for p=2, where the phase transition is of second order, QAC pushes the transition to increasingly larger transverse field strengths. For p≥3, where the phase transition is of first order, QAC softens the closing of the gap for small energy penalty values and prevents its closure for sufficiently large energy penalty values. Thus QAC provides protection from excitations that occur near the quantum critical point. We find similar results for the Hopfield model, thus demonstrating that our conclusions hold in the presence of disorder.

  12. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    The paper considers the task of generating the requirements for, and creating, a calibration target for automated microscopy systems (AMS) of biomedical specimens, to ensure the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating which color correction method is useful for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3; the regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and choice of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error for both operating modes of the digital camera: with "default" color settings and with automatic white balance. At the same time, it was established that the use of color fields from the widely used Kodak Q60-E3 target does not…
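    For orientation, here is a sketch of polynomial colour correction fitted by regularized least squares (one reading of "conditioned least squares"); the full cubic term set and the ridge parameter are placeholders, since the abstract does not spell out the winning 3rd-order combination.

    ```python
    import numpy as np

    def cubic_features(rgb):
        """All monomials r^i g^j b^k with i + j + k <= 3 (20 terms);
        rgb: (n, 3) array of measured colour coordinates."""
        r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
        cols = []
        for i in range(4):
            for j in range(4 - i):
                for k in range(4 - i - j):
                    cols.append(r**i * g**j * b**k)
        return np.column_stack(cols)

    def fit_correction(measured, reference, lam=1e-3):
        """Ridge-regularized fit mapping measured patch colours to targets."""
        X = cubic_features(measured)              # (n_patches, 20)
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ reference)
        return W                                  # apply as cubic_features(p) @ W
    ```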

  13. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. The baseline in a spectral signal can induce uneven amplitude shifts across different wavenumbers and lead to bad results; these shifts should therefore be compensated before further analysis. Many algorithms are used to remove the baseline, but fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through a continuous wavelet transformation and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
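    A rough sketch of the AWFPSI idea under stated assumptions: peak regions are located with a continuous wavelet transform, and the baseline is interpolated across them from the peak-free segments. The width range, masking margin, and linear interpolant are placeholders, not the published parameters.

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    def awfpsi_like_baseline(y, widths=np.arange(1, 30), margin=10):
        peaks = find_peaks_cwt(y, widths)         # wavelet-based feature points
        mask = np.ones(len(y), dtype=bool)
        for p in peaks:                           # blank out a window per peak
            mask[max(0, p - margin):p + margin] = False
        x = np.arange(len(y))
        return np.interp(x, x[mask], y[mask])     # segment-wise interpolation
    ```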

  14. Relativistic Scott correction in self-generated magnetic fields

    DEFF Research Database (Denmark)

    Erdos, Laszlo; Fournais, Søren; Solovej, Jan Philip

    2012-01-01

    …The leading term of the ground-state energy is of order $Z^{7/3}$ and is unchanged by including the self-generated magnetic field. We prove the first correction term to this energy, the so-called Scott correction of the form $S(\alpha Z) Z^2$. The current paper extends the result of [SSS] on the Scott correction for relativistic molecules to include a self-generated magnetic field. Furthermore, we show that the corresponding Scott correction function $S$, first identified in [SSS], is unchanged by including a magnetic field. We also prove new Lieb-Thirring inequalities for the relativistic kinetic energy with magnetic fields.
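    For orientation, the expansion discussed above has the schematic form below (conventions vary between papers; the Thomas-Fermi coefficient is written generically):

    ```latex
    % Large-Z ground-state energy with the Scott correction; per the record
    % above, both the leading Thomas-Fermi term and the Scott term are
    % unchanged by the self-generated magnetic field.
    \begin{equation*}
      E(Z) = E_{\mathrm{TF}}\, Z^{7/3} + S(\alpha Z)\, Z^{2} + o(Z^{2}),
      \qquad Z \to \infty,\ \ \alpha Z \ \text{fixed}.
    \end{equation*}
    ```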

  15. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses.

    Science.gov (United States)

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

    To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocols under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB (P…)… Distance correction without additional near correction is to be recommended.

  16. Electromagnetic fields with vanishing quantum corrections

    Czech Academy of Sciences Publication Activity Database

    Ortaggio, Marcello; Pravda, Vojtěch

    2018-01-01

    Roč. 779, 10 April (2018), s. 393-395. ISSN 0370-2693. R&D Projects: GA ČR GA13-10042S. Institutional support: RVO:67985840. Keywords: nonlinear electrodynamics * quantum corrections. Subject RIV: BA - General Mathematics. OBOR OECD: Applied mathematics. Impact factor: 4.807, year: 2016. https://www.sciencedirect.com/science/article/pii/S0370269318300327?via%3Dihub

  17. Electromagnetic fields with vanishing quantum corrections

    Science.gov (United States)

    Ortaggio, Marcello; Pravda, Vojtěch

    2018-04-01

    We show that a large class of null electromagnetic fields are immune to any modifications of Maxwell's equations in the form of arbitrary powers and derivatives of the field strength. These are thus exact solutions to virtually any generalized classical electrodynamics containing both non-linear terms and higher derivatives, including, e.g., non-linear electrodynamics as well as QED- and string-motivated effective theories. This result holds not only in a flat or (anti-)de Sitter background, but also in a larger subset of Kundt spacetimes, which allow for the presence of aligned gravitational waves and pure radiation.

  18. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, segmentation of high-CT-number objects using combined region- and boundary-based segmentation, and second, classification of the objects into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the regions classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of the PET images. A piecewise calibration curve is then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique…
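    A sketch of the final step mentioned above, converting CT numbers to linear attenuation coefficients at 511 keV with a piecewise-linear (bilinear) calibration; the breakpoint and bone-branch slope are illustrative placeholders, not the study's calibration curve.

    ```python
    import numpy as np

    # Bilinear HU -> mu(511 keV) mapping: a water-like branch below the
    # breakpoint and a flatter bone branch above it (placeholder slopes).
    def hu_to_mu511(hu, mu_water=0.096, hu_break=0.0, bone_slope=0.5):
        hu = np.asarray(hu, dtype=float)
        soft = mu_water * (1.0 + hu / 1000.0)
        bone = mu_water * (1.0 + bone_slope * hu / 1000.0)
        return np.where(hu <= hu_break, soft, bone)
    ```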

  19. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  1. The Center-TRACON Automation System: Simulation and field testing

    Science.gov (United States)

    Denery, Dallas G.; Erzberger, Heinz

    1995-01-01

    A new concept for air traffic management in the terminal area, implemented as the Center-TRACON Automation System, has been under development at NASA Ames in a cooperative program with the FAA since 1991. The development has been strongly influenced by concurrent simulation and field site evaluations. The role of simulation and field activities in the development process will be discussed. Results of recent simulation and field tests will be presented.

  2. Automated aberration correction of arbitrary laser modes in high numerical aperture systems

    OpenAIRE

    Hering, Julian; Waller, Erik H.; Freymann, Georg von

    2016-01-01

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture...

  3. Radiation corrections to quantum processes in an intense electromagnetic field

    International Nuclear Information System (INIS)

    Narozhny, N.B.

    1979-01-01

    A derivation of an asymptotic expression for the mass correction of order α to the electron propagator in an intense electromagnetic field is presented. It is used for the calculation of radiation corrections to the electron and photon elastic scattering amplitudes in the α³ approximation. All proper diagrams contributing to the amplitudes and containing the above-mentioned correction to the propagator were considered, but not those which include vertex corrections. It is shown that the expansion parameter of the perturbation theory of quantum electrodynamics in intense fields grows no more slowly than $\alpha\chi^{1/3}$, at least for the electron amplitude, where $\chi = [(eF_{\mu\nu}p^{\nu})^2]^{1/2}/m^3$, $p$ is the momentum of the electron, and $F$ is the electromagnetic field tensor.

  4. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    Science.gov (United States)

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three-dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. We combine unsupervised image…

  5. Heterotic α′-corrections in Double Field Theory

    OpenAIRE

    Bedoya, Oscar (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina); Marqués, Diego (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina); Núñez, Carmen (Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, Buenos Aires, Argentina)

    2014-01-01

    We extend the generalized flux formulation of Double Field Theory to include all the first order bosonic contributions to the α′ expansion of the heterotic string low energy effective theory. The generalized tangent space and duality group are enhanced by α′ corrections, and the gauge symmetries are generated by the usual (gauged) generalized Lie derivative in the extended space. The generalized frame receives derivative corrections through the spin connection with torsion, which is incorpora...

  6. Error correcting circuit design with carbon nanotube field effect transistors

    Science.gov (United States)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error-correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field-effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
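    A self-contained software model of the (7, 4) Hamming encode/decode that the circuit above implements in hardware (the CNTFET layer obviously cannot be shown here); systematic generator and parity-check matrices are used.

    ```python
    import numpy as np

    # Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I].
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    def encode(data4):
        return (np.array(data4) @ G) % 2

    def decode(code7):
        code7 = np.array(code7).copy()
        syndrome = (H @ code7) % 2
        if syndrome.any():                  # non-zero syndrome: one-bit error
            # Columns of H are distinct, so the syndrome identifies the
            # flipped position directly.
            pos = np.where((H.T == syndrome).all(axis=1))[0][0]
            code7[pos] ^= 1
        return code7[:4]                    # systematic: data in first 4 bits

    # Example: a single flipped bit is corrected.
    cw = encode([1, 0, 1, 1]); cw[5] ^= 1
    assert (decode(cw) == [1, 0, 1, 1]).all()
    ```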

  7. Field nonuniformity correction for quantitative analysis of digitized mammograms

    International Nuclear Information System (INIS)

    Pawluczyk, Olga; Yaffe, Martin J.

    2001-01-01

    Several factors, including the heel effect, variation in distance from the x-ray source to points in the image and path obliquity, contribute to the signal nonuniformity of mammograms. To best use digitized mammograms for quantitative image analysis, these field nonuniformities must be corrected. An empirically based correction method, which uses a bowl-shaped calibration phantom, has been developed. Due to the annular spherical shape of the phantom, its attenuation is constant over the entire image. The remaining nonuniformities are due only to the heel and inverse-square effects, as well as the variable path through the beam filter, compression plate and image receptor. In logarithmic space, a normalized image of the phantom can be added to mammograms to correct for these effects. Then, an analytical correction for path obliquity in the breast can be applied to the images. It was found that the correction reduces the errors associated with field nonuniformity from 14% to 2% for a 4 cm block of material corresponding to a combination of 50% fibroglandular and 50% fatty breast tissue. A repeatability study showed that in regions as far as 20 cm away from the chest wall, variations due to imaging conditions and phantom alignment contribute <2% to the overall corrected signal.
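    A minimal sketch of the log-space correction described above, assuming a captured image of the bowl phantom; array names are illustrative, and the analytical breast-obliquity step is omitted.

    ```python
    import numpy as np

    # The phantom's attenuation is constant, so its normalized image captures
    # only heel, inverse-square and filter/plate/receptor effects; subtracting
    # its log (equivalently, adding the negated map) removes them.
    def flatfield_correct_log(mammogram, phantom):
        flat = np.log(phantom / phantom.mean())
        return np.log(np.asarray(mammogram, dtype=float)) - flat
    ```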

  8. Automated aberration correction of arbitrary laser modes in high numerical aperture systems.

    Science.gov (United States)

    Hering, Julian; Waller, Erik H; Von Freymann, Georg

    2016-12-12

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for amplitude and phase of the pupil function our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography also applications like optical tweezers and microscopy might benefit from the method presented.
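    The core loop of a generic Gerchberg-Saxton phase retrieval between the SLM (pupil) plane and the focal plane, for orientation; the publication's high-NA, polarization, and first-guess modifications are not reproduced, and a plain FFT stands in for the true focusing operator.

    ```python
    import numpy as np

    def gerchberg_saxton(pupil_amp, target_amp, n_iter=100):
        """pupil_amp: known amplitude in the SLM plane; target_amp: desired
        focal-plane amplitude (e.g., a doughnut). Returns the SLM phase."""
        phase = np.zeros_like(pupil_amp)
        for _ in range(n_iter):
            focal = np.fft.fft2(pupil_amp * np.exp(1j * phase))
            # Impose the desired focal intensity, keep the computed phase.
            focal = target_amp * np.exp(1j * np.angle(focal))
            pupil = np.fft.ifft2(focal)
            phase = np.angle(pupil)   # keep phase, re-impose pupil amplitude
        return phase                  # phase mask to display on the SLM
    ```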

  9. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    Science.gov (United States)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra, given the variability in both environmental mixture composition and PTFE baselines, remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of the PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification…
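    A bare-bones rendering of the protocol's central idea under stated assumptions: a smoothing spline is fitted to background-only subregions and evaluated under the analyte bands. The mask and smoothing factor are placeholders (the paper selects them from aerosol and blank performance metrics).

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def spline_baseline(wavenumber, absorbance, background_mask, s=1e-4):
        """wavenumber must be increasing; background_mask flags analyte-free
        subregions used to learn the PTFE baseline shape."""
        spl = UnivariateSpline(wavenumber[background_mask],
                               absorbance[background_mask], s=s)
        return spl(wavenumber)

    # corrected = absorbance - spline_baseline(wn, ab, mask)
    ```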

  10. Correction factors for clinical dosemeters used in large field dosimetry

    International Nuclear Information System (INIS)

    Campos, L.L.; Caldas, L.

    1989-08-01

    The determination of the absorbed dose in high-energy photon and electron beams is carried out by the user as a two-step procedure. First, the ionization chamber is calibrated at a reference quality at a standards laboratory; then the chamber is used to determine the absorbed dose in the user's beam. A number of conversion and correction factors have to be applied. Different sets of factors are needed depending on the physical quantity the calibration refers to, the calibration geometry and the chamber design. Another correction factor to be introduced for absorbed dose determination in large fields accounts for radiation effects on the stem, cable and sometimes connectors. A simple method is suggested for hospital physicists to follow during large-field dosimetry, in order to evaluate the radiation effects on cables and connectors and to determine correction factors for each system or geometry.

  11. Local field corrections in the lattice dynamics of chromium

    African Journals Online (AJOL)

    This work extends the inclusion of local field corrections in the calculation of phonon dispersion curves to the transition metal chromium (Cr3+), using the formalism of lattice dynamics based on the transition metal model potential approach in the adiabatic and harmonic approximations. The results obtained here have a…

  12. Mass corrections in string theory and lattice field theory

    International Nuclear Information System (INIS)

    Del Debbio, Luigi; Kerrane, Eoin; Russo, Rodolfo

    2009-01-01

    Kaluza-Klein (KK) compactifications of higher-dimensional Yang-Mills theories contain a number of 4-dimensional scalars corresponding to the internal components of the gauge field. While at tree level the scalar zero modes are massless, it is well known that quantum corrections make them massive. We compute these radiative corrections at 1 loop in an effective field theory framework, using the background field method and proper Schwinger-time regularization. In order to clarify the proper treatment of the sum over KK modes in the effective field theory approach, we consider the same problem in two different UV completions of Yang-Mills: string theory and lattice field theory. In both cases, when the compactification radius R is much bigger than the scale of the UV completion (R ≫ √α′, a), we recover a mass renormalization that is independent of the UV scale and agrees with the one derived in the effective field theory approach. These results support the idea that the value of the mass corrections is, in this regime, universal for any UV completion that respects locality and gauge invariance. The string analysis suggests that this property holds also at higher loops. The lattice analysis suggests that the mass of the adjoint scalars appearing in N=2, 4 super Yang-Mills is highly suppressed, even if the lattice regularization breaks all supersymmetries explicitly. This is due to an interplay between the higher-dimensional gauge invariance and the degeneracy of bosonic and fermionic degrees of freedom.

  13. Loop Corrections to Standard Model fields in inflation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Department of Physics, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080 (United States)]; Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (China)]; Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University, 20 Garden Street, Cambridge, MA 02138 (United States)]

    2016-08-08

    We calculate 1-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin-0, 1/2, and 1, with all renormalizable interactions during inflation. We pay special attention to the late-time divergences of loop corrections, and show that the divergences can be resummed into finite results in the late-time limit using dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.

  14. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    Science.gov (United States)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    …Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  15. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    International Nuclear Information System (INIS)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    …Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  16. Mean field with corrections in lattice gauge theory

    International Nuclear Information System (INIS)

    Flyvbjerg, H.; Zuber, J.B.; Lautrup, B.

    1981-12-01

    A systematic expansion of the path integral for lattice gauge theory is performed around the mean field solution. In this letter the authors present the results for the pure gauge groups Z(2), SU(2) and SO(3). The agreement with Monte Carlo calculations is excellent. For the discrete group the calculation is performed with and without gauge fixing, whereas for the continuous groups gauge fixing is mandatory. In the case of SU(2) the absence of a phase transition is correctly signalled by mean field theory.

  17. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States)]; McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States)]; Zhang, Gang [Ann and Robert H. Lurie Children's Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States)]; Choi, Grace [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States)]; Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)]

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected…

  18. Text recognition and correction for automated data collection by mobile devices

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
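    A toy two-stage pipeline in the spirit of the paper: OCR followed by knowledge-based fixes that exploit receipt structure. pytesseract is assumed as the OCR backend, and the two substitution rules are invented examples, not the published KBC rule set.

    ```python
    import re

    def read_receipt(image):
        import pytesseract                  # assumed OCR backend
        text = pytesseract.image_to_string(image)
        fixed = []
        for line in text.splitlines():
            # Domain knowledge: characters adjacent to digits inside price
            # tokens are digits, so fix common confusions O->0 and l/I->1.
            line = re.sub(r"(?<=\d)[Oo]", "0", line)
            line = re.sub(r"(?<=\d)[lI]", "1", line)
            fixed.append(line)
        return "\n".join(fixed)
    ```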

  19. Ideal flood field images for SPECT uniformity correction

    International Nuclear Information System (INIS)

    Oppenheim, B.E.; Appledorn, C.R.

    1984-01-01

    Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable, the equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted by 12 mm increments along the horizontal and vertical axes of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high-count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents camera plus collimator response to an almost perfectly uniform sheet source.

  20. Heel effect adaptive flat field correction of digital x-ray detectors

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yongjian [X-ray Products, Varian Medical Systems Inc., Liverpool, New York 13088 (United States); Wang, Jue [Department of Mathematics, Union College, Schenectady, New York 12308 (United States)

    2013-08-15

    Purpose: Anode heel effect renders large-scale background nonuniformities in digital radiographs. Conventional offset/gain calibration is performed at a single source-to-image distance (SID) and disregards the SID-dependent character of the heel effect. This results in a residual nonuniform background in the corrected radiographs when the SID settings for calibration and correction differ. In this work, the authors develop a robust and efficient computational method for digital x-ray detector gain correction adapted to the SID-variant heel effect, without resorting to physical filters, phantoms, complicated heel effect models, or multiple-SID calibration and interpolation. Methods: The authors present the Duo-SID projection correction method. In this approach, conventional offset/gain calibrations are performed only twice, at the minimum and maximum SIDs of the system in typical clinical use. A fast iterative separation algorithm is devised to extract the detector gain and basis heel patterns from the min/max SID calibrations. The resultant detector gain is independent of SID, while the basis heel patterns are parameterized by the min and max SID. The heel pattern at any SID is obtained from the min-SID basis heel pattern via projection imaging principles. The system gain desired at a specific acquisition SID is then constructed from the projected heel pattern and the detector gain map. Results: The method was evaluated on flat-field and anatomical phantom image corrections. It demonstrated promising improvements over interpolation and conventional gain calibration/correction methods, lowering their correction errors by approximately 70% and 80%, respectively. The separation algorithm was able to extract the detector gain and heel patterns with less than 2% error, and the Duo-SID-corrected images showed perceptually appealing uniform background across the detector. Conclusions: The Duo-SID correction method substantially improves on conventional offset/gain corrections…
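    A deliberately simplified sketch of the Duo-SID construction summarized above, assuming the heel pattern varies only along the anode-cathode (row) axis and that projecting it to another SID amounts to rescaling that coordinate about the central ray; the paper's iterative separation algorithm is not reproduced.

    ```python
    import numpy as np

    def duo_sid_gain(flat_min, flat_max, sid_min, sid):
        """flat_min/flat_max: offset-corrected flat fields at the min/max SID."""
        prof_min = flat_min.mean(axis=1)     # 1-D heel profile at sid_min
        prof_max = flat_max.mean(axis=1)
        # SID-independent detector gain: average the two heel-free estimates.
        gain = 0.5 * (flat_min / prof_min[:, None] +
                      flat_max / prof_max[:, None])
        rows = np.arange(flat_min.shape[0], dtype=float)
        c = rows.mean()                      # assume central ray at mid-array
        # Larger SID -> smaller angular span -> compressed heel coordinate.
        heel = np.interp(c + (rows - c) * (sid_min / sid), rows, prof_min)
        return gain * heel[:, None]
    ```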

  1. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    Science.gov (United States)

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.

  2. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    Science.gov (United States)

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registers the CT image to each dynamic PET frame, then re-reconstructs the PET frames with CT-based attenuation correction, and finally re-aligns all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) from 6 patients. Logan analysis with the cerebellum as the reference region was used to generate the regional distribution volume ratio (DVR) for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient studies, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P…)… Movement correction of dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers.

  3. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    Science.gov (United States)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high-spectral-resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm⁻¹ sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3×5 km per TES pixel) but lower spectral resolution (8 surface-sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the…

  5. Efficient Photometry In-Frame Calibration (EPIC) Gaussian Corrections for Automated Background Normalization of Rate-Tracked Satellite Imagery

    Science.gov (United States)

    Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.

    Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames, obtained to correct for charge-coupled device (CCD) dark (thermal) noise and for CCD quantum efficiency/optical path vignetting effects, respectively. It is necessary to account and calibrate for these effects so that the brightness of objects of interest (e.g., stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, annuli are measured around each detected object, whose contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may simply be summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson-distributed data closely resemble Gaussian data for means greater than about 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images by combining a deck of approximately 50-100 images through an independent per-pixel median.
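    The bias-subtraction and standard-deviation-division step described above can be sketched in a few lines of Python. This is an illustrative reading of the technique, not EPIC itself; the deck and frame names are assumptions.

```python
# Sketch of the Poisson-to-Gaussian normalization described above
# (illustrative, not EPIC itself). `deck` is an assumed (n_frames, ny, nx)
# stack of rate-tracked frames.
import numpy as np

def normalize_frame(frame, deck):
    background = np.median(deck, axis=0)        # per-pixel background estimate
    # For Poisson counts the variance equals the mean, so std ~ sqrt(mean);
    # the floor of 1 avoids division by zero in empty pixels.
    sigma = np.sqrt(np.maximum(background, 1.0))
    return (frame - background) / sigma         # background pixels -> ~N(0, 1)
```

    After this normalization, a single constant threshold (for example 5 sigma) yields an approximately constant false alarm rate across the field of view, which is the CFAR behaviour the abstract describes.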

  6. Correcting GRACE gravity fields for ocean tide effects

    DEFF Research Database (Denmark)

    Knudsen, Per; Andersen, Ole Baltazar

    2002-01-01

    [1] The GRACE mission will be launched in early 2002 and will map the Earth's gravity field and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long-period aliases obscure more … tide model if altimetry corrected for inverted barometer effects was used in its derivation. To study the temporal characteristics of the ocean tidal constituents when sampled by GRACE, approximate alias frequencies were derived assuming a sampling of half a sidereal day. Those results show …

  7. Error field and its correction strategy in tokamaks

    International Nuclear Information System (INIS)

    In, Yongkyoon

    2014-01-01

    While error field correction (EFC) aims to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application aims to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response to non-axisymmetric fields increases with beta, feedback-controlled EFC is the more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of EFC strategy is briefly discussed. (author)

  8. A new controller for the JET error field correction coils

    International Nuclear Information System (INIS)

    Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.

    2005-01-01

    This paper describes the hardware and software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework recently designed for real-time applications at JET, and replaces the old disruption feedback controller, increasing the flexibility and optimization of the system. The use of conventional hardware has required a particular effort in designing the software in order to meet the specifications. The peculiarities of the new controller are highlighted, such as its very useful trigger logic interface, which allows, in principle, various error field experiment scenarios to be explored.

  9. TLS FIELD DATA BASED INTENSITY CORRECTION FOR FOREST ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    J. Heinzel

    2016-06-01

    Full Text Available Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three-dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The usefulness of the signal intensity stems from the wavelength of the laser, which lies in the near infrared (NIR) for most scanners; the NIR is highly indicative of various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner-specific factors. Since details about system-internal alteration of the signal are often unknown to the user, model-driven approaches are impractical. On the other hand, existing data-driven calibration procedures require laborious acquisition of separate reference datasets or areas of homogeneous reflection characteristics from the field data. In order to fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on variation over range and on sensor-specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time series detrending is presented, with the only pre-condition being a relatively equal distribution of forest objects and materials over range. Our test data cover 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range- and scanner-based alterations of the intensity.
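    A detrending correction of the kind sketched in the abstract might look as follows in Python. This is an illustrative sketch under the paper's stated pre-condition (objects roughly equally distributed over range), not the authors' code.

```python
# Illustrative range-detrending in the spirit of the described correction:
# fit a smooth intensity-vs-range trend from the field data itself and
# normalize it out (relative, not absolute, calibration).
import numpy as np

def detrend_intensity(rng, intensity, deg=3):
    """rng, intensity: 1D arrays of per-point range and recorded intensity."""
    trend = np.polyval(np.polyfit(rng, intensity, deg), rng)
    return intensity * (trend.mean() / trend)
```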

  10. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
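    A minimal Python sketch of the idea of simultaneous phase and baseline correction follows. The assumptions are a zero/first-order phase model, a Whittaker-smoother baseline, and a penalty on negative intensities; the paper's actual Pareto objective and zero-region detection are more elaborate.

```python
# Hedged sketch of joint phase/baseline correction, not the paper's method.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.optimize import minimize

def whittaker(y, lam=1e5):
    """Whittaker smoother: minimizes ||y - z||^2 + lam * ||D2 z||^2."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = (sparse.eye(n) + lam * (D.T @ D)).tocsc()
    return spsolve(A, y)

def residual(spec, phi0, phi1):
    """Apply zero/first-order phase, then remove a smooth baseline."""
    w = np.linspace(0.0, 1.0, len(spec))
    phased = (spec * np.exp(1j * (phi0 + phi1 * w))).real
    return phased - whittaker(phased)

def objective(params, spec):
    r = residual(spec, *params)
    # strong penalty on negative intensity, weaker regularization on area
    return np.sum(np.minimum(r, 0.0) ** 2) + 1e-3 * np.sum(np.abs(r))

# fit = minimize(objective, x0=[0.0, 0.0], args=(raw_spectrum,),
#                method="Nelder-Mead")   # raw_spectrum: complex FT of the FID
```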

  11. Software-controlled, highly automated intrafraction prostate motion correction with intrafraction stereographic targeting: System description and clinical results

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C. J. de; Rajan, Vinayakrishnan; Dirkx, Maarten L. P.; Os, Marjolein J. H. van; Incrocci, Luca; Heijmen, Ben J. M.

    2012-01-01

    Purpose: A new system for software-controlled, highly automated correction of intrafraction prostate motion, 'intrafraction stereographic targeting' (iSGT), is described and evaluated. Methods: At our institute, daily prostate positioning is routinely performed at the start of treatment using stereographic targeting (SGT). iSGT was implemented by extension of the SGT software to facilitate fast and accurate intrafraction motion corrections with minimal user interaction. iSGT entails megavoltage (MV) image acquisitions with the first segment of selected IMRT beams, automatic registration of implanted markers, followed by remote couch repositioning to correct for intrafraction motion above a predefined threshold, prior to delivery of the remaining segments. For a group of 120 patients, iSGT with corrections for two nearly lateral beams was evaluated in terms of workload and impact on effective intrafraction displacements in the sagittal plane. Results: With an action level of 2 mm, iSGT reduced the effective SDs of systematic (Σ) and random (σ) displacements relative to the planning CT, measured directly after initial SGT setup correction, to below 0.7 mm, requiring corrections in 82.4% of the fractions. Because iSGT is highly automated, the extra time added by iSGT is less than 30 s if a correction is required. Conclusions: Without increasing imaging dose, iSGT successfully reduces intrafraction prostate motion with minimal workload and increase in fraction time. An action level of 2 mm is recommended.

  12. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    Full Text Available Retrieving the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters inevitably involves a complex logarithmic function. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the EMMs concerned, and hence the continuity of the effective optical properties of lossy metamaterials is ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
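    The branch-continuity problem the abstract describes is closely related to phase unwrapping. The following Python sketch is illustrative only (it is not the APC algorithm itself): it keeps the imaginary part of a complex logarithm continuous along a frequency sweep.

```python
# Sketch of branch-continuity tracking in the spirit of APC: unwrap the
# phase of the complex quantity whose logarithm enters the retrieval so
# that frequency sweeps do not pick up spurious 2*pi jumps near resonance.
import numpy as np

def continuous_log(z):
    """Complex log with a branch-continuous imaginary part along the sweep."""
    return np.log(np.abs(z)) + 1j * np.unwrap(np.angle(z))
```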

  13. Automation of NLO QCD and EW corrections with Sherpa and Recola

    Energy Technology Data Exchange (ETDEWEB)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu [Universitaet Wuerzburg, Institut fuer Theoretische Physik und Astrophysik, Wuerzburg (Germany); Braeuer, Stephan; Schumann, Steffen [Georg-August Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Thompson, Jennifer M. [Universitaet Heidelberg, Institut fuer Theoretische Physik, Heidelberg (Germany)

    2017-07-15

    This publication presents the combination of the one-loop matrix-element generator Recola with the multipurpose Monte Carlo program Sherpa. Since both programs are highly automated, the resulting Sherpa+Recola framework allows for the computation of - in principle - any Standard Model process at both NLO QCD and EW accuracy. To illustrate this, three representative LHC processes have been computed at NLO QCD and EW: vector-boson production in association with jets, off-shell Z-boson pair production, and the production of a top-quark pair in association with a Higgs boson. In addition to fixed-order computations, when considering QCD corrections, all functionalities of Sherpa, i.e. particle decays, QCD parton showers, hadronisation, underlying events, etc. can be used in combination with Recola. This is demonstrated by the merging and matching of one-loop QCD matrix elements for Drell-Yan production in association with jets to the parton shower. The implementation is fully automatised, thus making it a perfect tool for both experimentalists and theorists who want to use state-of-the-art predictions at NLO accuracy. (orig.)

  14. Voxel-based morphometry and automated lobar volumetry: The trade-off between spatial scale and statistical correction

    Science.gov (United States)

    Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.

    2011-01-01

    Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660

  15. Automated fetal brain segmentation from 2D MRI slices for motion correction.

    Science.gov (United States)

    Keraudren, K; Kuklisova-Murgasova, M; Kyriakopoulou, V; Malamateniou, C; Rutherford, M A; Kainz, B; Hajnal, J V; Rueckert, D

    2014-11-01

    Motion correction is a key element for imaging the fetal brain in utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is frequently performed slice-by-slice using single-shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction (SVR) methods. For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data are acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion-corrected volume of a quality relevant for clinical diagnosis, thus removing the need to manually delineate the contours of the brain before motion correction. Our method automatically generated as a side product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing.
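    For illustration, the MSER detection step named above might look as follows in OpenCV, applied to a single 2D slice. This is a hedged sketch, not the authors' pipeline, which additionally classifies regions with a SIFT bag-of-words model and refines them with patch propagation and a CRF.

```python
# Hedged OpenCV sketch of slice-wise MSER detection (illustrative only).
import cv2
import numpy as np

def detect_mser(slice_2d):
    """Return MSER candidate regions and bounding boxes for one 2D slice."""
    img = cv2.normalize(slice_2d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(img)   # candidate fetal-head regions
    return regions, bboxes
```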

  16. An automated portal verification system for the tangential breast portal field

    International Nuclear Information System (INIS)

    Yin, F.-F.; Lai, W.; Chen, C. W.; Nelson, D. F.

    1995-01-01

    Purpose/Objective: In order to ensure that the treatment is delivered as planned, a portal image is acquired on the accelerator and compared to the reference image. At present, this comparison is performed by radiation oncologists based on manually identified features, which is both time-consuming and potentially error-prone. With the introduction of various electronic portal imaging devices (EPIDs), real-time patient positioning correction is becoming clinically feasible, replacing time-delayed analysis using films. However, this procedure requires the presence of radiation oncologists during patient treatment, which is not cost-effective and often not practical. Therefore, the efficiency and quality of radiation therapy could be substantially improved if this procedure were automated. The purpose of this study is to develop a fully computerized verification system for the radiation therapy of breast cancer, for which a similar treatment setup is generally employed. Materials/Methods: The automated verification system involves image acquisition, image feature extraction, feature correlation between reference and portal images, and quantitative evaluation of patient setup. In this study, a matrix liquid ion-chamber EPID directly attached to a Varian CL2100C accelerator was used to acquire digital portal images. For effective use of computation memory, the 12-bit gray levels in the original portal images were quantized to a range of 8-bit gray levels. A typical breast portal image includes three important components: breast and lung tissues in the treatment field, air space within the treatment field, and the non-irradiated region. A hierarchical region processing technique was developed to separate these regions sequentially. The inherent hierarchical features were formulated based on the different radiation attenuation of the different regions as: treatment field edge -- breast skin line -- chest wall. Initially, a combination of a Canny edge detector and a constrained …
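    As an illustration of the first step only, a Canny edge map of a portal image can be computed with scikit-image; the constrained stage that follows in the original work is not reproduced here.

```python
# Illustrative Canny edge extraction for a portal image (sketch only,
# not the original implementation).
from skimage import feature

def field_edges(portal_image, sigma=2.0):
    """portal_image: 2D float array; returns a boolean edge map."""
    return feature.canny(portal_image, sigma=sigma)
```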

  17. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  18. Automated Critical Peak Pricing Field Tests: Program Descriptionand Results

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David; Motegi, Naoya; Kiliccote, Sila; Xu, Peng

    2006-04-06

    California utilities have been exploring the use of critical peak prices (CPP) to help reduce needle peaks in customer end-use loads. CPP is a form of price-responsive demand response (DR). Recent experience has shown that customers have limited knowledge of how to operate their facilities in order to reduce their electricity costs under CPP (Quantum 2004). While the lack of knowledge about how to develop and implement DR control strategies is a barrier to participation in DR programs like CPP, another barrier is the lack of automation of DR systems. During 2003 and 2004, the PIER Demand Response Research Center (DRRC) conducted a series of tests of fully automated electric demand response (Auto-DR) at 18 facilities. Overall, the average of the site-specific average coincident demand reductions was 8% across a variety of building types and facilities. Many electricity customers have suggested that automation will help them institutionalize their electric demand savings and improve their overall response and DR repeatability. This report focuses on and discusses the specific results of the Automated Critical Peak Pricing (Auto-CPP, a specific type of Auto-DR) tests that took place during 2005, which build on the automated demand response (Auto-DR) research conducted through PIER and the DRRC in 2003 and 2004. The long-term goal of this project is to understand the technical opportunities of automating demand response and to remove technical and market impediments to large-scale implementation of automated demand response (Auto-DR) in buildings and industry. A second goal of this research is to understand and identify best practices for DR strategies and opportunities. The specific objectives of the Automated Critical Peak Pricing test were as follows: (1) Demonstrate how an automated notification system for critical peak pricing can be used in large commercial facilities for demand response (DR). (2) Evaluate the effectiveness of such a system. (3) Determine how customers respond to automated CPP events.

  19. Conceptual Structural-Logic Diagram for Automating an Expert Study on the Correctness of Corporate Profit Tax Calculation

    Directory of Open Access Journals (Sweden)

    Andrey N. Ishchenko

    2014-01-01

    Full Text Available This article considers the possibility of automating an expert study of the correctness of corporate profit tax calculations. The problems of formalizing expert research in this field are considered, and the structure of the expert conclusion is specified. The author proposes a conceptual structural-logic diagram for automating expert research in this area.

  20. The magnetic field for the ZEUS central detector - analysis and correction of the field measurement

    International Nuclear Information System (INIS)

    Mengel, S.

    1992-06-01

    The magnetic field in the central tracking region of the ZEUS detector - a facility to investigate highly energetic electron-proton collisions at the HERA collider at DESY, Hamburg - is generated by a superconducting coil and reaches 18 kG (1.8 T). Some of the tracking devices, particularly the drift chambers in the proton forward and rear directions (FTD1-3 and RTD), are not fully contained within the coil and are therefore situated in a highly inhomogeneous magnetic field: the radial component B_r reaches up to 6.6 kG, and maximum gradients of 300 G/cm are found for ∂B_r/∂r. Evaluating the space-drift-time relation necessitates a detailed knowledge of the magnetic field. To reach this goal we analysed the field measurements and corrected them for systematic errors. The corrected data were compared with the field calculations (TOSCA maps). Measurements and calculations were checked for consistency with Maxwell's equations. The accuracy reached is better than 100 G throughout the forward and central drift chambers (FTD1-3, CTD) and better than 150 G in the RTD. (orig.) [de]
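    A Maxwell-consistency check of the kind mentioned above can be sketched numerically: for a physical field, the divergence of B should vanish to within measurement error. This is an illustrative sketch, not the ZEUS analysis code, and assumes the field is sampled on a regular grid.

```python
# Numerical divergence of a measured field map; values far from zero flag
# inconsistent (e.g. systematically biased) measurements.
import numpy as np

def divergence(Bx, By, Bz, dx, dy, dz):
    """Bx, By, Bz: 3D arrays on a regular grid with spacings dx, dy, dz."""
    return (np.gradient(Bx, dx, axis=0)
            + np.gradient(By, dy, axis=1)
            + np.gradient(Bz, dz, axis=2))
```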

  1. Automated Critical Peak Pricing Field Tests: 2006 Pilot Program Description and Results

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David; Motegi, Naoya; Kiliccote, Sila

    2007-06-19

    During 2006, Lawrence Berkeley National Laboratory (LBNL) and the Demand Response Research Center (DRRC) performed a technology evaluation for the Pacific Gas and Electric Company (PG&E) Emerging Technologies Programs. This report summarizes the design, deployment, and results from the 2006 Automated Critical Peak Pricing Program (Auto-CPP). The program was designed to evaluate the feasibility of deploying automation systems that allow customers to participate in critical peak pricing (CPP) with a fully automated response. The 2006 program was in operation during the entire six-month CPP period from May through October. The methodology for this field study included site recruitment, control strategy development, automation system deployment, and evaluation of sites' participation in actual CPP events through the summer of 2006. LBNL recruited sites in PG&E's territory in northern California through contacts from PG&E account managers, conferences, and industry meetings. Each site contact signed a memorandum of understanding with LBNL that outlined the activities needed to participate in the Auto-CPP program. Each facility worked with LBNL to select and implement control strategies for demand response and developed automation system designs based on existing Internet connectivity and building control systems. Once the automation systems were installed, LBNL conducted communications tests to ensure that the Demand Response Automation Server (DRAS) correctly provided and logged the continuous communications of the CPP signals with the energy management and control system (EMCS) for each site. LBNL also observed and evaluated demand response (DR) shed strategies to ensure proper commissioning of controls. The communication system allowed sites to receive day-ahead as well as day-of signals for pre-cooling, a DR strategy used at a few sites. Measurement of demand response was conducted using two different baseline models for estimating peak load savings.
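    A demand baseline of the general kind mentioned above might be computed as follows. This pandas sketch is hypothetical: the function name and the 10-business-day window are assumptions, not the report's actual baseline models.

```python
# Hypothetical baseline sketch: average load profile of the prior business
# days, used to estimate event-day peak load savings.
import pandas as pd

def average_weekday_baseline(load, event_day, n_days=10):
    """load: interval demand (kW) as a Series indexed by a DatetimeIndex."""
    prior_days = pd.bdate_range(end=event_day, periods=n_days + 1)[:-1]
    prior = load[load.index.normalize().isin(prior_days)]
    return prior.groupby(prior.index.time).mean()  # profile by time of day

# Estimated shed per interval = baseline profile - observed event-day load.
```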

  2. Correction of the closed orbit and vertical dispersion and the tuning and field correction system in ISABELLE

    International Nuclear Information System (INIS)

    Parzen, G.

    1979-01-01

    Each ring in ISABELLE will have 10 separately powered systematic field correction coils to make required corrections that are the same in corresponding magnets around the ring. These corrections include changing the ν-value, shaping the working line in ν-space, and correcting field errors due to iron saturation effects, the conductor arrangements, the construction of the coil ends, diamagnetic effects in the superconductor, and rate-dependent induced currents. The twelve insertion quadrupoles in the insertion surrounding each crossing point will each have a quadrupole trim coil. The closed orbit will be controlled by a system of 84 horizontal dipole coils and 90 vertical dipole coils in each ring, each coil being separately powered. This system of dipole coils will also be used to correct the vertical dispersion at the crossing points. Two families of skew quadrupoles per ring will be provided for correction of the coupling between the horizontal and vertical motions. In total, there will be 258 separately powered correction coils in each ring.

  3. Top-quark physics as a prime application of automated higher-order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Weiss, Christian

    2017-07-15

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the Standard Model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider make it an excellent object of study. Yet, its lifetime is very short, and only high-multiplicity final states of its decay products are detected in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b and e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b H. These calculations have not been performed before. Moreover, the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum is presented for the process e⁺e⁻ → bW⁺ anti-b W⁻.

  4. Top-quark physics as a prime application of automated higher-order corrections

    International Nuclear Information System (INIS)

    Weiss, Christian

    2017-07-01

    Experiments in high energy physics have reached an unprecedented accuracy. This accuracy has to be matched by the theoretical predictions used to search for new physics. For this purpose, sophisticated computer programs are necessary, both for the calculation of matrix elements (tree-level and loop) and in the field of Monte Carlo event generation. The hadronic initial state at the LHC poses significant challenges for measurement and simulation. A future lepton collider, like the proposed International Linear Collider (ILC) in Japan or the Compact Linear Collider (CLIC) at CERN, would have a much cleaner initial state. Such a machine would achieve an even higher precision. In the field of lepton colliders, the Whizard event generator has been established as the program of choice due to its unique treatment of beam structure functions and initial-state radiation. In this thesis, we present the extension of Whizard to next-to-leading order accuracy, thus augmenting it to the state of the art. We use the Frixione-Kunszt-Signer (FKS) subtraction scheme to subtract divergences, of which a detailed outline is given. This new functionality is used to perform in-depth studies of the top quark. Being the heaviest particle in the Standard Model, its strong connection to the Higgs sector as well as its abundant production at a future lepton collider make it an excellent object of study. Yet, its lifetime is very short, and only high-multiplicity final states of its decay products are detected in the detector. This thesis investigates the influence of NLO QCD corrections to the fully off-shell top production processes e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b and e⁺e⁻ → μ⁺ν_μ e⁻ anti-ν_e b anti-b H. These calculations have not been performed before. Moreover, the incorporation of NLO QCD corrections into the resummation of the top production threshold and its matching to the relativistic continuum is presented for the process e⁺e⁻ → bW⁺ anti-b W⁻. All results are obtained with the extended Whizard framework.

  5. Collective-field-corrected strong field approximation for laser-irradiated metal clusters

    International Nuclear Information System (INIS)

    Keil, Th; Bauer, D

    2014-01-01

    The strong field approximation (SFA) formulated in terms of so-called ‘quantum orbits’ led to much insight into intense-laser driven ionization dynamics. In plain SFA, the emitted electron is treated as a free electron in the laser field alone. However, with improving experimental techniques and more advanced numerical simulations, it becomes more and more obvious that the plain SFA misses interesting effects even on a qualitative level. Examples are holographic side lobes, the low-energy structure, radial patterns in photoelectron spectra at low kinetic energies and strongly rotated angular distributions. For this reason, increasing efforts have been recently devoted to Coulomb corrections of the SFA. In the current paper, we follow a similar line but consider ionization of metal clusters. It is known that photoelectrons from clusters can be much more energetic than those emitted from atoms or small molecules, especially if the Mie resonance of the expanding cluster is evoked. We develop a SFA that takes the collective field inside the cluster via the simple rigid-sphere model into account. Our approach is based on field-corrected quantum orbits so that the acceleration process (or any other spectral feature of interest) can be investigated in detail. (paper)

  6. Magnetic field measurement and correction of VECC K500 superconducting cyclotron

    International Nuclear Information System (INIS)

    Dey, M.K.; Debnath, J.; Bhunia, U.; Pradhan, J.; Rashid, H.; Paul, S.; Dutta, A.; Naser, Z.A.; Singh, V.; Pal, G.; Nandi, C.; Dasgupta, S.; Bhattacharya, S.; Pal, S.; Roy, A.; Bhattacharya, T.; Bhole, R.B.; Bhale, D.; Chatterjee, M.; Prasad, R.; Nabhiraj, P.Y.; Hazra, D.P.; Mallik, C.; Bhandari, R.K.

    2006-01-01

    The VECC K500 superconducting cyclotron magnet has been commissioned, and the magnetic field measurement and correction programme was successfully completed in March 2006. Here we report the analysis of the measured field data and the subsequent correction of the magnet to improve the field quality. (author)

  7. Automation surprise : results of a field survey of Dutch pilots

    NARCIS (Netherlands)

    de Boer, R.J.; Hurts, Karel

    2017-01-01

    Automation surprise (AS) has often been associated with aviation safety incidents. Although numerous laboratory studies have been conducted, few data are available from routine flight operations. A survey among a representative sample of 200 Dutch airline pilots was used to determine the prevalence of automation surprise in routine flight operations.

  8. Models of Automation Surprise: Results of a Field Survey in Aviation

    Directory of Open Access Journals (Sweden)

    Robert De Boer

    2017-09-01

    Full Text Available Automation surprises in aviation continue to be a significant safety concern, and the community's search for effective strategies to mitigate them is ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration with automation. In this paper, we report the results of a field study that empirically compared and contrasted two models of automation surprise: a normative individual-cognition model and a sensemaking model based on distributed cognition. Our data proved a good fit for the sensemaking model. This finding is relevant for aviation safety, since our understanding of the cognitive processes that govern human interaction with automation drives what we need to do to reduce the frequency of automation-induced events.

  9. Short wavelength automated perimetry can detect visual field changes in diabetic patients without retinopathy

    Directory of Open Access Journals (Sweden)

    Othman Ali Zico

    2014-01-01

    Full Text Available Purpose: To compare short-wavelength automated perimetry (SWAP) with standard automated perimetry (SAP) for early detection of diabetic retinopathy (DR). Materials and Methods: A total of 40 diabetic patients, divided into group I without DR (20 patients = 40 eyes) and group II with mild non-proliferative DR (20 patients = 40 eyes), were included. They were tested with the central 24-2 threshold test using both SWAP and SAP to compare sensitivity values and local visual field indices. Twenty healthy age- and gender-matched subjects were assessed as a control group. Results: The control group showed no differences between SWAP and SAP regarding mean deviation (MD), corrected pattern standard deviation (CPSD) or short-term fluctuations (SF). In group I, MD showed significantly greater depression in SWAP (−4.44 ± 2.02 dB) compared to SAP (−0.96 ± 1.81 dB) (P = 0.000002), whereas CPSD and SF did not differ between SWAP and SAP. In group II, MD and SF differed significantly between SWAP (−5.75 ± 3.11 dB and 2.0 ± 0.95) and SAP (−3.91 ± 2.87 dB and 2.86 ± 1.23) (P = 0.01 and 0.006, respectively); CPSD did not differ between SWAP and SAP. The SWAP technique was significantly more sensitive than SAP in patients without retinopathy, but no difference existed between the two techniques in patients with non-proliferative DR. Conclusion: SWAP has a higher yield and efficacy in picking up abnormal findings in diabetic patients without overt retinopathy than in patients with clinical retinopathy.

  10. Field correction for a one meter long permanent-magnet wiggler

    International Nuclear Information System (INIS)

    Fortgang, C.M.

    1992-01-01

    Field errors in wigglers are usually measured and corrected on-axis only, thus ignoring field error gradients. We find that gradient scale lengths are of the same order as electron beam size and therefore can be important. We report measurements of wiggler field errors in three dimensions and expansion of these errors out to first order (including two dipole and two quadrupole components). Conventional techniques for correcting on-axis errors (order zero) create new off-axis (first order) errors. We present a new approach to correcting wiggler fields out to first order. By correcting quadrupole errors in addition to the usual dipole correction, we minimize growth in electron beam size. Correction to first order yields better overlap between the electron and optical beams and should improve laser gain. (Author) 2 refs., 5 figs

  11. Free-field correction values for Interacoustics DD 45 supra-aural audiometric earphones

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2010-01-01

    This paper reports free-field correction values for the Interacoustics DD 45 audiometric earphone. The free-field correction values for earphones provide loudness-based equivalence to loudspeaker presentation. Correction values are especially used for the calibration of audiometric equipment … for the acoustic coupler IEC 60318-3 (NBS 9-A) and for the ear simulator IEC 60318-1. The results are in good agreement with the results of another independent investigation. The reported free-field correction values may be used as part of the basis for future standardization of the DD 45 earphone.

  12. Assessment of automated disease detection in diabetic retinopathy screening using two-field photography.

    Science.gov (United States)

    Goatman, Keith; Charnley, Amanda; Webster, Laura; Nussey, Stephen

    2011-01-01

    To assess the performance of automated disease detection in diabetic retinopathy screening using two-field mydriatic photography. Images from 8,271 sequential patient screening episodes from a South London diabetic retinopathy screening service were processed by the Medalytix iGrading™ automated grading system. For each screening episode, macula-centred and disc-centred images of both eyes were acquired and independently graded according to the English national grading scheme. Where discrepancies were found between the automated result and the original manual grade, internal and external arbitration was used to determine the final study grades. Two versions of the software were used: one that detected microaneurysms alone, and one that detected blot haemorrhages and exudates in addition to microaneurysms. Results for each version were calculated once using both fields and once using the macula-centred field alone. Of the 8,271 episodes, 346 (4.2%) were considered unassessable. Referable disease was detected in 587 episodes (7.1%). The sensitivity of the automated system for detecting unassessable images ranged from 97.4% to 99.1% depending on configuration. The sensitivity of the automated system for referable episodes ranged from 98.3% to 99.3%. All episodes that included proliferative or pre-proliferative retinopathy were detected by the automated system regardless of configuration (192/192, 95% confidence interval 98.0% to 100%). If implemented as the first step in grading, the automated system would have reduced the manual grading effort by between 2,183 and 3,147 patient episodes (26.4% to 38.1%). Automated grading can safely reduce the workload of manual grading using two-field, mydriatic photography in a routine screening service.

  13. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    OpenAIRE

    Bottaro, Márcio; Nagy, Balázs Vince; Soares, Fernanda Cristina Salvador; Rosendo, Danilo Cabral

    2017-01-01

    Abstract Introduction: To analyze edge detection and optical contrast calculation of light field indicators used in X-ray via automated- and observer-based methods, and to compare them with current standard approaches, which do not give an exact definition for light field edge determination. Methods: An automated light sensor array was used to measure the penumbra zone of the edge in standard X-ray equipment, while trained and naïve human observers were asked to mark the light field edge according to their own determination.

  14. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    Directory of Open Access Journals (Sweden)

    Márcio Bottaro

    Full Text Available Abstract Introduction To analyze edge detection and optical contrast calculation of light field-indicators used in X-ray via automated- and observer-based methods, and comparison with current standard approaches, which do not give exact definition for light field edge determination. Methods Automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naïve human observers were asked to mark the light field edge according to their own determination. Different interpretations of the contrast were then calculated and compared. Results In contrast to automated measurements of edge definition and detection, measurements by human observers showed large inter-observer variation independent of their training with X-ray equipment. Different contrast calculations considering the different edge definitions gave very different contrast values. Conclusion As the main conclusion, we propose a more exact edge definition of the X-ray light field, corresponding well to the average human observer’s edge determination. The new edge definition method with automated systems would reduce human variability in edge determination. Such errors could potentially affect the approval of X-ray equipment, and also increase the radiation dose. The automated measurement based on human observers’ edge definition and the corresponding contrast calculation may lead to a more precise light field calibration, which enables reduced irradiation doses on radiology patients.

  15. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    International Nuclear Information System (INIS)

    Bottaro, Marcio; Nagy, Balazs Vince; Soares, Fernanda Cristina Salvador; Rosendo, Danilo Cabral

    2017-01-01

    Introduction: To analyze edge detection and optical contrast calculation of light field-indicators used in X-ray via automated- and observer-based methods, and comparison with current standard approaches, which do not give exact definition for light field edge determination. Methods: Automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naive human observers were asked to mark the light field edge according to their own determination. Different interpretations of the contrast were then calculated and compared. Results: In contrast to automated measurements of edge definition and detection, measurements by human observers showed large inter-observer variation independent of their training with X-ray equipment. Different contrast calculations considering the different edge definitions gave very different contrast values. Conclusion: As the main conclusion, we propose a more exact edge definition of the X-ray light field, corresponding well to the average human observer's edge determination. The new edge definition method with automated systems would reduce human variability in edge determination. Such errors could potentially affect the approval of X-ray equipment, and also increase the radiation dose. The automated measurement based on human observers’ edge definition and the corresponding contrast calculation may lead to a more precise light field calibration, which enables reduced irradiation doses on radiology patients. (author)

  16. Automated and observer based light field indicator edge evaluation in diagnostic X-ray equipment

    Energy Technology Data Exchange (ETDEWEB)

    Bottaro, Marcio; Nagy, Balazs Vince; Soares, Fernanda Cristina Salvador; Rosendo, Danilo Cabral, E-mail: marcio@iee.usp.br [Universidade de Sao Paulo (USP), SP (Brazil); Optics and Engineering Informatics, Budapest University of Technology and Economics, Budapest (Hungary)

    2017-04-15

    Introduction: To analyze edge detection and optical contrast calculation of light field-indicators used in X-ray via automated- and observer-based methods, and comparison with current standard approaches, which do not give exact definition for light field edge determination. Methods: Automated light sensor array was used to measure the penumbra zone of the edge in the standard X-ray equipment, while trained and naive human observers were asked to mark the light field edge according to their own determination. Different interpretations of the contrast were then calculated and compared. Results: In contrast to automated measurements of edge definition and detection, measurements by human observers showed large inter-observer variation independent of their training with X-ray equipment. Different contrast calculations considering the different edge definitions gave very different contrast values. Conclusion: As the main conclusion, we propose a more exact edge definition of the X-ray light field, corresponding well to the average human observer's edge determination. The new edge definition method with automated systems would reduce human variability in edge determination. Such errors could potentially affect the approval of X-ray equipment, and also increase the radiation dose. The automated measurement based on human observers’ edge definition and the corresponding contrast calculation may lead to a more precise light field calibration, which enables reduced irradiation doses on radiology patients. (author)

  17. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Lynch, Vickie E.; Borreguero, Jose M. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Bhowmik, Debsindhu [Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Ganesh, Panchapakesan; Sumpter, Bobby G. [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Proffen, Thomas E. [Neutron Data Analysis & Visualization Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Goswami, Monojoy, E-mail: goswamim@ornl.gov [Center for Nanophase Material Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States); Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 (United States)

    2017-07-01

    Highlights: • An automated workflow to optimize force-field parameters. • The workflow was used to optimize force-field parameters for a system containing nanodiamonds and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D₂O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of the nanodiamond than without it. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
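    The optimization loop described above can be shown schematically. In the sketch below, run_md_and_compute_sqw is a hypothetical placeholder for the expensive MD-plus-analysis step; it is not part of the ORNL workflow's actual API.

```python
# Schematic FF-parameter optimization against QENS data (illustrative).
import numpy as np
from scipy.optimize import minimize

def chi_square(params, q, omega, s_exp, s_err):
    # placeholder helper: run MD with `params` and compute S(Q, omega)
    s_sim = run_md_and_compute_sqw(params, q, omega)
    return np.sum(((s_sim - s_exp) / s_err) ** 2)

# best = minimize(chi_square, x0=initial_params,
#                 args=(q, omega, s_exp, s_err), method="Nelder-Mead")
```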

  18. Diffusion in the kicked quantum rotator by random corrections to a linear and sine field

    International Nuclear Information System (INIS)

    Hilke, M.; Flores, J.C.

    1992-01-01

    We discuss the diffusion in momentum space of the kicked quantum rotator, introducing random corrections to a linear and a sine external field. For the linear field we obtain linear diffusion behavior identical to the case of zero average in the external field. For the sine field, however, accelerator modes with quadratic diffusion are found for particular values of the kicking period. (orig.)

  19. Stability of Gradient Field Corrections for Quantitative Diffusion MRI

    OpenAIRE

    Rogers, Baxter P.; Blaber, Justin; Welch, E. Brian; Ding, Zhaohua; Anderson, Adam W.; Landman, Bennett A.

    2017-01-01

    In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, these nonlinearities can be characterized and corrected.
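    For illustration, the standard first-order form of this correction can be written in a few lines. The local nonlinearity tensor L(r) is assumed to be already available (for example from a solid harmonic fit of the calibration field maps); this is a sketch, not the paper's code.

```python
# If the effective gradient at position r is L(r) @ g, the b-matrix (which is
# proportional to g g^T) transforms as L b L^T.
import numpy as np

def corrected_bmatrix(b_nominal, L):
    """b_nominal: 3x3 nominal b-matrix; L: 3x3 nonlinearity tensor at a voxel."""
    return L @ b_nominal @ L.T
```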

  20. Automated mass correction and data interpretation for protein open-access liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Wagner, Craig D; Hall, John T; White, Wendy L; Miller, Luke A D; Williams, Jon D

    2007-02-01

    Characterization of recombinant protein purification fractions and final products by liquid chromatography-mass spectrometry (LC/MS) is requested more frequently each year. A protein open-access (OA) LC/MS system was developed in our laboratory to meet this demand. This paper compares the system that we originally implemented in our facilities in 2003 to the one now in use, and discusses in more detail recent enhancements that have improved its robustness, reliability, and data reporting capabilities. The system utilizes instruments equipped with reversed-phase chromatography and an orthogonal accelerated time-of-flight mass spectrometer fitted with an electrospray source. Sample analysis requests are submitted using a simple form on a web-enabled laboratory information management system (LIMS). This distributed form is accessible from any intranet-connected company desktop computer. Automated data acquisition and processing are performed using a combination of in-house (OA-Self Service, OA-Monitor, and OA-Analysis Engine) and vendor-supplied programs (AutoLynx and OpenLynx) located on acquisition computers and off-line processing workstations. Analysis results are then reported via the same web-based LIMS. Also presented are solutions to problems not addressed by commercially available, small-molecule OA-LC/MS systems. These include automated transformation of mass-to-charge (m/z) spectra to mass spectra and automated data interpretation that considers minor variants of the protein sequence, such as common post-translational modifications (PTMs). Currently, our protein OA-LC/MS platform runs on five LC/MS instruments located at three separate GlaxoSmithKline R&D sites in the US and UK. To date, more than 8,000 protein OA-LC/MS samples have been analyzed. With these user-friendly and highly automated OA systems in place, mass spectrometry plays a key role in assessing the quality of recombinant proteins, either produced at our facilities or bought from external sources.
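    The m/z-to-mass transformation mentioned above rests on a textbook relation; the sketch below illustrates it and is not the OA-Analysis Engine itself.

```python
# Each peak of an electrospray charge-state series at charge z maps to the
# same neutral mass: M = z * (m/z - proton mass).
PROTON = 1.007276  # Da

def neutral_mass(mz, z):
    return z * (mz - PROTON)

# e.g. neutral_mass(893.1, 16) and neutral_mass(952.6, 15) agree to within
# about 1 Da, consistent with a single ~14.27 kDa protein.
```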

  1. In vivo robotics: the automation of neuroscience and other intact-system biological fields.

    Science.gov (United States)

    Kodandaramaiah, Suhasa B; Boyden, Edward S; Forest, Craig R

    2013-12-01

    Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to influence neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience and present a concrete example with our recent automation of in vivo whole-cell patch clamp electrophysiology of neurons in the living mouse brain. © 2013 New York Academy of Sciences.

  2. Feasibility study for the computerized automation of the Annapolis Field Office of EPA region III

    International Nuclear Information System (INIS)

    Ames, H.S.; Barton, G.W. Jr.; Bystroff, R.I.; Crawford, R.W.; Kray, A.M.; Maples, M.D.

    1976-08-01

    This report describes a feasibility study for computerized automation of the Annapolis Field Office (AFO) of EPA's Region III. The AFO laboratory provides analytical support for a number of EPA divisions; its primary function at present is analysis of water samples from rivers, estuaries, and the ocean in the Chesapeake Bay area. Automation of the AFO laboratory is found to be not only feasible but also highly desirable. An automation system is proposed which will give major improvements in analytical capacity, quality control, sample management, and reporting capabilities. This system is similar to the LLL-developed automation systems already installed at other EPA laboratories, with modifications specific to the needs of the AFO laboratory and the addition of sample file control. It is estimated that the initial cost of the system, nearly $300,000, would be recouped in about three years by virtue of the increased capacity and efficiency of operation

  3. Radiative Corrections from Heavy Fast-Roll Fields during Inflation

    DEFF Research Database (Denmark)

    Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S.

    2015-01-01

    to an unobservable small running of the spectral index. An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation...

  4. Correction

    CERN Multimedia

    2002-01-01

    The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption: Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.)

  5. Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness

    International Nuclear Information System (INIS)

    Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathan E.

    2011-01-01

    Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields become different due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of internal(I)-coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve the reliable predictability in tokamak error field correction.

  6. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  7. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  8. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large and is thus in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  9. Three-loop corrections in a covariant effective field theory

    International Nuclear Information System (INIS)

    McIntire, Jeff

    2008-01-01

    Chiral effective field theories have been used with success in the study of nuclear structure. It is of interest to systematically improve these energy functionals (particularly that of quantum hadrodynamics) through the inclusion of many-body correlations. One possible source of improvement is the loop expansion. Using the techniques of Infrared Regularization, the short-range, local dynamics at each order in the loops is absorbed into the parameterization of the underlying effective Lagrangian. The remaining nonlocal, exchange correlations must be calculated explicitly. Given that the interactions of quantum hadrodynamics are relatively soft, the loop expansion may be manageable or even perturbative in nuclear matter. This work investigates the role played by the three-loop contributions to the loop expansion for quantum hadrodynamics

  10. Automated correction on X-rays calibration using transmission chamber and LabVIEW™

    International Nuclear Information System (INIS)

    Betti, Flavio; Potiens, Maria da Penha Albuquerque

    2009-01-01

    X-ray calibration procedures with prolonged exposure times at the Instruments Calibration facilities at IPEN may suffer from efficiency (and therefore intensity) variations of the industrial X-ray generator used. To compensate for this error source, the use of a transmission chamber as an online reference chamber during the whole irradiation process is proposed. Temperature (and pressure) fluctuations may also arise from the performance-limited air conditioning system of the calibration room. Being an open ionization chamber, the monitor chamber requires calculation of a correction factor for the effects of temperature and pressure on air density. Sending and processing data from all related instruments (electrometer, thermometer and barometer) can be more easily achieved by interfacing them to a host computer running a specially developed algorithm in the LabVIEW™ environment, which not only applies the proper correction factors at runtime, but also determines the exact length of time needed to reach a desired condition (a time period, a collected charge, or an air kerma), based on a previous calibration of the whole system against a reference chamber traceable to primary standard dosimetry laboratories. When performing such a calibration, two temperature sensors (secondary standard thermistors) are used simultaneously, one for the transmission chamber and one for the reference chamber. As the substitution method is used during actual customer calibrations, the readings from the second thermistor can also be used for further corrections when desired. Use of the LabVIEW™ programming language allowed a shorter development time and makes improvements and modifications considerably easier. (author)
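
    The air-density correction mentioned above has a standard closed form for open (vented) ionization chambers. Below is a minimal Python sketch of that factor and its application to a reading; the function name, reference conditions and numeric values are illustrative assumptions, not taken from the paper.

        def air_density_correction(temp_c, pressure_kpa,
                                   ref_temp_c=20.0, ref_pressure_kpa=101.325):
            """Temperature-pressure correction factor k_TP for an open
            ionization chamber: scales the reading to reference air density."""
            return ((273.15 + temp_c) / (273.15 + ref_temp_c)) * \
                   (ref_pressure_kpa / pressure_kpa)

        # Example: correct a charge reading taken in a warm, low-pressure room
        reading_nc = 12.47                        # collected charge in nC (illustrative)
        k_tp = air_density_correction(23.5, 99.8)
        print(f"k_TP = {k_tp:.4f}, corrected reading = {reading_nc * k_tp:.3f} nC")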

  11. Importance of the CMAP Correction to the CHARMM22 Protein Force Field: Dynamics of Hen Lysozyme

    OpenAIRE

    Buck, Matthias; Bouguet-Bonnet, Sabine; Pastor, Richard W.; MacKerell, Alexander D.

    2005-01-01

    The recently developed CMAP correction to the CHARMM22 force field (C22) is evaluated from 25 ns molecular dynamics simulations on hen lysozyme. Substantial deviations from experimental backbone root mean-square fluctuations and N-H NMR order parameters obtained in the C22 trajectories (especially in the loops) are eliminated by the CMAP correction. Thus, the C22/CMAP force field yields improved dynamical and structural properties of proteins in molecular dynamics simulations.

  12. Building automation and perceived control : a field study on motorized exterior blinds in Dutch offices

    NARCIS (Netherlands)

    Meerbeek, B.W.; te Kulve, Marije; Gritti, T.; Aarts, M.P.J.; Loenen, van E.J.; Aarts, E.H.L.

    2014-01-01

    As a result of the technological advances and increasing focus on energy efficient buildings, simple forms of building automation including automatic motorized blinds systems found their ways into today's office environments. In a five-month field study, qualitative and quantitative methods were

  13. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  14. Automated geographic registration and radiometric correction for UAV-based mosaics

    Science.gov (United States)

    Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to s...

  15. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed, and at a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, it is challenging to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany genomes downloaded from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curating local genomic databases by flagging inconsistencies or errors: it compares the downloaded supporting data to the genome reports to verify genome names, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and the genome reports, or if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.

  16. A median filter approach for correcting errors in a vector field

    Science.gov (United States)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
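
    The paper's exact detection and replacement rules are not reproduced in this abstract, but the general approach is easy to sketch. The following Python fragment flags vectors that deviate strongly from the local component-wise median and replaces them with that median; the window size and threshold are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def correct_vector_field(u, v, window=3, threshold=2.0):
            """Flag and replace outlier vectors in a 2D vector field (u, v)."""
            u_med = median_filter(u, size=window)   # local median of each component
            v_med = median_filter(v, size=window)
            resid = np.hypot(u - u_med, v - v_med)  # deviation from local median
            scale = np.median(resid) + 1e-12        # robust residual scale
            bad = resid > threshold * scale         # outlier mask
            u_out, v_out = u.copy(), v.copy()
            u_out[bad], v_out[bad] = u_med[bad], v_med[bad]
            return u_out, v_out, bad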

  17. A few more comments on secularly growing loop corrections in strong electric fields

    International Nuclear Information System (INIS)

    Akhmedov, E.T.; Popov, F.K.

    2015-01-01

    We extend the observations of our previous paper http://dx.doi.org/10.1007/JHEP09(2014)071 [http://arxiv.org/abs/1405.5285]. In particular, we show that the secular growth of the loop corrections to the two-point correlation functions is gauge independent: we observe the same growth in the case of the static gauge for the constant background electric field. Furthermore, we solve the kinetic equation describing photon production from the background fields, which was derived in our previous paper and allows one to sum up the leading secularly growing corrections from all loops. Finally, we show that in the constant electric field background the one-loop correction to the current of the produced pairs is not zero: it also grows with time and violates the time translational and reversal invariance of QED on the constant electric field background.

  18. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    Science.gov (United States)

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  19. High order field-to-field corrections for imaging and overlay to achieve sub 20-nm lithography requirements

    Science.gov (United States)

    Mulkens, Jan; Kubis, Michael; Hinnen, Paul; de Graaf, Roelof; van der Laan, Hans; Padiy, Alexander; Menchtchikov, Boris

    2013-04-01

    Immersion lithography is being extended to the 20-nm and 14-nm nodes, and the lithography performance requirements need to be tightened further to enable this shrink. In this paper we present an integral method to enable high-order field-to-field corrections for both imaging and overlay, and we show that this method improves performance by 20%-50%. The lithography architecture we build for these higher-order corrections connects the dynamic scanner actuators with the angle-resolved scatterometer via a separate application server. Improvements of CD uniformity are based on enabling the use of the freeform intra-field dose actuator and field-to-field control of focus. The feedback control loop uses CD and focus targets placed on the production mask. For the overlay metrology we use small in-die diffraction-based overlay targets. Improvements of overlay are based on using the high-order intra-field correction actuators on a field-to-field basis. We use this to reduce the machine matching error, extending the heating control and extending the correction capability for process-induced errors.

  20. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of a TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original finite-length bit sequence (message). The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  1. High magnetic field multipoles generated by superconductor magnetization within a set of nested superconducting correction coils

    International Nuclear Information System (INIS)

    Green, M.A.

    1990-04-01

    Correction elements in colliding beam accelerators such as the SSC can be the source of undesirable higher magnetic field multipoles due to magnetization of the superconductor within the corrector. Quadrupole and sextupole correctors located within the main dipole will produce sextupole and decapole due to magnetization of the superconductor within the correction coils. Lumped nested correction coils can produce a large number of skew and normal magnetization multipoles which may have an adverse effect on a stored beam at injection into a high energy colliding beam machine such as the SSC. 6 refs., 2 figs., 2 tabs

  2. Higher magnetic field multipoles generated by superconductor magnetization within a set of nested superconducting correction coils

    International Nuclear Information System (INIS)

    Green, M.A.

    1990-01-01

    Correction elements in colliding beam accelerators such as the Superconducting Super Collider (SSC) can be the source of undesirable higher magnetic field multipoles due to magnetization of the superconductor within the corrector. Quadrupole and sextupole correctors located within the main dipole will produce sextupole and decapole due to magnetization of the superconductor within the correction coils. Lumped nested correction coils can produce a large number of skew and normal magnetization multipoles which may have an adverse effect on a stored beam at injection into a high energy colliding beam machine such as the SSC. Multipole magnetization field components have been measured within the HERA storage ring dipole magnets. Calculations of these components using the SCMAG04 code, which agree substantially with the measured multipoles, are presented in the report. As a result, in the proposed continuous correction winding for the SSC, dipoles have been replaced with lumped correction elements every six dipole magnets (about 120 meters apart). Nested lumped correction elements will also produce undesirable higher magnetization multipoles. This report shows a method by which the higher multipole generated by nested correction elements can be identified. (author)

  3. Massive Corrections to Entanglement in Minimal E8 Toda Field Theory

    Directory of Open Access Journals (Sweden)

    Olalla A. Castro-Alvaredo

    2017-02-01

    In this letter we study the exponentially decaying corrections to saturation of the second Rényi entropy of one interval of length L in minimal E8 Toda field theory. It has been known for some time that the entanglement entropy of a massive quantum field theory in 1+1 dimensions saturates to a constant value for m_1 L >> 1, where m_1 is the mass of the lightest particle in the spectrum. Subsequently, results by Cardy, Castro-Alvaredo and Doyon have shown that there are exponentially decaying corrections to this behaviour which are characterised by Bessel functions with arguments proportional to m_1 L. For the von Neumann entropy the leading correction to saturation takes the precise universal form -K_0(2 m_1 L)/8, whereas for the Rényi entropies leading corrections proportional to K_0(m_1 L) are expected. Recent numerical work by Pálmai for the second Rényi entropy of minimal E8 Toda has found next-to-leading order corrections decaying as exp(-2 m_1 L) rather than the expected exp(-m_1 L). In this paper we investigate the origin of this result and show that it is incorrect. An exact form factor computation of correlators of branch point twist fields reveals that the leading corrections are proportional to K_0(m_1 L) as expected.
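
    The distinction between the expected exp(-m_1 L) behaviour and the disputed exp(-2 m_1 L) behaviour is easy to check numerically, since K_0(x) ~ sqrt(pi/(2x)) exp(-x) for large x. A small Python sketch (the mass and interval lengths are arbitrary illustrative values):

        import numpy as np
        from scipy.special import k0

        m1 = 1.0                            # lightest particle mass, illustrative units
        L = np.array([2.0, 4.0, 8.0])       # interval lengths with m1*L >> 1

        exact = k0(m1 * L)
        asymptote = np.sqrt(np.pi / (2 * m1 * L)) * np.exp(-m1 * L)

        for l, e, a in zip(L, exact, asymptote):
            print(f"m1*L = {l:4.1f}: K0 = {e:.6e}, asymptote = {a:.6e}")
        # K0(m1*L) decays like exp(-m1*L), i.e. half the rate of exp(-2*m1*L)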

  4. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). Firstly, preprocessing including stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct the intensity inhomogeneity and detail discontinuity of the image. Next, the HDR pathological image is generated by a least-squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method as compared with related work.
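
    The guided image filter used here is a standard edge-preserving smoother (He et al.); a compact reference implementation is sketched below, with the radius and regularization eps as illustrative parameter choices rather than the paper's tuned values.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(guide, src, radius=8, eps=1e-3):
            """Edge-preserving smoothing of `src` steered by `guide`
            (both are float images scaled to [0, 1])."""
            size = 2 * radius + 1
            mean = lambda x: uniform_filter(x, size)
            mean_I, mean_p = mean(guide), mean(src)
            var_I = mean(guide * guide) - mean_I ** 2
            cov_Ip = mean(guide * src) - mean_I * mean_p
            a = cov_Ip / (var_I + eps)      # local linear coefficients
            b = mean_p - a * mean_I
            return mean(a) * guide + mean(b)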

  5. Quark number density and susceptibility calculation with one correction in mean field potential

    International Nuclear Information System (INIS)

    Singh, S. Somorendro

    2016-01-01

    We calculate the quark number density and susceptibility of a model which has a one-loop correction in the mean field potential. The calculation shows a continuous increase in the number density and susceptibility up to the temperature T = 0.4 GeV. The number density and susceptibility then approach the lattice results at higher temperatures. This indicates that the calculated values of the model fit well and, with the one-loop correction in the mean field potential, approach the lattice data as the temperature increases. (author)

  6. Copula-based assimilation of radar and gauge information to derive bias-corrected precipitation fields

    Directory of Open Access Journals (Sweden)

    S. Vogl

    2012-07-01

    This study addresses the problem of combining radar information and gauge measurements. Gauge measurements are the best available source of absolute rainfall intensity, albeit with limited spatial availability. Precipitation information obtained by radar mimics the spatial patterns well but is biased in its absolute values.

    In this study copula models are used to describe the dependence structure between gauge observations and rainfall derived from radar reflectivity at the corresponding grid cells. After appropriate time series transformation to generate "iid" variates, only the positive pairs (radar > 0, gauge > 0) of the residuals are considered. As not every grid cell can be assigned to one gauge, the integration of point information, i.e. gauge rainfall intensities, is achieved by considering the structure and the strength of dependence between the radar pixels and all the gauges within the radar image. Two different approaches, namely Maximum Theta and Multiple Theta, are presented. They finally allow for generating precipitation fields that mimic the spatial patterns of the radar fields and correct them for biases in their absolute rainfall intensities. The performance of the approach, which can be seen as a bias correction for radar fields, is demonstrated for the Bavarian Alps. The bias-corrected rainfall fields are compared to a field of interpolated gauge values (ordinary kriging) and are validated with available gauge measurements. The simulated precipitation fields are compared to an operationally corrected radar precipitation field (RADOLAN). The copula-based approach performs similarly well, as indicated by different validation measures, and successfully corrects for errors in the radar precipitation.
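
    The core pairwise step, estimating a dependence parameter theta between one radar pixel and one gauge, can be sketched as follows under a Gaussian-copula assumption (the abstract does not fix the copula family; function and variable names are illustrative):

        import numpy as np
        from scipy.stats import norm, rankdata

        def gaussian_copula_theta(radar, gauge):
            """Dependence parameter for paired radar/gauge intensities,
            using only the positive pairs as described above."""
            pos = (radar > 0) & (gauge > 0)
            x, y = radar[pos], gauge[pos]
            u = rankdata(x) / (x.size + 1)      # pseudo-uniform margins via ranks
            v = rankdata(y) / (y.size + 1)
            z1, z2 = norm.ppf(u), norm.ppf(v)   # map to standard normal scores
            return np.corrcoef(z1, z2)[0, 1]    # correlation of scores = theta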

  7. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimization algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimization algorithm. This algorithm can be applied to the correction of MR image bias fields.
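
    The optimizer itself is straightforward to sketch. The fragment below is generic PSO with an inertia weight that decays over the run; note that the paper adapts the weight from its premature-convergence indicator rather than from the iteration count, so the schedule here is a simplifying assumption, as are all names and parameter values.

        import numpy as np

        def pso_minimize(f, lo, hi, n_particles=30, iters=200,
                         w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
            """Minimize f over the box [lo, hi] with inertia-weighted PSO."""
            rng = np.random.default_rng(seed)
            dim = lo.size
            x = rng.uniform(lo, hi, (n_particles, dim))
            vel = rng.uniform(-1, 1, (n_particles, dim)) * 0.1 * (hi - lo)
            pbest = x.copy()
            pval = np.array([f(p) for p in x])
            g = pbest[pval.argmin()].copy()
            for t in range(iters):
                w = w_max - (w_max - w_min) * t / iters   # decaying inertia weight
                r1, r2 = rng.random((2, n_particles, dim))
                vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + vel, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        # In the paper's setting, f would score a candidate Legendre bias field,
        # e.g. the entropy of the image corrected by those coefficients.
        best, best_val = pso_minimize(lambda p: np.sum(p ** 2),
                                      lo=-np.ones(4), hi=np.ones(4))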

  8. Mapping and correcting respiration-induced field changes in the brain using fluorine field probes

    DEFF Research Database (Denmark)

    Andersen, Mads; Madsen, Kristoffer; Hanson, Lars G.

    2014-01-01

    Purpose: Breathing-induced dynamic B0 field perturbations in the head can lead to artefacts in ultra-high field MR by causing line broadening in spectroscopy and signal dropout, ghosting, displacement artifacts and blurring in imaging. It has recently been proposed to continuously stabilize ... [field] strength values from signal phase by linear fitting. Ahead of imaging, the field probe positions were determined for each subject by applying known gradients in all three dimensions while measuring with the field probes. Experiments: Measurements were performed in 4 male subjects instructed to hold ... software was updated with f0 and first-order shim values before the acquisition of every volume. Evaluation: To assess whether the dynamic field changes were captured by the field probe data, the field-probe-fitted fields were subtracted from the scanner B0 maps to model shimming. We then assessed whether ...
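
    The per-volume f0 and first-order shim update reduces to a small least-squares problem: fit a zeroth- plus first-order spatial model to the field values measured at the known probe positions. A minimal sketch (names and the model order shown are illustrative assumptions):

        import numpy as np

        def fit_field_model(probe_xyz, probe_field_hz):
            """Fit f(r) = f0 + g.r to field values at the probe positions.

            probe_xyz      : (n, 3) array of probe coordinates
            probe_field_hz : (n,) array of measured field offsets
            Returns f0 (Hz) and the first-order terms g (Hz per unit length).
            """
            A = np.column_stack([np.ones(len(probe_xyz)), probe_xyz])
            coeffs, *_ = np.linalg.lstsq(A, probe_field_hz, rcond=None)
            return coeffs[0], coeffs[1:]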

  9. Single-shot imaging with higher-dimensional encoding using magnetic field monitoring and concomitant field correction.

    Science.gov (United States)

    Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2015-03-01

    PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.

  10. Field of view extension and truncation correction for MR-based human attenuation correction in simultaneous MR/PET imaging

    International Nuclear Information System (INIS)

    Blumhagen, Jan O.; Ladebeck, Ralf; Fenchel, Matthias; Braun, Harald; Quick, Harald H.; Faul, David; Scheffler, Klaus

    2014-01-01

    Purpose: In quantitative PET imaging, it is critical to accurately measure and compensate for the attenuation of the photons absorbed in the tissue. While in PET/CT the linear attenuation coefficients can be easily determined from a low-dose CT-based transmission scan, in whole-body MR/PET the computation of the linear attenuation coefficients is based on the MR data. However, a constraint of the MR-based attenuation correction (AC) is the MR-inherent field-of-view (FoV) limitation due to static magnetic field (B0) inhomogeneities and gradient nonlinearities. Therefore, the MR-based human AC map may be truncated or geometrically distorted toward the edges of the FoV and, consequently, the PET reconstruction with MR-based AC may be biased. This is especially of impact laterally, where the patient's arms rest beside the body and are not fully considered. Methods: A method is proposed to extend the MR FoV by determining an optimal readout gradient field which locally compensates B0 inhomogeneities and gradient nonlinearities. This technique was used to reduce truncation in AC maps of 12 patients, and the impact on the PET quantification was analyzed and compared to truncated data without applying the FoV extension and additionally to an established approach of PET-based FoV extension. Results: The truncation artifacts in the MR-based AC maps were successfully reduced in all patients, and the mean body volume was thereby increased by 5.4%. In some cases large patient-dependent changes in SUV of up to 30% were observed in individual lesions when compared to the standard truncated attenuation map. Conclusions: The proposed technique successfully extends the MR FoV in MR-based attenuation correction and shows an improvement of PET quantification in whole-body MR/PET hybrid imaging. In comparison to the PET-based completion of the truncated body contour, the proposed method is also applicable to specialized PET tracers with little uptake in the arms and might reduce the

  11. Consequences of the center-of-mass correction in nuclear mean-field models

    International Nuclear Information System (INIS)

    Bender, M.; Rutz, K.; Reinhard, P.G.; Maruhn, J.A.

    2000-01-01

    We study the influence of the scheme for the correction for spurious center-of-mass motion on the fit of effective interactions for self-consistent nuclear mean-field calculations. We find that interactions with very simple center-of-mass correction have significantly larger surface coefficients than interactions for which the center-of-mass correction was calculated for the actual many-body state during the fit. The reason for that is that the effective interaction has to counteract the wrong trends with nucleon number of all simplified schemes for center-of-mass correction which puts a wrong trend with mass number into the effective interaction itself. The effect becomes clearly visible when looking at the deformation energy of largely deformed systems, e.g. superdeformed states or fission barriers of heavy nuclei. (orig.)

  12. Implementation and Application of PSF-Based EPI Distortion Correction to High Field Animal Imaging

    Directory of Open Access Journals (Sweden)

    Dominik Paul

    2009-01-01

    The purpose of this work is to demonstrate the functionality and performance of a PSF-based geometric distortion correction for high-field functional animal EPI. The EPI method was extended to measure the PSF, and a postprocessing chain was implemented in Matlab for offline distortion correction. The correction procedure was applied to phantom and in vivo imaging of mice and rats at 9.4 T using different SE-EPI and DWI-EPI protocols. Results show a significant improvement in image quality for single- and multishot EPI. Using a reduced FOV in the PSF encoding direction clearly reduced the acquisition time for PSF data by an acceleration factor of 2 or 4, without affecting the correction quality.

  13. Field test of the PNNL Automated Radioxenon Sampler/Analyzer (ARSA)

    International Nuclear Information System (INIS)

    Lagomarsino, R.J.; Ku, E.; Latner, N.; Sanderson, C.G.

    1998-07-01

    As part of the requirements of the Comprehensive Test Ban Treaty (CTBT), the Automated Radioxenon/Sampler Analyzer (ARSA) was designed and engineered by the Pacific Northwest National Laboratory (PNNL). The instrument is to provide near real-time detection and measurement of the radioxenons released into the atmosphere after a nuclear test. Forty-six field tests, designed to determine the performance of the ARSA prototype under simulated field conditions, were conducted at EML from March to December 1997. This final report contains detailed results of the tests with recommendations for improvements in instrument performance

  14. Coulomb’s law corrections and fermion field localization in a tachyonic de Sitter thick braneworld

    International Nuclear Information System (INIS)

    Cartas-Fuentevilla, Roberto; Escalante, Alberto; Germán, Gabriel; Herrera-Aguilar, Alfredo; Mora-Luna, Refugio Rigel

    2016-01-01

    Following recent studies which show that it is possible to localize gravity as well as scalar and gauge vector fields in a tachyonic de Sitter thick braneworld, we investigate the solution of the gauge hierarchy problem, the localization of fermion fields in this model, the recovering of the Coulomb law in the non-relativistic limit of the Yukawa interaction between bulk fermions and gauge bosons localized on the brane, and confront the predicted 5D corrections to the photon mass with its upper experimental/observational bounds, finding the model physically viable since it passes these tests. In order to achieve the latter aims we first consider the Yukawa interaction term between the fermionic and the tachyonic scalar fields, MF(T)Ψ̄Ψ, in the action and analyze four distinct tachyonic functions F(T) that lead to four different structures of the respective fermionic mass spectra with different physics. In particular, localization of the massless left-chiral fermion zero mode is possible for three of these cases. We further analyze the phenomenology of these Yukawa interactions among fermion fields and gauge bosons localized on the brane and obtain the crucial and necessary information to compute the corrections to Coulomb’s law coming from massive KK vector modes in the non-relativistic limit. These corrections are exponentially suppressed due to the presence of the mass gap in the mass spectrum of the bulk gauge vector field. From our results we conclude that corrections to Coulomb’s law in the thin brane limit have the same form (up to a numerical factor) as far as the left-chiral massless fermion field is localized on the brane. Finally we compute the corrections to Coulomb’s law for an arbitrarily thick brane scenario which can be interpreted as 5D corrections to the photon mass. By performing consistent estimations with brane phenomenology, we found that the predicted corrections to the photon mass, which are well bounded by the experimentally
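
    Exponentially suppressed corrections of this kind take the generic Yukawa-screened Coulomb form; schematically, in LaTeX (the coupling \alpha and the mass m_1 of the lightest massive KK mode are our illustrative notation, not the paper's):

        V(r) = \frac{q_1 q_2}{4\pi r}\left(1 + \alpha\, e^{-m_1 r}\right)

    so that for r >> 1/m_1 the standard Coulomb law is recovered.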

  15. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  16. Saturne II: characteristics of the proton beam, field qualities and corrections, acceleration of the polarized protons

    International Nuclear Information System (INIS)

    Laclare, J.-L.

    1978-01-01

    The specifications of Saturne II are summarized: performance of the injection system, quality of the guidance field (magnetic measurements and multipolar corrections), transverse and longitudinal instabilities, and characteristics of the beam stored in the machine and of the extracted beam. The problem of depolarization along the acceleration cycle is briefly discussed (1 or 2% between injection and 3 GeV) [fr]

  17. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  18. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  1. Born-Infeld theory of electroweak and gravitational fields: Possible correction to Newton and Coulomb laws

    OpenAIRE

    Palatnik, Dmitriy

    2002-01-01

    In this note one suggests a possibility of direct observation of the θ-parameter, introduced in the Born-Infeld theory of electroweak and gravitational fields, developed in quant-ph/0202024. Namely, one may treat θ as a universal constant, responsible for corrections to the Coulomb and Newton laws, allowing direct interaction between electric charges and masses.

  2. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by a weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
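
    The pipeline described above is compact enough to sketch end-to-end. The fragment below builds the Gaussian scale space, sums weighted detail layers to suppress the smooth bias component, and applies a gamma step; the sigma values, weights and gamma are illustrative assumptions, not the paper's tuned settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_bias_correct(image, sigmas=(2, 4, 8, 16),
                                    weights=None, gamma=0.8):
            """Suppress a smooth bias field by recombining detail layers."""
            img = image.astype(float)
            if weights is None:
                weights = [1.0 / len(sigmas)] * len(sigmas)
            # Detail layer at each scale: image minus its Gaussian blur
            details = sum(w * (img - gaussian_filter(img, s))
                          for w, s in zip(weights, sigmas))
            lo, hi = details.min(), details.max()
            norm = (details - lo) / (hi - lo + 1e-12)   # rescale to [0, 1]
            return norm ** gamma                        # gamma step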

  3. Radiative corrections to the quark masses in the ferromagnetic Ising and Potts field theories

    Science.gov (United States)

    Rutkevich, Sergei B.

    2017-10-01

    We consider the Ising Field Theory (IFT), and the 3-state Potts Field Theory (PFT), which describe the scaling limits of the two-dimensional lattice q-state Potts model with q = 2 and q = 3, respectively. At zero magnetic field h = 0, both field theories are integrable away from the critical point, have q degenerate vacua in the ferromagnetic phase, and q(q - 1) particles of the same mass - the kinks interpolating between two different vacua. Application of a weak magnetic field induces confinement of kinks into bound states - the "mesons" (for q = 2, 3) consisting predominantly of two kinks, and "baryons" (for q = 3), which are essentially the three-kink excitations. The kinks in the confinement regime are also called "the quarks". We review and refine the Form Factor Perturbation Theory (FFPT), adapting it to the analysis of the confinement problem in the limit of small h, and apply it to calculate the corrections to the kink (quark) masses induced by the multi-kink fluctuations caused by the weak magnetic field. It is shown that the subleading third-order ~h^3 correction to the kink mass vanishes in the IFT. The leading second-order ~h^2 correction to the kink mass in the 3-state PFT is estimated by truncating the infinite form factor expansion at the first term, representing the contribution of the two-kink fluctuations to the kink self-energy.

  4. Treatment planning for SBRT using automated field delivery: A case study

    International Nuclear Information System (INIS)

    Ritter, Timothy A.; Owen, Dawn; Brooks, Cassandra M.; Stenmark, Matthew H.

    2015-01-01

    Stereotactic body radiation therapy (SBRT) treatment planning and delivery can be accomplished using a variety of techniques that achieve highly conformal dose distributions. Herein, we describe a template-based automated treatment field approach that enables rapid delivery of more than 20 coplanar fields. A case study is presented to demonstrate how modest adaptations to traditional SBRT planning can be implemented to take clinical advantage of this technology. Treatment was planned for a left-sided lung lesion adjacent to the chest wall using 25 coplanar treatment fields spaced at 11° intervals. The plan spares the contralateral lung and is in compliance with the conformality standards set forth in Radiation Therapy and Oncology Group protocol 0915, and the dose tolerances found in the report of the American Association of Physicists in Medicine Task Group 101. Using a standard template, treatment planning was accomplished in less than 20 minutes, and each 10 Gy fraction was delivered in approximately 5.4 minutes. For those centers equipped with linear accelerators capable of automated treatment field delivery, the use of more than 20 coplanar fields is a viable SBRT planning approach and yields excellent conformality and quality combined with rapid planning and treatment delivery. Although the case study discusses a laterally located lung lesion, this technique can be applied to centrally located tumors with similar results
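
    As a quick check of the beam geometry: 25 coplanar fields at 11° intervals span 11° × 24 = 264° of arc around the patient. A two-line illustration (the start angle is an arbitrary assumption, not taken from the case study):

        # Gantry angles for 25 fields spaced 11 degrees apart, wrapping at 360
        angles = [(165 + 11 * k) % 360 for k in range(25)]
        print(len(angles), "fields covering", 11 * (25 - 1), "degrees of arc")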

  5. Possibilities of the common research-development action in the field of automated logistical engines

    Directory of Open Access Journals (Sweden)

    Pap Lajos

    2003-12-01

    The paper briefly presents the R&D cooperation of the Department of Materials Handling and Logistics and the Departments of Automation. The main fields of cooperation are introduced. Different kinds of linear motor (hereafter LM) drives are being developed and tested for warehouse and rolling conveyor systems. Modern control strategies using AI methods are being investigated and tested for automated guided vehicles. Wireless communication methods are being researched and developed for mobile material handling devices. Application possibilities of voice recognition and image processing are being tested for the control of material handling robots and devices. Applications of process visualization programs are being developed and investigated. A multi-level industrial communication system is being developed for the laboratories of the cooperating departments.

  6. Automated operation and management of the oil fields in Western Siberia

    Energy Technology Data Exchange (ETDEWEB)

    Guernault, P.; Valleur, M.

    1979-11-01

    In October 1978, Technip signed a contract worth 850 MF with the Soviet central purchasing organization, Mashinoimport, for the design and construction of 2 large complexes intended to improve the production of the Soviet Samotlor and Fyodorovsk fields. These fields are located in West Siberia near the towns of Nijnivartovsk and Surgut, in the Ob valley, approximately 600 km south of the Arctic Circle. They are among the largest in the Soviet Union. The present output of the Samotlor field exceeds 100 Mt/yr; the 2 fields taken together will comprise 2400 wells in the final stage, spread over an area of 2000 sq km. These installations are thus the largest to be designed to date with the gas lift method, i.e., the reinjection of high-pressure gas into the production string. They make use of high-performance compressor plants, but their main feature is above all their very high level of automation.

  7. Slow-roll corrections in multi-field inflation: a separate universes approach

    Science.gov (United States)

    Karčiauskas, Mindaugas; Kohri, Kazunori; Mori, Taro; White, Jonathan

    2018-05-01

    In view of cosmological parameters being measured to ever higher precision, theoretical predictions must also be computed to an equally high level of precision. In this work we investigate the impact on such predictions of relaxing some of the simplifying assumptions often used in these computations. In particular, we investigate the importance of slow-roll corrections in the computation of multi-field inflation observables, such as the amplitude of the scalar spectrum Pζ, its spectral tilt ns, the tensor-to-scalar ratio r and the non-Gaussianity parameter fNL. To this end we use the separate universes approach and δ N formalism, which allows us to consider slow-roll corrections to the non-Gaussianity of the primordial curvature perturbation as well as corrections to its two-point statistics. In the context of the δ N expansion, we divide slow-roll corrections into two categories: those associated with calculating the correlation functions of the field perturbations on the initial flat hypersurface and those associated with determining the derivatives of the e-folding number with respect to the field values on the initial flat hypersurface. Using the results of Nakamura & Stewart '96, corrections of the first kind can be written in a compact form. Corrections of the second kind arise from using different levels of slow-roll approximation in solving for the super-horizon evolution, which in turn corresponds to using different levels of slow-roll approximation in the background equations of motion. We consider four different levels of approximation and apply the results to a few example models. The various approximations are also compared to exact numerical solutions.

  8. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Quantum corrections to scalar field dynamics in a slow-roll space-time

    Energy Technology Data Exchange (ETDEWEB)

    Herranen, Matti [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute,University of Copenhagen,Blegdamsvej 17, 2100 Copenhagen (Denmark); Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics,P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders [Faculty of Science and Technology, University of Stavanger, 4036 Stavanger (Norway)

    2014-05-07

    We consider the dynamics of a quantum scalar field in the background of a slow-roll inflating Universe. We compute the one-loop quantum corrections to the field and Friedmann equation of motion, in both a 1PI and a 2PI expansion, to leading order in slow-roll. Generalizing the works of http://dx.doi.org/10.1016/j.nuclphysb.2006.04.029, http://dx.doi.org/10.1103/PhysRevLett.107.191103, http://dx.doi.org/10.1103/PhysRevD.76.103507 and http://dx.doi.org/10.1016/j.nuclphysb.2006.04.010, we then solve these equations to compute the effect on the primordial power spectrum, for the case of a self-interacting inflaton and a self-interacting spectator field. We find that for the inflaton the corrections are negligible due to the smallness of the coupling constant despite the large IR enhancement of the loop contributions. For a curvaton scenario, on the other hand, we find tension in using the 1PI loop corrections, which may indicate that the quantum corrections could be non-perturbatively large in this case, thus requiring resummation.

  10. Depolarization corrections to the coercive field in thin-film ferroelectrics

    International Nuclear Information System (INIS)

    Dawber, M; Chandra, P; Littlewood, P B; Scott, J F

    2003-01-01

    Empirically, the coercive field needed to reverse the polarization in a ferroelectric increases with decreasing film thickness. For ferroelectric films of 100 μm to 100 nm in thickness the coercive field has been successfully described by a semi-empirical scaling law. Accounting for depolarization corrections, we show that this scaling behaviour is consistent with field measurements of ultrathin ferroelectric capacitors down to one nanometre in film thickness. Our results also indicate that the minimum film thickness, determined by a polarization instability, can be tuned by the choice of electrodes, and recommendations for next-generation ferroelectric devices are discussed. (letter to the editor)
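
    For reference, the semi-empirical scaling law alluded to here is usually quoted in the Janovec-Kay-Dunn form (our statement of the textbook law, which the paper's depolarization analysis refines at ultrathin thicknesses); in LaTeX:

        E_c(d) \propto d^{-2/3}

    where d is the film thickness and E_c the coercive field.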

  12. Depolarization corrections to the coercive field in thin-film ferroelectrics

    CERN Document Server

    Dawber, M; Littlewood, P B; Scott, J F

    2003-01-01

    Empirically, the coercive field needed to reverse the polarization in a ferroelectric increases with decreasing film thickness. For ferroelectric films of 100 μm to 100 nm in thickness the coercive field has been successfully described by a semi-empirical scaling law. Accounting for depolarization corrections, we show that this scaling behaviour is consistent with field measurements of ultrathin ferroelectric capacitors down to one nanometre in film thickness. Our results also indicate that the minimum film thickness, determined by a polarization instability, can be tuned by the choice of electrodes, and recommendations for next-generation ferroelectric devices are discussed. (letter to the editor)

  13. Variational-integral perturbation corrections of some lower excited states for hydrogen atoms in magnetic fields

    International Nuclear Information System (INIS)

    Yuan Lin; Zhou Ben-Hu; Zhao Yun-Hui; Xu Jun; Hai Wen-Hua

    2012-01-01

    A variational-integral perturbation method (VIPM) is established by combining the variational perturbation with the integral perturbation. The first-order corrected wave functions are constructed, and the second-order energy corrections for the ground state and several lower excited states are calculated by applying the VIPM to the hydrogen atom in a strong uniform magnetic field. Our calculations demonstrate that the second-order energy corrections obtained by the VIPM are always negative, which indicates that the VIPM is more accurate than the other methods. Our study indicates that the VIPM can not only increase the accuracy of the results but also preserve the convergence of the wave functions.
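
    For context, the sign of the second-order correction for the ground state already follows from the standard Rayleigh–Schrödinger expression (general perturbation theory, not the VIPM-specific formulas):

      E_n^{(2)} = \sum_{m \neq n} \frac{\left|\langle m^{(0)}|H'|n^{(0)}\rangle\right|^2}{E_n^{(0)} - E_m^{(0)}},

    where for the ground state every denominator satisfies $E_0^{(0)} - E_m^{(0)} < 0$, so each term is negative and $E_0^{(2)} \le 0$.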

  14. One-pion exchange current corrections for nuclear magnetic moments in relativistic mean field theory

    International Nuclear Information System (INIS)

    Li Jian; Yao, J.M.; Meng Jie; Arima, Akito

    2011-01-01

    The one-pion exchange current corrections to isoscalar and isovector magnetic moments of double-closed shell nuclei plus and minus one nucleon with A = 15, 17, 39 and 41 have been studied in the relativistic mean field (RMF) theory and compared with previous relativistic and non-relativistic results. It has been found that the one-pion exchange current gives a negligible contribution to the isoscalar magnetic moments but a significant correction to the isovector ones. However, the one-pion exchange current enhances the isovector magnetic moments further and does not improve the corresponding description for the concerned nuclei in the present work. (author)

  15. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    Science.gov (United States)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ and $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ correction factors for a series of ionization chambers, a synthetic microDiamond and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations for the $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (the latter breaking down into 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding to the $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ results for the ionization chambers suggested field-size-dependent dose underestimations (being significant for the 4 mm field), with magnitude depending on the combination of
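
    For orientation, this notation follows the small-field dosimetry formalism of Alfonso et al., in which the output correction factor is defined (standard definition, quoted for context rather than taken from this abstract) as

      k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}} = \frac{D_{w,Q_{clin}}^{f_{clin}} / D_{w,Q_{msr}}^{f_{msr}}}{M_{Q_{clin}}^{f_{clin}} / M_{Q_{msr}}^{f_{msr}}},

    i.e. the ratio of absorbed doses to water in the clinical and machine-specific reference fields divided by the corresponding ratio of detector readings.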

  16. Development, field testing and implementation of automated hydraulically controlled, variable volume loading systems for reciprocating compressors

    Energy Technology Data Exchange (ETDEWEB)

    Hickman, Dwayne A. [ACI Services, Inc., Cambridge, OH (United States); Slupsky, John [Kvaerner Process Systems, Calgary, Alberta (Canada); Chrisman, Bruce M.; Hurley, Tom J. [Cooper Energy Services, Oklahoma City, OK (United States). Ajax Division

    2003-07-01

    Automated, variable volume unloaders provide the ability to smoothly load/unload reciprocating compressors to maintain ideal operations in ever-changing environments. Potential advantages provided by this load control system include: maximizing unit capacity, optimizing power economy, maintaining low exhaust emissions, and maintaining process suction and discharge pressures. Obstacles foreseen include: reliability, stability, serviceability and automation integration. Results desired include: increased productivity for the compressor and its operators, increased up time, and more stable process control. This presentation covers: system design features with descriptions of how different types of the devices were developed, initial test data, and how they can be effectively operated; three actual-case studies detailing the reasons why automated, hydraulically controlled, variable volume, head-end unloaders were chosen over other types of unloading devices; sophisticated software used in determining the device sizing and predicted performance; mechanical and field considerations; installation, serviceability and operating considerations; device control issues, including PC and PLC considerations; monitoring of actual performance and comparison of such with predicted performance; analysis of mechanical reliability and stability; and preliminary costs versus return on investment analysis. (author)

  17. Transcranial Magnetic Stimulation: An Automated Procedure to Obtain Coil-specific Models for Field Calculations

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard; Ewald, Lars; Siebner, Hartwig R.

    2015-01-01

    Background: Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. Objective: To develop an approach to reconstruct the magnetic vector potential based on automated measurements. Methods: We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. Results: The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well...

  18. Instrumentation, Field Network and Process Automation for the Cryogenic System of the LHC Test String

    CERN Document Server

    Suraci, A; Balle, C; Blanco-Viñuela, E; Casas-Cubillos, J; Gomes, P; Pelletier, S; Serio, L; Vauthier, N; Balle, Ch.

    2001-01-01

    CERN is now setting up String 2, a full-size prototype of a regular cell of the LHC arc. It is composed of two quadrupole and six dipole magnets, and a separate cryogenic distribution line (QRL) for the supply and recovery of the cryogen. An electrical feed box (DFB), with up to 38 High Temperature Superconducting (HTS) leads, powers the magnets. About 700 sensors and actuators are distributed along four Profibus DP and two Profibus PA field buses. The process automation is handled by two controllers, running 126 Closed Control Loops (CCL). This paper describes the cryogenic control system, the associated instrumentation, and their commissioning.

  19. Local-field correction in the lattice dynamics of b.c.c. transition metals

    International Nuclear Information System (INIS)

    Onwuagba, B.N.

    1984-01-01

    It is shown that the off-diagonal components of the inverse dielectric matrix which determine the local-field correction associated with s-d interactions, make contributions to the dynamical matrix for phonon dispersion in the body-centred cubic transition metals V, Nb and Ta which tend to cancel the Born-Mayer contribution, just as the diagonal components of the inverse dielectric matrix tend to cancel or screen the long-range (Coulombic) contribution. Numerical calculations show that the cancellation of the Born-Mayer contribution to the dynamical matrix by the local-field correction is such that the effective short-range interatomic potential turns out to be attractive rather than repulsive in these metals and accounts for some peculiar shapes of the major soft modes observed in these metals

  20. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    International Nuclear Information System (INIS)

    Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon

    2012-01-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrated that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g. mis-segmentation, lesions). (paper)

  1. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    Science.gov (United States)

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, commonly known as the bias field. Active contour models with geometric information constraints have recently been applied; however, most of them handle the bias field with a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with strong intensity inhomogeneity. We first define a function for clustering the image pixels in a small neighborhood. The cluster centers in this objective function carry a multiplicative factor that estimates the bias within the neighborhood. To reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. The objective functions are then integrated over the entire domain. To obtain the global optimum and make the results independent of initialization, we reconstruct the energy function to be convex and minimize it using Split Bregman theory. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields with quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities, with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.
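
    A minimal sketch of the underlying idea — alternating between cluster-center updates and a smooth multiplicative bias estimate — is given below. This is a simplified, hypothetical reimplementation that uses Gaussian smoothing in place of the paper's convex Split Bregman optimization; all names are ours.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def segment_with_bias(image, n_classes=3, n_iter=20, sigma=10.0):
          """Toy alternating scheme assuming image ~ bias * c_k per class:
          re-estimate class centers on the bias-corrected image, then take
          the smoothed ratio image/reconstruction as the new bias field."""
          bias = np.ones_like(image)
          centers = np.quantile(image, np.linspace(0.1, 0.9, n_classes))
          for _ in range(n_iter):
              corrected = image / np.maximum(bias, 1e-6)
              # hard assignment of each pixel to the nearest class center
              labels = np.argmin(np.abs(corrected[..., None] - centers), axis=-1)
              for k in range(n_classes):
                  if np.any(labels == k):
                      centers[k] = corrected[labels == k].mean()
              recon = centers[labels]              # piecewise-constant image
              bias = gaussian_filter(image / np.maximum(recon, 1e-6), sigma)
          return labels, bias

      # usage on a synthetic image with multiplicative shading
      img = np.kron(np.array([[1.0, 2.0], [3.0, 2.0]]), np.ones((64, 64)))
      img *= 1.0 + 0.3 * np.linspace(0.0, 1.0, 128)[None, :]
      labels, bias = segment_with_bias(img)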

  2. Correction of inhomogeneous RF field using multiple SPGR signals for high-field spin-echo MRI

    International Nuclear Information System (INIS)

    Ishimori, Yoshiyuki; Monma, Masahiko; Yamada, Kazuhiro; Kimura, Hirohiko; Uematsu, Hidemasa; Fujiwara, Yasuhiro; Yamaguchi, Isao

    2007-01-01

    The purpose of this study was to propose a simple and useful method for correcting nonuniformity of high-field (3 Tesla) T1-weighted spin-echo (SE) images based on a B1 field map estimated from gradient recalled echo (GRE) signals. To estimate the B1 inhomogeneity, spoiled gradient recalled echo (SPGR) images were collected using a fixed repetition time of 70 ms, flip angles of 45 and 90 degrees, and echo times of 4.8 and 10.4 ms. The selection of flip angles was based on the observation that the relative intensity changes in SPGR signals are very similar among different tissues at flip angles larger than the Ernst angle. Accordingly, the spatial irregularity observed on a signal ratio map of the SPGR images acquired with these two flip angles can be ascribed to inhomogeneity of the B1 field. The dual echo times were used to eliminate T2* effects. The acquired ratio map was then scaled to provide an intensity correction map for SE images. Both phantom and volunteer studies were performed using a 3T magnetic resonance scanner to validate the method. In the phantom study, the uniformity of the T1-weighted SE image improved by 23%. Images of human heads also showed practically sufficient improvement in image uniformity. The present method improves the image uniformity of high-field T1-weighted SE images. (author)
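
    The ratio-map idea can be sketched in a few lines (a hypothetical illustration, not the authors' code; the smoothing scale and the unit-mean normalization are our assumptions):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def b1_correction_map(spgr_45, spgr_90, smooth_sigma=8.0):
          """Ratio of two SPGR images acquired at flip angles above the
          Ernst angle: the ratio varies with the local B1 scale rather than
          with tissue, so its smoothed, unit-mean version can serve as an
          intensity-correction map for spin-echo images."""
          ratio = spgr_45 / np.maximum(spgr_90, 1e-6)
          ratio = gaussian_filter(ratio, smooth_sigma)  # suppress anatomy/noise
          return ratio / ratio.mean()

      # synthetic demonstration with a smooth B1 shading
      yy, xx = np.mgrid[0:128, 0:128]
      b1 = 1.0 + 0.2 * np.sin(xx / 40.0)
      spgr_45 = 100.0 * np.sin(b1 * np.pi / 4)   # toy flip-angle dependence
      spgr_90 = 80.0 * np.sin(b1 * np.pi / 2)
      corr = b1_correction_map(spgr_45, spgr_90)
      # a T1-weighted SE image would then be divided by a suitably scaled corr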

  3. Effects of Field-Map Distortion Correction on Resting State Functional Connectivity MRI

    Directory of Open Access Journals (Sweden)

    Hiroki Togo

    2017-12-01

    Magnetic field inhomogeneities cause geometric distortions of the echo planar images used for functional magnetic resonance imaging (fMRI). To reduce this problem, distortion correction (DC) with a field map is widely used for both task and resting-state fMRI (rs-fMRI). Although DC with a field map has been reported to improve the quality of task fMRI, little is known about its effects on rs-fMRI. Here, we tested the influence of field-map DC on rs-fMRI results using two rs-fMRI datasets derived from 40 healthy subjects: one with DC (DC+) and the other without correction (DC−). Independent component analysis followed by the dual regression approach was used for evaluation of resting-state functional connectivity networks (RSN). We also obtained the ratio of low-frequency to high-frequency signal power (0.01–0.1 Hz and above 0.1 Hz, respectively; LFHF ratio) to assess the quality of rs-fMRI signals. Comparing RSN between the DC+ and DC− datasets, the default mode network showed more robust functional connectivity in the DC+ dataset than in the DC− dataset. The basal ganglia RSN showed some decreases in functional connectivity, primarily in white matter, indicating imperfect registration/normalization without DC. Supplementary seed-based and simulation analyses supported the utility of DC. Furthermore, we found a higher LFHF ratio after field-map correction in the anterior cingulate cortex, posterior cingulate cortex, ventral striatum, and cerebellum. In conclusion, field-map DC improved detection of functional connectivity derived from low-frequency rs-fMRI signals. We encourage researchers to include a DC step in the preprocessing pipeline of rs-fMRI analysis.
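
    The LFHF ratio used here as a signal-quality index is straightforward to compute per voxel; a sketch (our implementation, with the repetition time and Welch parameters as assumptions):

      import numpy as np
      from scipy.signal import welch

      def lfhf_ratio(ts, tr=2.0):
          """Low- (0.01-0.1 Hz) over high- (>0.1 Hz) frequency power of one
          voxel time series, following the definition in the abstract."""
          fs = 1.0 / tr                               # sampling rate from TR
          f, pxx = welch(ts, fs=fs, nperseg=min(len(ts), 128))
          low = pxx[(f >= 0.01) & (f <= 0.1)].sum()
          high = pxx[f > 0.1].sum()
          return low / max(high, 1e-12)

      # toy series: 0.05 Hz oscillation plus white noise, TR = 2 s
      rng = np.random.default_rng(0)
      n = 300
      ts = np.sin(2 * np.pi * 0.05 * np.arange(n) * 2.0) + rng.normal(0.0, 1.0, n)
      print(lfhf_ratio(ts, tr=2.0))                   # >> 1 for this signal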

  4. On the roles of direct feedback and error field correction in stabilizing resistive-wall modes

    International Nuclear Information System (INIS)

    In, Y.; Bogatu, I.N.; Kim, J.S.; Garofalo, A.M.; Jackson, G.L.; La Haye, R.J.; Schaffer, M.J.; Strait, E.J.; Lanctot, M.J.; Reimerdes, H.; Marrelli, L.; Martin, P.; Okabayashi, M.

    2010-01-01

    Active feedback control in the DIII-D tokamak has fully stabilized the current-driven ideal kink resistive-wall mode (RWM). While complete stabilization is known to require both low frequency error field correction (EFC) and high frequency feedback, unambiguous identification has been made about the distinctive role of each in a fully feedback-stabilized discharge. Specifically, the role of direct RWM feedback, which nullifies the RWM perturbation in a time scale faster than the mode growth time, cannot be replaced by low frequency EFC, which minimizes the lack of axisymmetry of external magnetic fields. (letter)

  5. Automated disposal of produced water from a coalbed methane well field, a case history

    International Nuclear Information System (INIS)

    Luckianow, B.J.; Findley, M.L.; Paschal, W.T.

    1994-01-01

    This paper provides an overview of the automated disposal system for produced water designed and operated by Taurus Exploration, Inc. This presentation draws from Taurus' case study in the planning, design, construction, and operation of production water disposal facilities for the Mt. Olive well field, located in the Black Warrior Basin of Alabama. The common method for disposing of water produced from coalbed methane wells in the Warrior Basin is to discharge into a receiving stream. The limiting factor in the discharge method is the capability of the receiving stream to assimilate the chloride component of the water discharged. During the winter and spring, the major tributaries of the Black Warrior River are capable of assimilating far more production water than operations can generate. During the summer and fall months, however, these same tributaries can approach near zero flow, resulting in insufficient flow for dilution. During such periods pumping shut-down within the well field can be avoided by routing production waters into a storage facility. This paper discusses the automated production water disposal system on Big Sandy Creek designed and operated by Taurus. This system allows for continuous discharge to the receiving stream, thus taking full advantage of Big Sandy Creek's assimilative capacity, while allowing a provision for excess produced water storage and future stream discharge

  6. Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

    Directory of Open Access Journals (Sweden)

    Manuel González-Rivero

    2016-01-01

    Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5% among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).

  7. Numerical correction of distorted images in full-field optical coherence tomography

    Science.gov (United States)

    Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha

    2012-03-01

    We propose a numerical method that can correct the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than its surrounding medium. It is analyzed that the focal plane of the imaging system is segregated from the imaging plane of the coherence-gated system due to the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits the distinct identification of the melanin granules inside the cortex layer of the hair shaft.
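
    A numerical correction of this kind can be sketched with the angular-spectrum form of the Fresnel-Kirchhoff integral (a generic refocusing kernel, not the authors' exact implementation; the defocus estimate below is our approximation):

      import numpy as np

      def angular_spectrum_propagate(field, wavelength, dx, dz):
          """Numerically propagate a complex en-face field by a defocus
          distance dz, refocusing images blurred by the focal-plane /
          coherence-gate mismatch described in the abstract."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=dx)
          fy = np.fft.fftfreq(ny, d=dx)
          fxx, fyy = np.meshgrid(fx, fy)
          arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
          kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          transfer = np.exp(1j * kz * dz)
          transfer[arg < 0.0] = 0.0                # drop evanescent components
          return np.fft.ifft2(np.fft.fft2(field) * transfer)

      # the defocus from an RI mismatch can be estimated roughly as
      # dz ~ depth * (n_sample - n_medium) / n_medium   (our approximation)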

  8. Correcting PSP electron measurements for the effects of spacecraft electrostatic and magnetic fields

    Science.gov (United States)

    McGinnis, D.; Halekas, J. S.; Larson, D. E.; Whittlesey, P. L.; Kasper, J. C.

    2017-12-01

    The near-Sun environment which the Parker Solar Probe will investigate presents a unique challenge for the measurement of thermal and suprathermal electrons. Over one orbital period, the ionizing photon flux and charged particle densities vary to such an extent that the spacecraft could charge to electrostatic potentials ranging from a few volts to tens of volts or more, and it may even develop negative electrostatic potentials near closest approach. In addition, significant permanent magnetic fields from spacecraft components will perturb thermal electron trajectories. Given these effects, electron distribution function (EDF) measurements made by the SWEAP/SPAN electron sensors will be significantly affected. It is thus important to try to understand the extent and nature of such effects, and to remediate them as much as possible. To this end, we have incorporated magnetic fields and a model electrostatic potential field into particle tracing simulations to predict particle trajectories through the near spacecraft environment. These simulations allow us to estimate how the solid angle elements measured by SPAN deflect and stretch in the presence of these fields and therefore how and to what extent EDF measurements will be distorted. In this work, we demonstrate how this technique can be used to produce a `dewarping' correction factor. Further, we show that this factor can correct synthetic datasets simulating the warped EDFs that the SPAN instruments are likely to measure over a wide range of spacecraft potentials and plasma Debye lengths.
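
    Particle tracing of this kind is commonly done with the Boris scheme; a self-contained sketch (the integrator choice and the field values are our assumptions — the abstract specifies neither):

      import numpy as np

      Q_E, M_E = -1.602e-19, 9.109e-31   # electron charge (C) and mass (kg)

      def boris_push(x, v, e_field, b_field, dt, steps):
          """Trace one electron through given E and B fields with the
          standard Boris scheme; e_field/b_field are callables of position."""
          qmdt2 = Q_E * dt / (2.0 * M_E)
          traj = [x.copy()]
          for _ in range(steps):
              e, b = e_field(x), b_field(x)
              v_minus = v + qmdt2 * e                  # half electric kick
              t = qmdt2 * b                            # magnetic rotation vector
              s = 2.0 * t / (1.0 + np.dot(t, t))
              v_prime = v_minus + np.cross(v_minus, t)
              v_plus = v_minus + np.cross(v_prime, s)  # full magnetic rotation
              v = v_plus + qmdt2 * e                   # second half electric kick
              x = x + v * dt
              traj.append(x.copy())
          return np.array(traj)

      # uniform spacecraft-like fields (illustrative values only)
      traj = boris_push(np.zeros(3), np.array([1e5, 0.0, 0.0]),
                        lambda x: np.array([0.0, 10.0, 0.0]),   # E in V/m
                        lambda x: np.array([0.0, 0.0, 5e-5]),   # B in T
                        dt=1e-9, steps=1000)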

  9. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography comes closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field-position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise-constant models in which static input models were assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise-constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  10. Bright-field scanning confocal electron microscopy using a double aberration-corrected transmission electron microscope.

    Science.gov (United States)

    Wang, Peng; Behan, Gavin; Kirkland, Angus I; Nellist, Peter D; Cosgriff, Eireann C; D'Alfonso, Adrian J; Morgan, Andrew J; Allen, Leslie J; Hashimoto, Ayako; Takeguchi, Masaki; Mitsuishi, Kazutaka; Shimojo, Masayuki

    2011-06-01

    Scanning confocal electron microscopy (SCEM) offers a mechanism for three-dimensional imaging of materials, which makes use of the reduced depth of field in an aberration-corrected transmission electron microscope. The simplest configuration of SCEM is the bright-field mode. In this paper we present experimental data and simulations showing the form of bright-field SCEM images. We show that the depth dependence of the three-dimensional image can be explained in terms of two-dimensional images formed in the detector plane. For a crystalline sample, this so-called probe image is shown to be similar to a conventional diffraction pattern. Experimental results and simulations show how the diffracted probes in this image are elongated in thicker crystals and the use of this elongation to estimate sample thickness is explored. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. On the covariant formalism of the effective field theory of gravity and leading order corrections

    DEFF Research Database (Denmark)

    Codello, Alessandro; Jain, Rajeev Kumar

    2016-01-01

    We construct the covariant effective field theory of gravity as an expansion in inverse powers of the Planck mass, identifying the leading and next-to-leading quantum corrections. We determine the form of the effective action for the cases of pure gravity with cosmological constant as well as gravity coupled to matter. By means of heat kernel methods we renormalize and compute the leading quantum corrections to quadratic order in a curvature expansion. The final effective action in our covariant formalism is generally non-local and can be readily used to understand the phenomenology on different spacetimes. In particular, we point out that on curved backgrounds the observable leading quantum gravitational effects are less suppressed than on Minkowski spacetime.

  12. On the covariant formalism of the effective field theory of gravity and leading order corrections

    International Nuclear Information System (INIS)

    Codello, Alessandro; Jain, Rajeev Kumar

    2016-01-01

    We construct the covariant effective field theory of gravity as an expansion in inverse powers of the Planck mass, identifying the leading and next-to-leading quantum corrections. We determine the form of the effective action for the cases of pure gravity with cosmological constant as well as gravity coupled to matter. By means of heat kernel methods we renormalize and compute the leading quantum corrections to quadratic order in a curvature expansion. The final effective action in our covariant formalism is generally non-local and can be readily used to understand the phenomenology on different spacetimes. In particular, we point out that on curved backgrounds the observable leading quantum gravitational effects are less suppressed than on Minkowski spacetime. (paper)

  13. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  14. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel-beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  15. Field demonstration of automated demand response for both winter and summer events in large buildings in the Pacific Northwest

    Energy Technology Data Exchange (ETDEWEB)

    Piette, M.A.; Kiliccote, S.; Dudley, J.H. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2013-11-15

    There are growing strains on the electric grid as cooling peaks grow and equipment ages. Increased penetration of renewables on the grid is also straining electricity supply systems, and the need for flexible demand is growing. This paper summarizes results of a series of field tests of automated demand response systems in large buildings in the Pacific Northwest. The objective of the research was twofold. One objective was to evaluate the use of demand response automation technologies. A second objective was to evaluate control strategies that could change the electric load shape in both winter and summer conditions. Winter conditions focused on cold winter mornings, a time when the electric grid is often stressed. The summer tests evaluated DR strategies in the afternoon. We found that we could automate both winter and summer control strategies with the open automated demand response communication standard. The buildings were able to provide significant demand response in both winter and summer events.

  16. The importance of matched poloidal spectra to error field correction in DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    Paz-Soldan, C., E-mail: paz-soldan@fusion.gat.com; Lanctot, M. J.; Buttery, R. J.; La Haye, R. J.; Strait, E. J. [General Atomics, P.O. Box 85608, San Diego, California 92121 (United States); Logan, N. C.; Park, J.-K.; Solomon, W. M. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Shiraki, D.; Hanson, J. M. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 (United States)

    2014-07-15

    Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable “dominant” mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios, the plasma is found to be, respectively, 7× and 20× less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.

  17. [Transient elevation of intraocular pressure in primary open-angle glaucoma patients after automated visual field examination in the winter].

    Science.gov (United States)

    Nishino, Kazuaki; Yoshida, Fujiko; Nitta, Akari; Saito, Mieko; Saito, Kazuuchi

    2013-12-01

    To evaluate retrospectively seasonal fluctuations of transient intraocular pressure (IOP) elevation after automated visual field examination in patients with primary open-angle glaucoma (POAG). We reviewed 53 consecutive patients with POAG who visited Kaimeido Ophthalmic and Dental Clinic from January 2011 to March 2013, 21 men and 32 women aged 67.7 ± 11.2 years. The patients were divided into 4 groups (spring, summer, autumn, and winter) according to the month of automated visual field examination, and both eyes of each patient were enrolled. IOP was measured immediately after automated visual field examination (vf IOP) and compared with the average IOP from the previous 3 months (pre IOP) and with the average IOP from the following 3 months (post IOP) in each season. The IOP elevation rate was defined as (vf IOP − pre IOP)/pre IOP × 100% and calculated for each season (paired t test). Additionally, the correlation between mean deviation (MD) and IOP elevation rate was evaluated (single regression analysis). Exclusion criteria were patients who received cataract surgery during this study or had a history of any previous glaucoma surgery. The automated visual field test was performed with a Humphrey field analyzer and the Central 30-2 FASTPAC threshold program. The average vf IOP was 14.5 ± 2.5 mmHg, significantly higher than the pre IOP of 13.8 ± 2.4 mmHg. Transient IOP elevation thus occurred after automated visual field examination, especially in the winter but not in the summer.
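
    The elevation-rate definition and the paired test are simple to reproduce; a sketch with made-up illustrative numbers (not the study's data):

      import numpy as np
      from scipy.stats import ttest_rel

      def iop_elevation_rate(vf_iop, pre_iop):
          """(vf IOP - pre IOP) / pre IOP * 100%, as defined in the abstract."""
          vf_iop, pre_iop = np.asarray(vf_iop), np.asarray(pre_iop)
          return (vf_iop - pre_iop) / pre_iop * 100.0

      pre = np.array([13.5, 14.0, 12.8, 15.1])   # illustrative values only
      vf = np.array([14.2, 14.9, 13.1, 16.0])
      print(iop_elevation_rate(vf, pre))
      print(ttest_rel(vf, pre))                  # paired t test, as in the study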

  18. Industrial automation in floating production vessels for deep water oil and gas fields

    International Nuclear Information System (INIS)

    de Garcia, A.L.; Ferrante, A.J.

    1990-01-01

    The process supervision in offshore platforms was performed in the past through the use of local pneumatic instrumentation, based on relays, semi-graphic panels and button-operated control panels. Considering the advanced technology used in the new floating production projects for deep water, it became mandatory to develop supervision systems capable of integrating different control panels, increasing the level of monitoring and reducing the number of operators and control rooms. From the point of view of field integration, a standardized architecture makes possible the communication between different production platforms and the regional headquarters, where all the equipment and support infrastructure for the computerized network is installed. This paper describes the characteristics of the initial systems, the main problems observed, the studies performed and the results obtained in relation to the design and implementation of computational systems with open architecture for automation of process control in floating production systems for deep water in Brazil.

  19. Automated fault extraction and classification using 3-D seismic data for the Ekofisk field development

    Energy Technology Data Exchange (ETDEWEB)

    Signer, C.; Nickel, M.; Randen, T.; Saeter, T.; Soenneland, H.H.

    1998-12-31

    Mapping of fractures is important for the prediction of fluid flow in many reservoir types. The fluid flow depends mainly on the efficiency of the reservoir seals. Improved spatial mapping of the open and closed fracture systems will allow a better prediction of the fluid flow pattern. The primary objective of this paper is to present fracture characterization at the reservoir scale combined with seismic facies mapping. The complexity of the giant Ekofisk field on the Norwegian continental shelf provides an ideal framework for testing the validity and applicability of an automated seismic fault and fracture detection and mapping tool. The mapping of the faults can be based on seismic attribute grids, which means that attribute responses related to faults are extracted along key horizons interpreted in the reservoir interval. 3 refs., 3 figs.

  20. Automated Agricultural Field Extraction from Multi-temporal Web Enabled Landsat Data

    Science.gov (United States)

    Yan, L.; Roy, D. P.

    2012-12-01

    Agriculture has caused significant anthropogenic surface change. In many regions agricultural field sizes may be increasing to maximize yields and reduce costs resulting in decreased landscape spatial complexity and increased homogenization of land uses with potential for significant biogeochemical and ecological effects. To date, studies of the incidence, drivers and impacts of changing field sizes have not been undertaken over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. The Landsat series of satellites provides near-global coverage, long term, and appropriate spatial resolution (30m) satellite data to document changing field sizes. The recent free availability of all the Landsat data in the U.S. Landsat archive now provides the opportunity to study field size changes in a global and consistent way. Commercial software can be used to extract fields from Landsat data but are inappropriate for large area application because they require considerable human interaction. This paper presents research to develop and validate an automated computational Geographic Object Based Image Analysis methodology to extract agricultural fields and derive field sizes from Web Enabled Landsat Data (WELD) (http://weld.cr.usgs.gov/). WELD weekly products (30m reflectance and brightness temperature) are classified into Satellite Image Automatic Mapper™ (SIAM™) spectral categories and an edge intensity map and a map of the probability of each pixel being agricultural are derived from five years of 52 weeks of WELD and corresponding SIAM™ data. These data are fused to derive candidate agriculture field segments using a variational region-based geometric active contour model. Geometry-based algorithms are used to decompose connected segments belonging to multiple fields into coherent isolated field objects with a divide and conquer strategy to detect and merge partial circle

  1. Bright-field scanning confocal electron microscopy using a double aberration-corrected transmission electron microscope

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Peng; Behan, Gavin; Kirkland, Angus I. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Nellist, Peter D., E-mail: peter.nellist@materials.ox.ac.uk [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Cosgriff, Eireann C.; D'Alfonso, Adrian J.; Morgan, Andrew J.; Allen, Leslie J. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Hashimoto, Ayako [Advanced Nano-characterization Center, National Institute for Materials Science (NIMS), 3-13 Sakura, Tsukuba 305-0003 (Japan); Takeguchi, Masaki [Advanced Nano-characterization Center, National Institute for Materials Science (NIMS), 3-13 Sakura, Tsukuba 305-0003 (Japan); High Voltage Electron Microscopy Station, NIMS, 3-13 Sakura, Tsukuba 305-0003 (Japan); Mitsuishi, Kazutaka [Advanced Nano-characterization Center, National Institute for Materials Science (NIMS), 3-13 Sakura, Tsukuba 305-0003 (Japan); Quantum Dot Research Center, NIMS, 3-13 Sakura, Tsukuba 305-0003 (Japan); Shimojo, Masayuki [High Voltage Electron Microscopy Station, NIMS, 3-13 Sakura, Tsukuba 305-0003 (Japan); Advanced Science Research Laboratory, Saitama Institute of Technology, 1690 Fusaiji, Fukaya 369-0293 (Japan)

    2011-06-15

    Scanning confocal electron microscopy (SCEM) offers a mechanism for three-dimensional imaging of materials, which makes use of the reduced depth of field in an aberration-corrected transmission electron microscope. The simplest configuration of SCEM is the bright-field mode. In this paper we present experimental data and simulations showing the form of bright-field SCEM images. We show that the depth dependence of the three-dimensional image can be explained in terms of two-dimensional images formed in the detector plane. For a crystalline sample, this so-called probe image is shown to be similar to a conventional diffraction pattern. Experimental results and simulations show how the diffracted probes in this image are elongated in thicker crystals and the use of this elongation to estimate sample thickness is explored. -- Research Highlights: → The confocal probe image in a scanning confocal electron microscopy image reveals information about the thickness and height of the crystalline layer. → The form of the contrast in a three-dimensional bright-field scanning confocal electron microscopy image can be explained in terms of the confocal probe image. → Despite the complicated form of the contrast in bright-field scanning confocal electron microscopy, we see that depth information is transferred on a 10 nm scale.

  2. A Conceptual Design Study for the Error Field Correction Coil Power Supply in JT-60SA

    International Nuclear Information System (INIS)

    Matsukawa, M.; Shimada, K.; Yamauchi, K.; Gaio, E.; Ferro, A.; Novello, L.

    2013-01-01

    This paper describes a conceptual design study for the circuit configuration of the Error Field Correction Coil (EFCC) power supply (PS) to maximize the expected performance with reasonable cost in JT-60SA. The EFCC consists of eighteen sector coils installed inside the vacuum vessel, six in the toroidal direction and three in the poloidal direction, each one rated for 30 kA-turn. As a result, star point connection is proposed for each group of six EFCC coils installed cyclically in the toroidal direction for decoupling with poloidal field coils. In addition, a six phase inverter which is capable of controlling each phase current was chosen as PS topology to ensure higher flexibility of operation with reasonable cost.

  3. Method for determining correction factors induced by irradiation of ionization chamber cables in large radiation field

    International Nuclear Information System (INIS)

    Rodrigues, L.L.C.

    1988-01-01

    A simple method was developed for hospital physicists to follow during large-radiation-field dosimetry, in order to evaluate the effects of irradiating cables, connectors and extension cables and to determine correction factors for each system or geometry. All quality control tests were performed according to the International Electrotechnical Commission recommendations for three clinical dosimeters. Photon and electron irradiation effects on cables, connectors and extension cables were investigated under different experimental conditions by means of measurements of chamber sensitivity to a standard 90Sr radiation source. The radiation-induced leakage current was also measured for cables, connectors and extension cables irradiated by photons and electrons. All measurements were performed under standard dosimetry conditions. Finally, measurements were performed in large fields. Cable factors and leakage factors were determined as the ratio between chamber responses for irradiated and unirradiated cables. (author) [pt]

  4. Generalized second law of thermodynamics for non-canonical scalar field model with corrected-entropy

    International Nuclear Information System (INIS)

    Das, Sudipta; Mamon, Abdulla Al; Debnath, Ujjal

    2015-01-01

    In this work, we have considered a non-canonical scalar field dark energy model in the framework of flat FRW background. It has also been assumed that the dark matter sector interacts with the non-canonical dark energy sector through some interaction term. Using the solutions for this interacting non-canonical scalar field dark energy model, we have investigated the validity of generalized second law (GSL) of thermodynamics in various scenarios using first law and area law of thermodynamics. For this purpose, we have assumed two types of horizons viz apparent horizon and event horizon for the universe and using first law of thermodynamics, we have examined the validity of GSL on both apparent and event horizons. Next, we have considered two types of entropy-corrections on apparent and event horizons. Using the modified area law, we have examined the validity of GSL of thermodynamics on apparent and event horizons under some restrictions of model parameters. (orig.)
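
    Schematically, and in our own notation (the abstract does not spell out its formulas; the logarithmic term is one common choice of entropy correction, assumed here for illustration), the GSL with a corrected horizon entropy reads:

      S_h = \frac{A}{4G} + \alpha \ln\frac{A}{4G}, \qquad A = 4\pi R_h^2,
      \qquad \frac{d}{dt}\bigl(S_h + S_{fluid}\bigr) \ge 0,

    with the fluid entropy evolving according to the first law (Gibbs equation), $T\, dS_{fluid} = dE + p\, dV$.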

  5. Automated jitter correction for IR image processing to assess the quality of W7-X high heat flux components

    International Nuclear Information System (INIS)

    Greuner, H; De Marne, P; Herrmann, A; Boeswirth, B; Schindler, T; Smirnow, M

    2009-01-01

    An automated IR image processing method was developed to evaluate the surface temperature distribution of cyclically loaded high heat flux (HHF) plasma facing components. IPP Garching will perform the HHF testing of a high percentage of the series production of the WENDELSTEIN 7-X (W7-X) divertor targets to minimize the number of undiscovered uncertainties in the finally installed components. The HHF tests will be performed as quality assurance (QA) complementary to the non-destructive examination (NDE) methods used during the manufacturing. The IR analysis of an HHF-loaded component detects growing debonding of the plasma facing material, made of carbon fibre composite (CFC), after a few thermal cycles. In the case of the prototype testing, the IR data was processed manually. However, a QA method requires a reliable, reproducible and efficient automated procedure. Using the example of the HHF testing of W7-X pre-series target elements, the paper describes the developed automated IR image processing method. The algorithm is based on an iterative two-step correlation analysis with an individually defined reference pattern for the determination of the jitter.
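
    One standard way to realize such a correlation-based jitter estimate is phase correlation; a sketch (our stand-in, since the IPP algorithm itself is not reproduced in the abstract):

      import numpy as np

      def estimate_jitter(reference, frame):
          """Estimate the integer (dy, dx) shift of `frame` against a
          reference pattern via phase correlation."""
          cross = np.conj(np.fft.fft2(reference)) * np.fft.fft2(frame)
          cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
          corr = np.fft.ifft2(cross).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          if dy > reference.shape[0] // 2:            # wrap to negative shifts
              dy -= reference.shape[0]
          if dx > reference.shape[1] // 2:
              dx -= reference.shape[1]
          return dy, dx

      # frames can then be re-aligned with np.roll(frame, (-dy, -dx), (0, 1))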

  6. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    Science.gov (United States)

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377
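
    The core of such a correction — fitting a thin-plate spline to a segmented surface and subtracting it — can be sketched as follows (a single-spline toy version of the two-stage method; the control-grid size and smoothing factor are assumptions):

      import numpy as np
      from scipy.interpolate import Rbf

      def axial_artifact_estimate(surface, n_ctrl=12):
          """Fit a thin-plate spline to a segmented surface (depth values z
          over the B-scan grid); shifting each A-scan by the negative of the
          returned estimate flattens the volume."""
          ny, nx = surface.shape
          ys = np.linspace(0, ny - 1, n_ctrl).astype(int)
          xs = np.linspace(0, nx - 1, n_ctrl).astype(int)
          gy, gx = np.meshgrid(ys, xs, indexing="ij")   # sparse control grid
          spline = Rbf(gx.ravel(), gy.ravel(), surface[gy, gx].ravel(),
                       function="thin_plate", smooth=1.0)
          yy, xx = np.mgrid[0:ny, 0:nx]
          return spline(xx.ravel(), yy.ravel()).reshape(ny, nx)

      # synthetic tilted surface with noise
      rng = np.random.default_rng(1)
      surf = 40.0 + 0.1 * np.arange(200)[None, :] + rng.normal(0.0, 0.5, (100, 200))
      artifact = axial_artifact_estimate(surf)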

  7. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    Science.gov (United States)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  8. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    Directory of Open Access Journals (Sweden)

    Johann Christian Rose

    2016-12-01

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research has therefore focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition, is used to visually capture a whole vine-row canopy with georeferenced RGB images. In the first post-processing step, these images were used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered using the classification results, obtaining the quantity of grape bunches, berries and the berry diameter.

  9. UAS imaging for automated crop lodging detection: a case study over an experimental maize field

    Science.gov (United States)

    Chu, Tianxing; Starek, Michael J.; Brewer, Michael J.; Masiane, Tiisetso; Murray, Seth C.

    2017-05-01

    Lodging has been recognized as one of the major destructive factors for crop quality and yield, particularly in corn. A variety of contributing causes, e.g. disease and/or pests, weather conditions, excessive nitrogen, and high plant density, may lead to lodging before the harvesting season. Traditional lodging detection strategies mainly rely on ground data collection, which is insufficient in efficiency and accuracy. To address this problem, this research focuses on the use of unmanned aircraft systems (UAS) for automated detection of crop lodging. The study was conducted over an experimental corn field at the Texas A&M AgriLife Research and Extension Center at Corpus Christi, Texas, during the growing season of 2016. Nadir-view images of the corn field were taken by small UAS platforms equipped with consumer-grade RGB and NIR cameras on a weekly basis, enabling a timely observation of plant growth. 3D structural information of the plants was reconstructed using structure-from-motion photogrammetry. The structural information was then used to calculate crop height and growth rate. A lodging index for detecting corn lodging was proposed afterwards. Ground truth data on lodging were collected on a per-row basis and used for fair assessment and tuning of the detection algorithm. Results show the UAS-measured height correlates well with the ground-measured height. More importantly, the lodging index can effectively reflect the severity of corn lodging and yield after harvesting.
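
    The abstract defines a lodging index without giving its formula; one plausible, purely hypothetical variant based on SfM-derived canopy heights is the fraction of pixels whose height drops sharply between two flights:

      import numpy as np

      def lodging_index(height_before, height_after, rel_drop=0.5):
          """Hypothetical per-row index: fraction of canopy-height pixels
          that lost more than `rel_drop` of their height between flights."""
          drop = (height_before - height_after) / np.maximum(height_before, 1e-3)
          return float(np.mean(drop > rel_drop))

      # canopy height models (m), e.g. DSM - DTM from two UAS flights
      h0 = np.full((50, 300), 2.2)
      h1 = h0.copy()
      h1[:, :90] = 0.8                   # one side of the plot lodged
      print(lodging_index(h0, h1))       # -> 0.3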

  10. Automated measurement of spatial preference in the open field test with transmitted lighting.

    Science.gov (United States)

    Kulikov, Alexander V; Tikhonova, Maria A; Kulikov, Victor A

    2008-05-30

    A new modification of the open field test was designed to improve automation of the test. The main innovations were: (1) transmitted lighting and (2) estimation of the probability of finding pixels associated with an animal in a selected region of the arena as an objective index of spatial preference. Transmitted (inverted) lighting significantly improved the contrast between an animal and the arena and allowed white animals to be tracked as effectively as colored ones. Probability as a measure of preference for a selected region was mathematically proved and experimentally verified. A good correlation between probability and classic indices of spatial preference (number of region entries and time spent therein) was shown. The algorithm for calculating the probability of finding pixels associated with an animal in the selected region was implemented in the EthoStudio software. Significant interstrain differences in locomotion and central zone preference (an index of anxiety) were shown using the inverted lighting and the EthoStudio software in mice of six inbred strains. The effects of arena shape (circle or square) and the presence of a novel object in the center of the arena on open field behavior in mice were also studied.
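
    The probability index described here reduces, per frame, to the fraction of animal-classified pixels that fall inside the selected region; a sketch (our reimplementation of the idea, not EthoStudio code):

      import numpy as np

      def region_preference(frames, region_mask):
          """Mean fraction, over frames, of animal-classified pixels that
          lie inside the selected region of the arena."""
          probs = []
          for frame in frames:                    # frame: boolean animal mask
              total = frame.sum()
              if total > 0:
                  probs.append(frame[region_mask].sum() / total)
          return float(np.mean(probs))

      # toy example: central zone of a square arena
      yy, xx = np.mgrid[0:100, 0:100]
      center = (np.abs(yy - 50) < 25) & (np.abs(xx - 50) < 25)
      frames = [np.zeros((100, 100), bool) for _ in range(5)]
      for i, f in enumerate(frames):
          f[10 + 15 * i:14 + 15 * i, 48:52] = True   # animal blob moving down
      print(region_preference(frames, center))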

  11. Automated Identification of Northern Leaf Blight-Infected Maize Plants from Field Imagery Using Deep Learning.

    Science.gov (United States)

    DeChant, Chad; Wiesner-Hanks, Tyr; Chen, Siyuan; Stewart, Ethan L; Yosinski, Jason; Gore, Michael A; Nelson, Rebecca J; Lipson, Hod

    2017-11-01

    Northern leaf blight (NLB) can cause severe yield loss in maize; however, scouting large areas to accurately diagnose the disease is time consuming and difficult. We demonstrate a system capable of automatically identifying NLB lesions in field-acquired images of maize plants with high reliability. This approach uses a computational pipeline of convolutional neural networks (CNNs) that addresses the challenges of limited data and the myriad irregularities that appear in images of field-grown plants. Several CNNs were trained to classify small regions of images as containing NLB lesions or not; their predictions were combined into separate heat maps, then fed into a final CNN trained to classify the entire image as containing diseased plants or not. The system achieved 96.7% accuracy on test set images not used in training. We suggest that such systems mounted on aerial- or ground-based vehicles can help in automated high-throughput plant phenotyping, precision breeding for disease resistance, and reduced pesticide use through targeted application across a variety of plant and disease categories.

  12. Software development based on high speed PC oscilloscope for automated pulsed magnetic field measurement system

    International Nuclear Information System (INIS)

    Sun Yuxiang; Shang Lei; Li Ji; Ge Lei

    2011-01-01

    This paper introduces a software development method, based on a high-speed PC oscilloscope, for an automated pulsed magnetic field measurement system. The design improves on its predecessor, and a high-speed virtual oscilloscope has been used in this field for the first time. The design realizes automatic data acquisition, processing, analysis, and storage. Automated point checking reduces the workload, and the use of a precise motion bench increases positioning accuracy. The software acquires data from the PC oscilloscope by calling DLLs and provides the oscilloscope's functions, such as trigger, range, and sample rate settings. Spline interpolation and a band-stop filter are used to denoise the signals. The core of the software is a state machine that controls the motion of the stepper motors and the data acquisition, and stores the data automatically. NI Vision Acquisition Software and the Database Connectivity Toolkit provide video surveillance of the laboratory and MySQL database connectivity. The raw and processed signals are compared in this paper; the waveform is greatly improved by the signal processing. (authors)
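
    A brief sketch of the two denoising steps named above (band-stop filtering and spline interpolation), using standard SciPy routines; the sample rate, stop band, and test signal are illustrative, not the system's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import CubicSpline

fs = 1e6                                   # sample rate, Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)
raw = np.exp(-t / 3e-3) + 0.2 * np.sin(2 * np.pi * 50e3 * t)  # pulse + noise

# Band-stop filter around the narrowband interference.
b, a = butter(4, [45e3, 55e3], btype='bandstop', fs=fs)
filtered = filtfilt(b, a, raw)

# Spline interpolation onto a denser grid for a smooth output waveform.
spline = CubicSpline(t, filtered)
t_fine = np.linspace(t[0], t[-1], 4 * len(t))
smooth = spline(t_fine)
```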

  13. Joint deformable liver registration and bias field correction for MR-guided HDR brachytherapy.

    Science.gov (United States)

    Rak, Marko; König, Tim; Tönnies, Klaus D; Walke, Mathias; Ricke, Jens; Wybranski, Christian

    2017-12-01

    In interstitial high-dose rate brachytherapy, liver cancer is treated by internal radiation, requiring percutaneous placement of applicators within or close to the tumor. To maximize utility, the optimal applicator configuration is pre-planned on magnetic resonance images. The pre-planned configuration is then implemented via a magnetic resonance-guided intervention. Mapping the pre-planning information onto interventional data would reduce the radiologist's cognitive load during the intervention and could possibly minimize discrepancies between optimally pre-planned and actually placed applicators. We propose a fast and robust two-step registration framework suitable for interventional settings: first, we utilize a multi-resolution rigid registration to correct for differences in patient positioning (rotation and translation). Second, we employ a novel iterative approach alternating between bias field correction and Markov random field deformable registration in a multi-resolution framework to compensate for non-rigid movements of the liver, the tumors and the organs at risk. In contrast to existing pre-correction methods, our multi-resolution scheme can recover bias field artifacts of different extents at marginal computational costs. We compared our approach to deformable registration via B-splines, demons and the SyN method on 22 registration tasks from eleven patients. Results showed that our approach is more accurate than the contenders for liver as well as for tumor tissues. We yield average liver volume overlaps of 94.0 ± 2.7% and average surface-to-surface distances of 2.02 ± 0.87 mm and 3.55 ± 2.19 mm for liver and tumor tissue, respectively. The reported distances are close to (or even below) the slice spacing (2.5-3.0 mm) of our data. Our approach is also the fastest, taking 35.8 ± 12.8 s per task. The presented approach is sufficiently accurate to map information available from brachytherapy pre-planning onto interventional data.

  14. Drift correction for single-molecule imaging by molecular constraint field, a distance minimum metric

    International Nuclear Information System (INIS)

    Han, Renmin; Wang, Liansan; Xu, Fan; Zhang, Yongdeng; Zhang, Mingshu; Liu, Zhiyong; Ren, Fei; Zhang, Fa

    2015-01-01

    The recent development of far-field optical microscopy (single-molecule imaging techniques) has overcome the diffraction barrier of light and improved image resolution by a factor of ten compared with conventional light microscopy. These techniques utilize the stochastic switching of probe molecules to overcome the diffraction limit and determine the precise localizations of molecules, which often requires a long image acquisition time. However, long acquisition times increase the risk of sample drift, and in high resolution microscopy sample drift decreases the image resolution. In this paper, we propose a novel metric based on the distance between molecules to solve the drift correction problem. The proposed metric directly uses the position information of molecules to estimate the frame drift. We also designed an algorithm to implement the metric for general drift correction applications. Our method has two advantages: first, because it does not require spatial binning of molecule positions but operates directly on the positions, it is more natural for single-molecule imaging techniques; second, it can estimate drift with a small number of positions in each temporal bin, which may extend its potential applications. The effectiveness of our method has been demonstrated on both simulated data and experimental single-molecule images.
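
    The distance-minimum idea can be sketched as follows: the drift of a temporal bin of localizations is taken to be the shift minimizing the mean nearest-neighbor distance to a reference bin. The grid search and its bounds are illustrative; the paper's actual optimizer is not specified in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_drift(ref_xy, bin_xy, search=50.0, step=2.0):
    """ref_xy, bin_xy: (N, 2) arrays of localization coordinates (e.g., nm)."""
    ref_xy = np.asarray(ref_xy, float)
    bin_xy = np.asarray(bin_xy, float)
    tree = cKDTree(ref_xy)
    best, best_cost = (0.0, 0.0), np.inf
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            d, _ = tree.query(bin_xy + [dx, dy])
            if d.mean() < best_cost:
                best, best_cost = (dx, dy), d.mean()
    return best  # shift that best re-aligns this bin with the reference
```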

  15. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    NARCIS (Netherlands)

    Antony, Bhavna; Abramoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct

  16. Automated correction of spin-history related motion artefacts in fMRI: Simulated and phantom data

    NARCIS (Netherlands)

    Muresan, L; Renken, R.; Roerdink, J.B.T.M.; Duifhuis, H.

    This paper concerns the problem of correcting spin-history artefacts in fMRI data. We focus on the influence of through-plane motion on the history of magnetization. A change in object position will disrupt the tissue’s steady-state magnetization. The disruption will propagate to the next few

  17. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    Science.gov (United States)

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
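
    A sketch of how a site-specific correction could be fit from the paired filtered/unfiltered measurements described above; the linear form and variable names are assumptions for illustration.

```python
import numpy as np

def fit_fdom_correction(turb, fdom_unfilt, fdom_filt):
    """Fit fDOM signal loss vs. turbidity from paired filtered/unfiltered
    in situ measurements; a linear form is assumed for illustration."""
    loss = np.asarray(fdom_filt, float) - np.asarray(fdom_unfilt, float)
    m, b = np.polyfit(np.asarray(turb, float), loss, 1)
    return m, b

def correct_fdom(fdom_unfilt, turb, m, b):
    """Add back the turbidity-dependent loss to the raw fDOM signal."""
    return np.asarray(fdom_unfilt, float) + m * np.asarray(turb, float) + b
```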

  18. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143-155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060-1075; in FreeSurfer); and Brain Surface Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...

  19. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images.

    Science.gov (United States)

    Serag, Ahmed; Macnaught, Gillian; Denison, Fiona C; Reynolds, Rebecca M; Semple, Scott I; Boardman, James P

    2017-01-01

    Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of the fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of each containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved complete brain localization in 96% of cases on a database of 104 MRI scans acquired at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random-forest-based regression methods, and the proposed method showed superior performance. We also show the application of the proposed method to the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.
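
    A simplified sketch of the 3D-HOG scoring idea: gradients are binned into an orientation histogram per window, and the best-scoring window is kept. Real 3D HOG bins over two angles and uses a trained scorer; this compresses both for brevity, and all parameters are illustrative.

```python
import numpy as np

def hog3d(vol, bins=9):
    """Orientation histogram of gradient directions, magnitude-weighted.
    Real 3D HOG bins over two angles; one in-plane angle is used here."""
    gz, gy, gx = np.gradient(vol.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    theta = np.arctan2(gy, gx)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def best_window(vol, score, win=(32, 32, 32), stride=16):
    """Slide a window over the volume; return the corner with the top score."""
    best, best_s = None, -np.inf
    for z in range(0, vol.shape[0] - win[0] + 1, stride):
        for y in range(0, vol.shape[1] - win[1] + 1, stride):
            for x in range(0, vol.shape[2] - win[2] + 1, stride):
                s = score(hog3d(vol[z:z+win[0], y:y+win[1], x:x+win[2]]))
                if s > best_s:
                    best, best_s = (z, y, x), s
    return best
```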

  20. Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Ahmed Serag

    2017-01-01

    Full Text Available Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of the fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of each containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved complete brain localization in 96% of cases on a database of 104 MRI scans acquired at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random-forest-based regression methods, and the proposed method showed superior performance. We also show the application of the proposed method to the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.

  1. Automation of the CHARMM General Force Field (CGenFF) I: bond perception and atom typing.

    Science.gov (United States)

    Vanommeslaeghe, K; MacKerell, A D

    2012-12-21

    Molecular mechanics force fields are widely used in computer-aided drug design for the study of drug-like molecules alone or interacting with biological systems. In simulations involving biological macromolecules, the biological part is typically represented by a specialized biomolecular force field, while the drug is represented by a matching general (organic) force field. In order to apply these general force fields to an arbitrary drug-like molecule, functionality for assignment of atom types, parameters, and charges is required. In the present article, which is part I of a series of two, we present the algorithms for bond perception and atom typing for the CHARMM General Force Field (CGenFF). The CGenFF atom typer first associates attributes to the atoms and bonds in a molecule, such as valence, bond order, and ring membership, among others. Of note are a number of features that are specifically required for CGenFF. This information is then used by the atom typing routine to assign CGenFF atom types based on a programmable decision tree. This allows for straightforward implementation of CGenFF's complicated atom typing rules and for equally straightforward updating of the atom typing scheme as the force field grows. The presented atom typer was validated by assigning correct atom types to 477 model compounds included in the training set, as well as to 126 test-set molecules that were constructed specifically to verify its different components. The program may be utilized via an online implementation at https://www.paramchem.org/.
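
    The programmable decision tree can be illustrated with a toy rule list; the attribute names and the rule set below are illustrative stand-ins, not CGenFF's actual typing rules.

```python
# Illustrative rule list in the spirit of a programmable decision tree; the
# attributes and type names are stand-ins, not CGenFF's actual typing rules.
RULES = [
    (lambda a: a["element"] == "C" and a["aromatic"], "CG2R61"),
    (lambda a: a["element"] == "C" and a["in_ring"], "CG3C52"),
    (lambda a: a["element"] == "C", "CG331"),
    (lambda a: a["element"] == "N" and a["aromatic"], "NG2R60"),
    (lambda a: True, "UNTYPED"),
]

def assign_type(atom_attrs):
    """Walk the rules top-down; the first matching test wins."""
    for test, atype in RULES:
        if test(atom_attrs):
            return atype

print(assign_type({"element": "C", "aromatic": True, "in_ring": True}))
# -> CG2R61
```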

  2. Communication technology in process automation - more than a field bus; Kommunikationstechnik in der Prozessautomatisierung - mehr als Feldbus

    Energy Technology Data Exchange (ETDEWEB)

    Schwibach, Martin [BASF SE, Ludwigshafen (Germany)

    2009-07-01

    In recent years, communication technology has come to play an increasingly important role in process automation. For many decades, standardized 4-20 mA electrical signals, which had replaced earlier pneumatic systems, remained the communication basis for nearly all automation technology applications. It was only in the 1990s, along with the sudden, exponential growth of IT, that a wind of change began sweeping through automation technology as well. This has had a profound and lasting impact on system architectures. Terms like HART, OPC and field bus are now familiar automation vocabulary. Networked automation systems have become the norm, and cross-system communication, horizontal and vertical information integration, and remote access are routine. Reliability and availability, sustainability and investment protection: these are the basic requirements that every new solution has to fulfill before it can be successfully employed in process plants. What does this mean in concrete terms? The use of modern communication technologies must not bypass the requirements made on previous and existing data transmission technologies. All current requirement profiles for conventional transmission systems must also be satisfied by network-based technologies, from field bus through to wireless. This is particularly important with regard to functional safety, availability, explosion protection, and EMC. More advanced requirements, such as interoperability, IT security or diagnostics, must also be met. Over the years, NAMUR has published a series of papers on these topics which can serve as guidelines. The potential for using modern communication technologies clearly lies in those areas where conventional solutions have proved uneconomic or unsuitable. Rather than simply achieving fault-free system operation, the overriding goal when implementing and further developing communication structures in automation technology should therefore always be to create new value...

  3. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    Science.gov (United States)

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular (dense) tissue. A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then smooths the generated bias field using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to those from images corrected by N3 alone, by FCM alone, and by another method, coherent local intensity clustering (CLIC). The segmentation quality achieved with the different correction methods was evaluated and ranked by a radiologist. The authors demonstrate that the iterative N3+FCM correction method brightens the signal intensity of fatty tissue and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found the ranking (N3+FCM > N3 > FCM) in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, and (N3+FCM = N3 = FCM) in 2 breasts...
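
    A minimal sketch of the combined idea on a single 2D slice, with a smoothing-based stand-in for N3, a compact two-class fuzzy C-means, and Gaussian smoothing of the residual field. This is an editor's illustration only, not the authors' implementation (which also uses B-spline surface fitting).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fcm_2class(x, n_iter=20, m=2.0):
    """Two-class fuzzy C-means on intensities; u holds per-pixel memberships."""
    c = np.percentile(x, [25, 75]).astype(float)       # initial centroids
    for _ in range(n_iter):
        d = np.abs(x[..., None] - c) + 1e-9
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d**p * (1.0 / d**p).sum(-1, keepdims=True))
        c = (u**m * x[..., None]).sum((0, 1)) / (u**m).sum((0, 1))
    return u, c

def correct_bias(img):
    img = img.astype(float)
    coarse = gaussian_filter(img, 32) / (img.mean() + 1e-9)  # N3 stand-in
    x = img / np.maximum(coarse, 1e-9)
    u, c = fcm_2class(x)
    ideal = (u * c).sum(-1)               # piecewise-smooth "ideal" image
    bias = gaussian_filter(x / np.maximum(ideal, 1e-9), 16)  # residual field
    return x / np.maximum(bias, 1e-9)
```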

  4. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    Science.gov (United States)

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
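
    The core of the Chebyshev trick can be shown in isolation: for a given readout time t, the phase term exp(i*2*pi*f*t) is approximated by a low-order Chebyshev expansion in the off-resonance frequency f, which is what lets the conjugate-phase sum factorize into a small number of fast reconstructions. All values below are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f_max = 150.0                        # assumed field-map range, Hz
t = 10e-3                            # one sample time within the readout, s
f = np.linspace(-f_max, f_max, 512)
u = f / f_max                        # map frequencies onto [-1, 1]

order = 16                           # required order grows with |f_max * t|
cr = C.chebfit(u, np.cos(2 * np.pi * f * t), order)
ci = C.chebfit(u, np.sin(2 * np.pi * f * t), order)
approx = C.chebval(u, cr) + 1j * C.chebval(u, ci)

# Residual of the Chebyshev expansion of the phase term exp(i*2*pi*f*t):
print(np.abs(approx - np.exp(1j * 2 * np.pi * f * t)).max())
```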

  5. Costs to Automate Demand Response - Taxonomy and Results from Field Studies and Programs

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schetrit, Oren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kiliccote, Sila [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cheung, Iris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Li, Becky Z [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-07-31

    During the past decade, the technology to automate demand response (DR) in buildings and industrial facilities has advanced significantly. Automation allows rapid, repeatable, reliable operation. This study focuses on costs for DR automation in commercial buildings, with some discussion of residential buildings and industrial facilities. DR automation technology relies on numerous components, including communication systems, hardware and software gateways, standards-based messaging protocols, controls and integration platforms, and measurement and telemetry systems. This report compares cost data from several DR automation programs and pilot projects, evaluates trends in the cost per unit of DR and kilowatts (kW) available from automated systems, and applies a standard naming convention and classification or taxonomy for system elements. Median costs for the 56 installed automated DR systems studied here are about $200/kW. The deviation around this median is large, with costs in some cases being an order of magnitude greater or less than the median. This wide range is a result of variations in system age, size of load reduction, sophistication, and type of equipment included in the cost analysis. The costs to automate fast DR systems for ancillary services are not fully analyzed in this report because additional research is needed to determine the total cost to install, operate, and maintain these systems. However, recent research suggests that they could be developed at costs similar to those of existing hot-summer DR automation systems. This report considers installation and configuration costs and does not include the costs of owning and operating DR automation systems. Future analysis of the latter costs should include the costs to the building or facility manager as well as to the utility or third-party program manager.

  6. An investigation of coupling of the internal kink mode to error field correction coils in tokamaks

    International Nuclear Information System (INIS)

    Lazarus, E.A.

    2013-01-01

    The coupling of the internal kink to an external m/n = 1/1 perturbation is studied for profiles that are known to result in a saturated internal kink in the limit of a cylindrical tokamak. It is found from three-dimensional equilibrium calculations that, for A ≈ 30 circular plasmas and A ≈ 3 elliptical shapes, this coupling of the boundary perturbation to the internal kink is strong; i.e., the amplitude of the m/n = 1/1 structure at q = 1 is large compared with the amplitude applied at the plasma boundary. Evidence suggests that this saturated internal kink, resulting from small field errors, is an explanation for the TEXTOR and JET measurements of q0 remaining well below unity throughout the sawtooth cycle, as well as the distinction between sawtooth effects on the q-profile observed in TEXTOR and DIII-D. It is proposed that this excitation, which could readily be applied with error field correction coils, be explored as a mechanism for controlling sawtooth amplitudes in high-performance tokamak discharges. This result is then combined with other recent tokamak results to propose an L-mode approach to fusion in tokamaks. (paper)

  7. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    International Nuclear Information System (INIS)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N

    2015-01-01

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates 5% and 3% differences in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference...
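
    The correction-factor computation described in the Methods reduces to a per-field ratio; the sketch below uses placeholder numbers, not the measured data.

```python
import numpy as np

# Placeholder output factors (normalized at 5x5 cm^2); not measured data.
det_m1 = np.array([0.95, 0.91, 0.80, 0.62])   # machine 1, micro-chamber
psd_m1 = np.array([0.95, 0.92, 0.85, 0.70])   # machine 1, PSD reference
det_m2 = np.array([0.95, 0.90, 0.78, 0.58])   # machine 2, micro-chamber
psd_m2 = np.array([0.95, 0.92, 0.85, 0.69])   # machine 2, PSD reference

cf1 = det_m1 / psd_m1        # per-field correction factors, machine 1
cf2 = det_m2 / psd_m2        # per-field correction factors, machine 2
print(np.round(100 * (cf2 - cf1) / cf1, 1))   # machine dependence, percent
```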

  8. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates 5% and 3% differences in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class...

  9. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.

  10. Field and laboratory emission cell automation and control system for investigating surface chemistry reactions

    Science.gov (United States)

    Flemmer, Michael M.; Ham, Jason E.; Wells, J. R.

    2007-01-01

    A novel system [field and laboratory emission cell (FLEC) automation and control system] has been developed to deliver ozone to a surface utilizing the FLEC to simulate indoor surface chemistry. Ozone, humidity, and air flow rate to the surface were continuously monitored using an ultraviolet ozone monitor and humidity and flow sensors. Data from these sensors were used as feedback for system control to maintain predetermined experimental parameters. The system was used to investigate the chemistry of ozone with α-terpineol on a vinyl surface over 72 h. Keeping all other experimental parameters the same, volatile organic compound emissions from the vinyl tile with α-terpineol were collected for both zero and 100 ppb (parts per 10^9) ozone exposures. System stability profiles collected from sensor data indicated that experimental parameters were maintained to within a few percent of initial settings. Ozone data from eight experiments at 100 ppb (over 339 h) gave a pooled standard deviation of 1.65 ppb and a 95% tolerance of 3.3 ppb. Humidity data from 17 experiments at 50% relative humidity (over 664 h) gave a pooled standard deviation of 1.38% and a 95% tolerance of 2.77%. Flow-rate data for air flowing through the FLEC from 14 experiments at 300 ml/min (over 548 h) gave a pooled standard deviation of 3.02 ml/min and a 95% tolerance range of 6.03 ml/min. Initial experimental results showed long-term emissions of ozone/α-terpineol reaction products, suggesting that surface chemistry could play an important role in indoor environments.

  11. Influence and Correction from the Human Body on the Measurement of a Power-Frequency Electric Field Sensor

    Directory of Open Access Journals (Sweden)

    Dongping Xiao

    2016-06-01

    Full Text Available According to the operating specifications of existing electric field measuring instruments, measuring technicians must be located far from the instruments to eliminate the influence of human body occupancy on the spatial electric field. Nevertheless, in order to develop a portable safety protection instrument with an effective electric field warning function for working staff in a high-voltage environment, it is necessary to study the influence of an approaching human body on the measurement of an electric field and to correct the measurement results. A single-shaft electric field measuring instrument called the Type LP-2000, which was developed by our research team, is used as the research object in this study. First, we explain the principle of electric field measurement and describe the capacitance effect produced by the human body. Through a theoretical analysis, we show that the measured electric field value decreases as a human body approaches, and that the relationship is linearly proportional. The ratio is then identified as a correction coefficient to correct for the influence of human body proximity. The conclusion drawn from the theoretical analysis is confirmed by simulation, and the correction coefficient kb = 1.8010 is obtained on the basis of a linear fit of the simulated data. Finally, a physical experiment is performed. When no human is present, we compare the results from the Type LP-2000 measured with a Narda EFA-300 and the simulated value to verify the accuracy of the Type LP-2000. For the case of an approaching human body, the correction coefficient kb* = 1.9094 is obtained by comparing the data measured with the Type LP-2000 to the simulated value. The correction coefficient obtained from the experiment (i.e., kb*) is highly consistent with that obtained from the simulation (i.e., kb). Two experimental programs are set; under these programs, the excitation voltages and distance measuring points are regulated to produce different...

  12. Influence and Correction from the Human Body on the Measurement of a Power-Frequency Electric Field Sensor.

    Science.gov (United States)

    Xiao, Dongping; Liu, Huaitong; Zhou, Qiang; Xie, Yutong; Ma, Qichao

    2016-06-10

    According to the operating specifications of existing electric field measuring instruments, measuring technicians must be located far from the instruments to eliminate the influence of the human body occupancy on a spatial electric field. Nevertheless, in order to develop a portable safety protection instrument with an effective electric field warning function for working staff in a high-voltage environment, it is necessary to study the influence of an approaching human body on the measurement of an electric field and to correct the measurement results. A single-shaft electric field measuring instrument called the Type LP-2000, which was developed by our research team, is used as the research object in this study. First, we explain the principle of electric field measurement and describe the capacitance effect produced by the human body. Through a theoretical analysis, we show that the measured electric field value decreases as a human body approaches. Their relationship is linearly proportional. Then, the ratio is identified as a correction coefficient to correct for the influence of human body proximity. The conclusion drawn from the theoretical analysis is proved via simulation. The correction coefficient kb = 1.8010 is obtained on the basis of the linear fitting of simulated data. Finally, a physical experiment is performed. When no human is present, we compare the results from the Type LP-2000 measured with Narda EFA-300 and the simulated value to verify the accuracy of the Type LP-2000. For the case of an approaching human body, the correction coefficient kb* = 1.9094 is obtained by comparing the data measured with the Type LP-2000 to the simulated value. The correction coefficient obtained from the experiment (i.e., kb*) is highly consistent with that obtained from the simulation (i.e., kb). Two experimental programs are set; under these programs, the excitation voltages and distance measuring points are regulated to produce different electric field...

  13. Corrections to classical kinetic and transport theory for a two-temperature, fully ionized plasma in electromagnetic fields

    International Nuclear Information System (INIS)

    Oeien, A.H.

    1977-06-01

    Sets of lower order and higher order kinetic and macroscopic equations are developed for a plasma where collisions are important but electrons and ions are allowed to have different temperatures when transports, due to gradients and fields, set in. Solving the lower order kinetic equations and taking appropriate velocity moments we show that usual classical transports emerge. From the higher order kinetic equations special notice is taken of some new correction terms to the classical transports. These corrections are linear in gradients and fields, some of which are found in a two-temperature state only. (Auth.)

  14. Improved UTE-based attenuation correction for cranial PET-MR using dynamic magnetic field monitoring

    International Nuclear Information System (INIS)

    Aitken, A. P.; Giese, D.; Tsoumpas, C.; Schleyer, P.; Kozerke, S.; Prieto, C.; Schaeffter, T.

    2014-01-01

    Purpose: Ultrashort echo time (UTE) MRI has been proposed as a way to produce segmented attenuation maps for PET, as it provides contrast between bone, air, and soft tissue. However, UTE sequences require samples to be acquired during rapidly changing gradient fields, which makes the resulting images prone to eddy current artifacts. In this work it is demonstrated that this can lead to misclassification of tissues in segmented attenuation maps (AC maps) and that these effects can be corrected for by measuring the true k-space trajectories using a magnetic field camera. Methods: The k-space trajectories during a dual echo UTE sequence were measured using a dynamic magnetic field camera. UTE images were reconstructed using nominal trajectories and again using the measured trajectories. A numerical phantom was used to demonstrate the effect of reconstructing with incorrect trajectories. Images of an ovine leg phantom were reconstructed and segmented and the resulting attenuation maps were compared to a segmented map derived from a CT scan of the same phantom, using the Dice similarity measure. The feasibility of the proposed method was demonstrated with in vivo cranial imaging in five healthy volunteers. Simulated PET data were generated for one volunteer to show the impact of misclassifications on the PET reconstruction. Results: Images of the numerical phantom exhibited blurring and edge artifacts on the bone–tissue and air–tissue interfaces when nominal k-space trajectories were used, leading to misclassification of soft tissue as bone and misclassification of bone as air. Images of the tissue phantom and the in vivo cranial images exhibited the same artifacts. The artifacts were greatly reduced when the measured trajectories were used. For the tissue phantom, the Dice coefficient for bone in MR relative to CT was 0.616 using the nominal trajectories and 0.814 using the measured trajectories. The Dice coefficients for soft tissue were 0.933 and 0.934 for the...

  15. Individual TL detector characteristics in automated processing of personnel dosemeters: correction factors as extension to identity codes of dosemeter cards

    International Nuclear Information System (INIS)

    Toivonen, Matti.

    1979-07-01

    One-, two- and three-component dosemeter cards and their associated processing equipment were developed for personnel monitoring. A novel feature of the TLD system is that the individual sensitivity correction factors of TL detectors for β/γ radiation dosimetry and special timing factors for the readout of neutron detectors are stored on dosemeter cards as an extension of the identity codes. These data are utilized in the automatic TL reading process with the aim of cancelling out the influence of the individual detector characteristics on the measuring results. Stimulation of TL is done with hot nitrogen, without removing the detectors from their cards and without any metal contact. Changes in detector characteristics are thus improbable. The reading process can be adjusted in a variety of ways. For example, each detector in the same card can be processed with optimal heating, and the specific 250 °C glow peak of neutron radiation can be roughly separated from the main LiF glow peaks. (author)

  16. On the transmit field inhomogeneity correction of relaxation‐compensated amide and NOE CEST effects at 7 T

    Science.gov (United States)

    Windschuh, Johannes; Siero, Jeroen C.W.; Zaiss, Moritz; Luijten, Peter R.; Klomp, Dennis W.J.; Hoogduin, Hans

    2017-01-01

    High field MRI is beneficial for chemical exchange saturation transfer (CEST) in terms of high SNR, CNR, and chemical shift dispersion. These advantages may, however, be counterbalanced by the increased transmit field inhomogeneity normally associated with high field MRI. The relatively high sensitivity of the CEST contrast to B1 inhomogeneity necessitates the development of correction methods, which is essential for the clinical translation of CEST. In this work, two B1 correction algorithms for the most studied CEST effects, amide-CEST and nuclear Overhauser enhancement (NOE), were analyzed. Both methods rely on fitting the multi-pool Bloch-McConnell equations to the densely sampled CEST spectra. In the first method, the correction is achieved by using a linear B1 correction of the calculated amide and NOE CEST effects. The second method uses the Bloch-McConnell fit parameters and the desired B1 amplitude to recalculate the CEST spectra, followed by the calculation of B1-corrected amide and NOE CEST effects. Both algorithms were systematically studied in Bloch-McConnell equations and in human data, and compared with the earlier proposed ideal interpolation-based B1 correction method. In the low B1 regime of 0.15-0.50 μT (average power), a simple linear model was sufficient to mitigate B1 inhomogeneity effects on a par with the interpolation B1 correction, as demonstrated by a reduced correlation of the CEST contrast with B1 in both the simulations and the experiments. PMID:28111824
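
    A sketch of the linear B1 correction described above: per voxel, the CEST effect measured at several actual B1 amplitudes (nominal level scaled by the relative B1 map) is fit with a straight line and evaluated at the desired amplitude. Shapes and names are illustrative assumptions.

```python
import numpy as np

def b1_correct(cest, b1_nominal, rel_b1, b1_target):
    """cest: (n_levels, n_vox) CEST effects; b1_nominal: (n_levels,) nominal
    amplitudes; rel_b1: (n_vox,) relative B1 map; returns effects at b1_target."""
    out = np.empty(cest.shape[1])
    for v in range(cest.shape[1]):
        b1_actual = b1_nominal * rel_b1[v]          # actual local amplitudes
        slope, intercept = np.polyfit(b1_actual, cest[:, v], 1)
        out[v] = slope * b1_target + intercept      # evaluate at desired B1
    return out
```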

  17. Corrective measures technology for shallow land burial at arid sites: field studies of biointrusion barriers and erosion control

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Hakonson, T.E.; Lopez, E.A.

    1986-03-01

    The field research program involving corrective measures technologies for arid shallow land burial (SLB) sites is described. Results of field testing of a biointrusion barrier installed at a close-out waste disposal site (Area B) at Los Alamos are presented. Soil erosion and infiltration of water into a simulated trench cap with various surface treatments were measured, and the interaction between erosion control and subsurface water dynamics is discussed relative to waste management

  18. Fully automated laboratory and field-portable goniometer used for performing accurate and precise multiangular reflectance measurements

    Science.gov (United States)

    Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily

    2017-10-01

    Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.

  19. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System.

    Science.gov (United States)

    Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  20. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    Directory of Open Access Journals (Sweden)

    Fiona A. Desland

    2014-01-01

    Full Text Available Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  1. Application of the iterative probe correction technique for a high-order probe in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Pivnenko, Sergey; Breinbjerg, Olav

    2006-01-01

    An iterative probe-correction technique for spherical near-field antenna measurements is examined. This technique has previously been shown to be well-suited for non-ideal first-order probes. In this paper, its performance in the case of a high-order probe (a dual-ridged horn) is examined...

  2. Automated gait analysis in the open-field test for laboratory mice.

    Science.gov (United States)

    Leroy, Toon; Silva, Mitchell; D'Hooge, Rudi; Aerts, Jean-Marie; Berckmans, Daniel

    2009-02-01

    In this article, an automated and accurate mouse observation method, based on a conventional test for motor function evaluation, is outlined. The proposed measurement technique was integrated in a regular open-field test, where the trajectory and locomotion of a free-moving mouse were measured simultaneously. The system setup consisted of a transparent cage and a camera placed below it with its lens pointing upward, allowing for images to be captured from underneath the cage while the mouse was walking on the transparent cage floor. Thus, additional information was obtained about the position of the limbs of the mice for gait reconstruction. In a first step, the camera was calibrated as soon as it was fixed in place. A linear calibration factor, relating distances in image coordinates to real-world dimensions, was determined. In a second step, the mouse was located and its body contour segmented from the image by subtracting a previously taken "background" image of the empty cage from the camera image. In a third step, the movement of the mouse was analyzed and its speed estimated from its location in the past few images. If the speed was above a 1-sec threshold, the mouse was recognized to be running, and the image was further processed for footprint recognition. In a fourth step, color filtering was applied within the recovered mouse region to measure the position of the mouse's paws, which were visible in the image as small pink spots. Paws that were detected at the same location in a number of subsequent images were kept as footprints, that is, paws in contact with the cage floor. The footprints were classified by their position relative to the mouse's outline as corresponding to the front left or right paw or the hind left or right paw. Finally, eight parameters were calculated from the footprint pattern to describe the locomotion of the mouse: right/left overlap, front/hind base, right/left front limb stride, and right/left hind limb stride. As an application...
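
    The segmentation and paw-detection steps can be sketched as follows; the background-subtraction threshold and the pink-color test are illustrative stand-ins for the paper's actual values.

```python
import numpy as np

def mouse_mask(frame, background, thresh=30):
    """frame, background: (H, W, 3) uint8 RGB images from below the cage."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    return diff > thresh                     # boolean mouse silhouette

def paw_pixels(frame, mask):
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    pink = (r > 150) & (r - g > 40) & (r - b > 30)   # crude pink detector
    return pink & mask                       # paw candidates on the mouse
```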

  3. Code it rite the first time : automated invoice processing solution designed to ensure validity to field ticket coding

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, G.

    2010-03-15

    An entrepreneur who ran 55 rigs for a major oilfield operator in Calgary has developed a solution for the oil industry that reduces field ticketing errors from 40 per cent to almost none. The Code-Rite not only simplifies field ticketing but can eliminate weeks of trying to balance authorization for expenditure (AFE) numbers. A service provider who wants a field ticket signed for billing purposes following a service call to a well site receives all pertinent information on a barcode that includes AFE number, location, routing, approval authority and mailing address. Attaching the label to the field ticket provides all the invoicing information needed. This article described the job profile, education and life experiences and opportunities that led the innovator to develop this technology that solves an industry-wide problem. Code-Rite is currently being used by 3 large upstream oil and gas operators and plans are underway to automate the entire invoice processing system. 1 fig.

  4. Coulomb Force Correction to the Decay b→ccs in the Threshold (Particles and Fields)

    OpenAIRE

    Kouhei, HASEGAWA; Department of Physics, University of Alberta

    2007-01-01

    We study the physical origins of the O(α_s) and O(α^2_s) corrections to the c-s current in the decay b→ccs in the threshold region δ=(M_b-2m_c)/2M_b ≪1. We obtain the corrections which are produced by the Coulomb force between the anti-charm and strange quarks. The Coulomb corrections C_Fπ^2 at O(α_s) and -C^2_Fπ^2ln δ at O(α^2_s) account for 300% and 120% of the corresponding terms in the Abelian-type perturbative corrections respectively. The differences between the Coulomb and perturbative...

  5. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large-N_c QCD in light-cone quantization. In a center of mass frame, unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest, unitarity corrections are related to parton saturation effects and involve potential strengths A_μ ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_μ ∝ 1/g are shown to be pure gauge terms, allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those energies where unitarity constraints first become important. (orig.)

  6. Development of automated welding process for field fabrication of thick walled pressure vessels

    International Nuclear Information System (INIS)

    Schneider, U.A.

    Research on automatic welding processes for the fabrication of thick-walled pressure vessels continued. A literature review on the subject was completed. A laboratory study of criteria for judging acceptable root parameters continued. Equipment for a demonstration facility to test the components and processes of the automated welding system has been specified and is being obtained

  7. Development of automated welding process for field fabrication of thick walled pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, U A

    1981-01-01

    Research on automatic welding processes for the fabrication of thick-walled pressure vessels continued. A literature review on the subject was completed. A laboratory study of criteria for judging acceptable root parameters continued. Equipment for a demonstration facility to test the components and processes of the automated welding system has been specified and is being obtained. (LCL)

  8. Findings from Seven Years of Field Performance Data for Automated Demand Response in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Kiliccote, Sila; Piette, Mary Ann; Mathieu, Johanna; Parrish, Kristen

    2010-05-14

    California is a leader in automating demand response (DR) to promote low-cost, consistent, and predictable electric grid management tools. Over 250 commercial and industrial facilities in California participate in fully-automated programs providing over 60 MW of peak DR savings. This paper presents a summary of Open Automated DR (OpenADR) implementation by each of the investor-owned utilities in California. It provides a summary of participation, DR strategies and incentives. Commercial buildings can reduce peak demand from 5 to 15 percent, with an average of 13 percent. Industrial facilities shed much higher loads. For buildings with multi-year savings we evaluate their load variability and shed variability. We provide a summary of control strategies deployed, along with costs to install automation. We report on how the electric DR control strategies perform over many years of events. We benchmark the peak demand of this sample of buildings against their past baselines to understand the differences in building performance over the years. This is done with peak demand intensities and load factors. The paper also describes the importance of these data in helping to understand possible techniques to reach net zero energy using peak day dynamic control capabilities in commercial buildings. We present an example in which the electric load shape changed as a result of a lighting retrofit.

  9. Recent progress in the field of automated welding applied to maintenance activities

    International Nuclear Information System (INIS)

    Cullafroz, M.

    2004-01-01

    Automated and robot welding offers five advantages compared to manual welding: -) under some conditions, automated circular welding does not require requalification testing as manual welding does, -) welding heads in robots are smaller than manual gear, so they can enter and treat complex piping, -) by using an adequate viewing system, the operator can be more than 10 meters away from the welding site, which means that the radiation dose he receives is cut by a factor of 1.5 to 2, -) whatever the configuration, the deposition rate in automated welding stays high, the quality standard is steady, and the risk of repair is low, -) there is a gain in productivity if adequate equipment is used. In general, automated welding requires a TIG welding process and is applied in maintenance activities to: -) the main primary system and other circuits in austenitic stainless steels, -) the main secondary system and other circuits in low-carbon steels, and -) the closure of spent fuel canisters. An application to the repair of BWR pipes is shown. (A.C.)

  10. Models of Automation surprise: results of a field survey in aviation

    NARCIS (Netherlands)

    De Boer, Robert; Dekker, Sidney

    2017-01-01

    Automation surprises in aviation continue to be a significant safety concern and the community’s search for effective strategies to mitigate them are ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration

  11. Study and optimal correction of a systematic skew quadrupole field in the Tevatron

    International Nuclear Information System (INIS)

    Snopok, Pavel; Johnstone, Carol; Berz, Martin; Ovsyannikov, Dmitry A.; Ovsyannikov, Alexander D.

    2006-01-01

    Increasing demands for luminosity in existing and future colliders have made lattice design and error tolerance and correction critical to achieving performance goals. The current state of the Tevatron collider is an example, with a strong skew quadrupole error present in the operational lattice. This work studies the high-order performance of the Tevatron and the strong nonlinear behavior introduced when a significant skew quadrupole error is combined with conventional sextupole correction, a behavior still clearly evident after optimal tuning of available skew quadrupole circuits. An optimization study is performed using different skew quadrupole families, and, importantly, local and global correction of the linear skew terms in maps generated by the code COSY INFINITY [M. Berz, COSY INFINITY version 8.1 user's guide and reference manual, Department of Physics and Astronomy MSUHEP-20704, Michigan State University (2002). URL http://cosy.pa.msu.edu/cosymanu/index.html]. Two correction schemes with one family locally correcting each arc and eight independent correctors in the straight sections for global correction are proposed and shown to dramatically improve linearity and performance of the baseline Tevatron lattice

  12. Physiologic noise regression, motion regression, and TOAST dynamic field correction in complex-valued fMRI time series.

    Science.gov (United States)

    Hahn, Andrew D; Rowe, Daniel B

    2012-02-01

    As more evidence is presented suggesting that the phase, as well as the magnitude, of functional MRI (fMRI) time series may contain important information and that there are theoretical drawbacks to modeling functional response in the magnitude alone, removing noise in the phase is becoming more important. Previous studies have shown that retrospective correction of noise from physiologic sources can remove significant phase variance and that dynamic main magnetic field correction and regression of estimated motion parameters also remove significant phase fluctuations. In this work, we investigate the performance of physiologic noise regression in a framework along with correction for dynamic main field fluctuations and motion regression. Our findings suggest that including physiologic regressors provides some benefit in terms of reduction in phase noise power, but it is small compared to the benefit of dynamic field corrections and use of estimated motion parameters as nuisance regressors. Additionally, we show that the use of all three techniques reduces phase variance substantially, removes undesirable spatial phase correlations and improves detection of the functional response in magnitude and phase. Copyright © 2011 Elsevier Inc. All rights reserved.
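
    The core of the nuisance-regression step described above is ordinary least squares on each voxel's (unwrapped) phase time series. The sketch below is a minimal illustration under our own naming, not the authors' code: it removes an arbitrary set of nuisance regressors (estimated motion parameters, physiologic waveforms) and returns the residual series.

        import numpy as np

        def regress_out(phase_ts, nuisance):
            """Project a voxel phase time series onto [intercept | nuisance]
            and return the residual (cleaned) series."""
            t = len(phase_ts)
            X = np.column_stack([np.ones(t)] + [np.asarray(r, float) for r in nuisance])
            beta, *_ = np.linalg.lstsq(X, phase_ts, rcond=None)
            return phase_ts - X @ beta

        # Usage: unwrap first so the regression acts on smooth values.
        # phase = np.unwrap(np.angle(voxel_complex_ts))
        # clean = regress_out(phase, [mx, my, mz, pitch, roll, yaw, cardiac, resp])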

  13. Mixed model phase evolution for correction of magnetic field inhomogeneity effects in 3D quantitative gradient echo-based MRI

    DEFF Research Database (Denmark)

    Fatnassi, Chemseddine; Boucenna, Rachid; Zaidi, Habib

    2017-01-01

    PURPOSE: In 3D gradient echo magnetic resonance imaging (MRI), strong macroscopic field gradients (B0macro) are visually observed at air/tissue interfaces. At low spatial resolution in particular, the respective field gradients lead to an apparent increase in intravoxel dephasing, and subsequently, to signal loss or inaccurate R2* estimates. If the strong field gradients are measured, their influence can be removed by postprocessing. METHODS: Conventional corrections usually assume a linear phase evolution with time. For high macroscopic gradient inhomogeneities near the edge of the brain...

  14. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  15. The relationship between anatomically correct electric and magnetic field dosimetry and published electric and magnetic field exposure limits.

    Science.gov (United States)

    Kavet, Robert; Dovan, Thanh; Reilly, J Patrick

    2012-12-01

    Electric and magnetic field exposure limits published by the International Commission on Non-Ionizing Radiation Protection and the Institute of Electrical and Electronics Engineers are aimed at protection against adverse electrostimulation, which may occur by direct coupling to excitable tissue and, in the case of electric fields, through indirect means associated with surface charge effects (e.g. hair vibration, skin sensations), spark discharge and contact current. For direct coupling, the basic restriction (BR) specifies the not-to-be-exceeded induced electric field. The key results of anatomically based electric and magnetic field dosimetry studies and the relevant characteristics of excitable tissue were first identified. This permitted us to assess the electric and magnetic field exposure levels that induce dose in tissue equal to the basic restrictions, and the relationships of those exposure levels to the limits now in effect. We identify scenarios in which direct coupling of electric fields to peripheral nerve could be a determining factor for electric field limits.

  16. Automated Identification of Initial Storm Electrification and End-of-Storm Electrification Using Electric Field Mill Sensors

    Science.gov (United States)

    Maier, Launa M.; Huddleston, Lisa L.

    2017-01-01

    Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities across the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will be warning on a grid to minimize under-warnings associated with not being located in the center of the warning area and over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper will demonstrate improved performance of an enveloping algorithm of the electric field mill data as compared with the electric field zero crossing to identify initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms will also be analyzed to identify performance improvement, if any, when compared with 30 minutes after the last lightning flash.
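
    A minimal form of the enveloping idea described above can be written directly: track the rolling max-min spread of the field mill signal and alert when it exceeds a threshold, rather than waiting for the mean field to cross zero. The window length and threshold below are placeholders, not values from the study.

        import numpy as np

        def envelope_alerts(efield_kv_m, fs_hz, win_s=60.0, spread_kv_m=1.5):
            """True wherever the rolling (max - min) envelope width of the
            electric field trace exceeds the electrification threshold."""
            n = max(1, int(win_s * fs_hz))
            out = np.zeros(len(efield_kv_m), dtype=bool)
            for i in range(n, len(efield_kv_m)):
                w = efield_kv_m[i - n:i]
                out[i] = (w.max() - w.min()) > spread_kv_m
            return out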

  17. Two-Loop Self-Energy Correction in a Strong Coulomb Nuclear Field

    International Nuclear Information System (INIS)

    Yerokhin, V.A.; Indelicato, P.; Shabaev, V.M.

    2005-01-01

    The two-loop self-energy correction to the ground-state energy levels of hydrogen-like ions with nuclear charges Z ≥ 10 is calculated without the Zα expansion, where α is the fine-structure constant. The data obtained are compared with the results of analytical calculations within the Zα expansion; significant disagreement with the analytical results of order α²(Zα)⁶ has been found. Extrapolation is used to obtain the most accurate value for the two-loop self-energy correction for the 1s state in hydrogen.

  18. System for automated determination of charged particle trajectories in extended magnetic fields

    International Nuclear Information System (INIS)

    Toumanian, A.R.

    1981-01-01

    An automated system for the determination of particle trajectories by the floating current-carrying wire method is described. The system is able to determine the trajectories of charged particles with energies above 100 MeV in magnetic systems of any configuration, with track lengths up to several tens of metres and a momentum resolution up to 3×10⁻⁴. The system throughput averages 1500 tracks per hour [ru]

  19. Analysis and correction of intrinsic non-axisymmetric magnetic fields in high-β DIII-D plasmas

    International Nuclear Information System (INIS)

    Garofalo, A.M.; La Haye, R.J.; Scoville, J.T.

    2002-01-01

    Rapid plasma toroidal rotation, sufficient for stabilization of the n=1 resistive wall mode, can be sustained by improving the axisymmetry of the toroidal magnetic field geometry of DIII-D. The required symmetrization is determined experimentally both by optimizing currents in external n=1 correction coils with respect to the plasma rotation, and by use of the n=1 magnetic feedback to detect and minimize the plasma response to non-axisymmetric fields as β increases. Both methods point to an intrinsic ∼7 G (0.03% of the toroidal field), m/n=2/1 resonant helical field at the q=2 surface as the cause of the plasma rotation slowdown above the no-wall β limit. The drag exerted by this field on the plasma rotation is consistent with the behaviour of 'slipping' in a simple induction motor model. (author)

  20. Correction factor for individuals with accommodative capacity based on automated refractor

    Directory of Open Access Journals (Sweden)

    Rodrigo Ueno Takahagi

    2009-12-01

    Full Text Available PURPOSE: To determine a correction factor for the evaluation of refractive errors without the use of cycloplegia. METHODS: A study was made of 623 patients (1,246 eyes) of both sexes, aged between 3 and 40 years. Static and dynamic refractometry were obtained using the automated refractor Shin-Nippon Accuref-K 9001. Cycloplegia was induced by instilling one drop of 1% cyclopentolate eye drops, with static refractometry performed 30 minutes later. Data were analyzed statistically using linear regression and multiple regression models of the dioptric value with and without cycloplegia as a function of age. RESULTS: The correlation between dioptric values without and with cycloplegia for the astigmatic error ranged from 81.52% to 92.27%. For the spherical dioptric value the correlation was lower (53.57% to 87.78%), as it was for the axis of astigmatism (28.86% to 58.80%). The multiple regression model as a function of age showed a higher multiple coefficient of determination for myopia (86.38%) and astigmatism (79.79%); the lowest coefficient was observed for the axis of astigmatism (17.70%). CONCLUSION: Evaluating refractive errors with and without cycloplegia, a high correlation was observed for cylindrical ametropias. Mathematical equations were developed as correction factors for refractometry in patients without cycloplegia who have cylindrical and spherical ametropias.
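
    The correction factor the authors describe is, in effect, a fitted regression equation. A minimal sketch of that idea (our own variable names; the published coefficients are not reproduced here) fits the cycloplegic refraction from the non-cycloplegic reading and age:

        import numpy as np

        def fit_correction(noncyclo_d, age_yr, cyclo_d):
            """Least-squares fit of cyclo ~ b0 + b1*noncyclo + b2*age.
            Inputs are 1-D arrays in diopters and years."""
            X = np.column_stack([np.ones(len(noncyclo_d)), noncyclo_d, age_yr])
            beta, *_ = np.linalg.lstsq(X, cyclo_d, rcond=None)
            return beta  # b0, b1, b2

        def apply_correction(beta, noncyclo_d, age_yr):
            """Predict the cycloplegic-equivalent refraction."""
            b0, b1, b2 = beta
            return b0 + b1 * noncyclo_d + b2 * age_yr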

  1. Vertex corrections to the mean-field electrical conductivity in disordered electron systems

    Czech Academy of Sciences Publication Activity Database

    Pokorný, Vladislav; Janiš, Václav

    2013-01-01

    Vol. 25, No. 17 (2013), 175502-1 to 175502-10. ISSN 0953-8984. Institutional support: RVO:68378271. Keywords: disordered electron systems * electrical conductivity * vertex corrections. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 2.223, year: 2013

  2. Can Lucifer Yellow Indicate Correct Permeability of Biological Cell Membrane under an Electric and Magnetic Field?

    OpenAIRE

    Tahereh Pourmirjafari Firoozabadi; Zeinab Shankayi; Azam Izadi; Seyed Mohammad Pourmirjafari Firoozabadi

    2015-01-01

    The effect of external magnetic and electric fields, in the range of electroporation and magnetoporation, on Lucifer Yellow (LY) fluorescence in the absence of cells is studied. Electric-field-induced quenching and a magnetic-field-induced increase are observed in the fluorescence intensity of LY. Given that this field-induced variation of fluorescence can be observed even in the absence of cells, the application of LY as a marker in electroporation and magnetoporation is debatable...

  3. Analysis of the Failures and Corrective Actions for the LHC Cryogenics Radiation Tolerant Electronics and its Field Instruments

    CERN Document Server

    Balle, Ch; Vauthier, N

    2014-01-01

    The LHC cryogenic system radiation tolerant electronics and their associated field instruments have been in nominal conditions since before the commissioning of the first LHC beams in September 2008. This system is made of about 15’000 field instruments (thermometers, pressure sensors, liquid helium level gauges, electrical heaters and position switches), 7’500 electronic cards and 853 electronic crates. Since mid-2008 a software tool has been deployed that allows an operator to report a problem and then lists the corrective actions. The tool is a great help in detecting recurrent problems that may be tackled by a hardware or software consolidation. The corrective actions range from simple resets, exchange of defective equipment, repair of electrical connectors, etc. However, a recurrent problem that heals by itself is present on some channels. This type of fault is extremely difficult to diagnose and it appears as a temporary opening of an electrical circuit; its duration can range from a few minutes to ...

  4. Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc

    International Nuclear Information System (INIS)

    Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.

    1983-11-01

    Given the intensity of linac bunches and their repetition rate, the desired SLC luminosity of 1.0 × 10³⁰ cm⁻² sec⁻¹ requires focusing the interacting bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of the beam spot size, both the second order geometric and chromatic aberrations may be completely destructive. The concept of the second order achromat proved to be extremely important in this respect, and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: a matching section (MS) designed for matching the phase space from the linac to the arcs, a reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc, and the final focus system (FFS). The second order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming an identical histogram for the south arc, the corresponding 'luminosity' L is found. The simulation of the misalignment and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, focusing the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly the transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables

  5. The relationship between anatomically correct electric and magnetic field dosimetry and published electric and magnetic field exposure limits

    International Nuclear Information System (INIS)

    Kavet, R.; Dovan, T.; Patrick Reilly, J.

    2012-01-01

    Electric and magnetic field exposure limits published by the International Commission on Non-Ionizing Radiation Protection and the Institute of Electrical and Electronics Engineers are aimed at protection against adverse electrostimulation, which may occur by direct coupling to excitable tissue and, in the case of electric fields, through indirect means associated with surface charge effects (e.g. hair vibration, skin sensations), spark discharge and contact current. For direct coupling, the basic restriction (BR) specifies the not-to-be-exceeded induced electric field. The key results of anatomically based electric and magnetic field dosimetry studies and the relevant characteristics of excitable tissue were first identified. This permitted us to assess the electric and magnetic field exposure levels that induce dose in tissue equal to the basic restrictions, and the relationships of those exposure levels to the limits now in effect. We identify scenarios in which direct coupling of electric fields to peripheral nerve could be a determining factor for electric field limits. (authors)

  6. Dose corrections for field obliquity for 45-MV x-ray therapy

    International Nuclear Information System (INIS)

    McGinley, P.H.; Clanton, A.; Downes, B.; Nuskind, J.

    1983-01-01

    The degree of dose perturbation produced by a 25.7-cm-diam circular water phantom was determined for a 45-MV x-ray beam by direct measurement. Data obtained in a circular and a cubical water phantom were utilized to test three accepted techniques (isodose shift, TAR method, and effective SSD method) for the correction of isodose levels to account for patient curvature. In general, the effective SSD method yielded the most accurate results for all depths, including the buildup region. An isodose shift factor of 0.8 was found for the 45-MV x-ray beam. Key words: curvature corrections, 45-MV x ray, isodose shift, TAR, effective SSD method

  7. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey level distribution of the local image. The means of this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
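
    The local-entropy weight named above can be computed with a sliding-window histogram. The sketch below is a generic illustration (the window size and bin count are arbitrary choices of ours, not the paper's):

        import numpy as np
        from scipy import ndimage

        def local_entropy(img, size=9, bins=16):
            """Shannon entropy of the grey-level histogram inside a sliding
            window, evaluated at every pixel."""
            lo, hi = float(img.min()), float(img.max())

            def window_entropy(values):
                hist, _ = np.histogram(values, bins=bins, range=(lo, hi))
                p = hist / hist.sum()
                p = p[p > 0]
                return -np.sum(p * np.log(p))

            return ndimage.generic_filter(img.astype(float), window_entropy, size=size)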

  8. Effect of Retinal Nerve Fibre Layer Injury on Visual Field After LASIK for Correction of Myopia

    International Nuclear Information System (INIS)

    Saif, S.E.H.; Bahgat, M.; El'emary, A.T.; Naguib, N.I.; Lotfy, A.A.

    2006-01-01

    This work aimed at the clinical assessment of damage to the retinal nerve fibre layer (RNFL) due to the suction time during LASIK, as reflected in the visual field of the patients. Forty-five patients were subjected to LASIK followed by optical coherence tomography (OCT) and visual field (VF) examination in this study at the Research Institute of Ophthalmology. Clinical assessment was achieved by using visual perimetry. We concluded that LASIK did not cause visual field defects in this study. We were actually expecting a field defect in the upper and, to a lesser extent, the lower quadrant, but this could only be detected by more sophisticated technology

  9. Effect of Retinal Nerve Fibre Layer Injury on Visual Field After LASIK for Correction of Myopia

    Energy Technology Data Exchange (ETDEWEB)

    Saif, S.E.H.; Bahgat, M. [Ophthalmology Dept., Cairo University, Cairo (Egypt)]; El'emary, A.T. [Research Institute of Ophthalmology (Egypt)]; Naguib, N.I.; Lotfy, A.A. [National Centre for Radiation Research and Technology (NCRRT), Atomic Energy Authority (AEA), Cairo (Egypt)]

    2006-05-15

    This work aimed at the clinical assessment of damage to the retinal nerve fibre layer (RNFL) due to the suction time during LASIK, as reflected in the visual field of the patients. Forty-five patients were subjected to LASIK followed by optical coherence tomography (OCT) and visual field (VF) examination in this study at the Research Institute of Ophthalmology. Clinical assessment was achieved by using visual perimetry. We concluded that LASIK did not cause visual field defects in this study. We were actually expecting a field defect in the upper and, to a lesser extent, the lower quadrant, but this could only be detected by more sophisticated technology.

  10. Automated spectro-goniometer: A spherical robot for the field measurement of the directional reflectance of snow

    International Nuclear Information System (INIS)

    Painter, Thomas H.; Paden, Brad; Dozier, Jeff

    2003-01-01

    We describe an automated spectro-goniometer (ASG) that rapidly measures the spectral hemispherical-directional reflectance factor (HDRF) of snow in the field across the wavelength range 0.4≤λ≤2.5 μm. Few measurements of snow's HDRF exist in the literature, in part because of the lack of a portable instrument capable of rapid, repeatable sampling. The ASG is a two-link spherical robot coupled to a field spectroradiometer. The ASG is the first revolute-joint and first automated field goniometer for use over snow and other smooth surfaces. It is light enough (∼50 kg) to be portable in a sled by an individual. The ASG samples the HDRF at arbitrary angular resolution and a 0.5 Hz sampling rate. The arm attaches to the fixed-point frame 0.65 m above the surface. With vertical and oblique axes, the ASG places the sensor of the field spectroradiometer at any point on the hemisphere above a snow target. In standard usage, the ASG has the sun as the illumination source to facilitate in situ measurements over fragile surfaces not easily transported to the laboratory and to facilitate simultaneous illumination conditions for validation and calibration of satellite retrievals. The kinematics of the ASG is derived using Rodrigues' formula applied to the 2 degree-of-freedom arm. We describe the inverse kinematics for the ASG and solve the inverse problem from a given view angle to the necessary rotation about each axis. Its two-dimensional hemispheric sampling space facilitates the measurement of spectral reflectance from snow and other relatively smooth surfaces into any direction. The measurements will be used to validate radiative transfer model results of directional reflectance and to validate/calibrate directional satellite measurements of reflectance from these smooth surfaces
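
    Rodrigues' formula, which the record says is used to derive the ASG kinematics, maps an axis and an angle to a rotation matrix. A compact, self-contained version (our own code, shown only to make the derivation concrete):

        import numpy as np

        def rodrigues(axis, theta):
            """Rotation matrix for theta radians about a unit axis:
            R = I + sin(theta) K + (1 - cos(theta)) K^2, K the cross-product matrix."""
            k = np.asarray(axis, float)
            k = k / np.linalg.norm(k)
            K = np.array([[0.0, -k[2], k[1]],
                          [k[2], 0.0, -k[0]],
                          [-k[1], k[0], 0.0]])
            return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

        # Composing the two ASG joints (vertical and oblique axes) then reads
        #     p = rodrigues(axis2, q2) @ rodrigues(axis1, q1) @ p0
        # and the inverse kinematics solves for (q1, q2) given a desired view direction.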

  11. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Robert W. [Department of Biomedical Informatics, Uniformed Services University, 4301 Jones Bridge Road, Bethesda, MD 20815 (United States)], E-mail: bob@bob.usuhs.mil; Schluecker, Sebastian [Institute of Physical Chemistry, University of Wuerzburg, Wuerzburg (Germany); Hudson, Bruce S. [Department of Chemistry, Syracuse University, Syracuse, NY (United States)

    2008-01-22

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.

  12. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    International Nuclear Information System (INIS)

    Williams, Robert W.; Schluecker, Sebastian; Hudson, Bruce S.

    2008-01-01

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes

  13. Self-propelled in-tube shuttle and control system for automated measurements of magnetic field alignment

    International Nuclear Information System (INIS)

    Boroski, W.N.; Nicol, T.H.; Pidcoe, S.V.

    1990-03-01

    A magnetic field alignment gauge is used to measure the field angle as a function of axial position in each of the magnets for the Superconducting Super Collider (SSC). Present measurements are made by manually pushing the gauge through the magnet bore tube and stopping at intervals to record field measurements. Gauge location is controlled through graduation marks and alignment pins on the push rods. Field measurements are recorded on a logging multimeter with tape output. Described is a computerized control system being developed to replace the manual procedure for field alignment measurements. The automated system employs a pneumatic walking device to move the measurement gauge through the bore tube. Movement of the device, called the Self-Propelled In-Tube Shuttle (SPITS), is accomplished through an integral, gas driven, double-acting cylinder. The motion of the SPITS is transferred to the bore tube by means of a pair of controlled, retractable support feet. Control of the SPITS is accomplished through an RS-422 interface from an IBM-compatible computer to a series of solenoid-actuated air valves. Direction of SPITS travel is determined by the air-valve sequence, and is managed through the control software. Precise axial position of the gauge within the magnet is returned to the control system through an optically-encoded digital position transducer attached to the shuttle. Discussed is the performance of the transport device and control system during preliminary testing of the first prototype shuttle. 1 ref., 7 figs

  14. Two Ramond-Ramond corrections to type II supergravity via field-theory amplitude

    Energy Technology Data Exchange (ETDEWEB)

    Bakhtiarizadeh, Hamid R. [Sirjan University of Technology, Department of Physics, Sirjan (Iran, Islamic Republic of)]

    2017-12-15

    Motivated by the standard form of the string-theory amplitude, we calculate the field-theory amplitude to complete the higher-derivative terms in type II supergravity theories in their conventional form. We derive explicitly the O(α′³) interactions for the RR (Ramond-Ramond) fields with the graviton, B-field and dilaton in the low-energy effective action of type II superstrings. We check our results by comparison with previous work that has been done by other methods, and we find exact agreement. (orig.)

  15. Automated bone segmentation from large field of view 3D MR images of the hip joint

    International Nuclear Information System (INIS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-01-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head–neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone–cartilage interfaces for potential cartilage segmentation. (paper)
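
    The Dice similarity coefficient (DSC) quoted throughout this record is a straightforward overlap measure between two binary masks. A minimal implementation for reference:

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|)."""
            a = np.asarray(a, dtype=bool)
            b = np.asarray(b, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0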

  16. Automated bone segmentation from large field of view 3D MR images of the hip joint

    Science.gov (United States)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  17. Particle crossing versus field crossing; a corrective response to Duff's recent account of string theory

    International Nuclear Information System (INIS)

    Schroer, Bert; FU-Berlin

    2012-02-01

    Using recent results of advanced quantum field theory, we confute some of M. Duff's claims about string theory which he wrote as an invited paper to the project 'Forty Years Of String Theory: Reflecting on the Foundations' (author)

  18. 77 FR 39508 - Notice of Inventory Completion: Field Museum of Natural History, Chicago, IL; Correction

    Science.gov (United States)

    2012-07-03

    ... the minimum number of individuals removed from Fresno County from six to four. Field Museum staff also...) comprised of one larger piece of shell and its fragments. In the Federal Register (76 FR 80393-80394...

  19. Loop corrections and other many-body effects in relativistic field theories

    International Nuclear Information System (INIS)

    Ainsworth, T.L.; Brown, G.E.; Prakash, M.; Weise, W.

    1988-01-01

    Incorporation of effective masses into negative energy states (nucleon loop corrections) gives rise to repulsive many-body forces, as has been known for some time. Rather than renormalizing away the three- and four-body terms, we introduce medium corrections into the effective σ-exchange, which roughly cancel the nucleon loop terms for densities ρ ≅ ρ_nm, where ρ_nm is nuclear matter density. Going to higher densities, the repulsive contributions tend to saturate whereas the attractive ones keep on growing in magnitude. The latter is achieved through use of a density-dependent effective mass for the σ-particle, m_σ = m_σ(ρ), such that m_σ(ρ) decreases with increasing density. Such a behavior is seen e.g. in the Nambu-Jona-Lasinio model. It is argued that a smooth transition to chiral restoration implies a similar behavior. The resulting nuclear equation of state is, because of the self-consistency in the problem, immensely insensitive to changes in the mass or coupling constant of the σ-particle. (orig.)

  20. Instrumentation, Field Network And Process Automation for the LHC Cryogenic Line Tests

    CERN Document Server

    Bager, T; Bertrand, G; Casas-Cubillos, J; Gomes, P; Parente, C; Riddone, G; Suraci, A

    2000-01-01

    This paper describes the cryogenic control system and associated instrumentation of the test facility for 3 pre-series units of the LHC Cryogenic Distribution Line. For each unit, the process automation is based on a Programmable Logic Controller implementing more than 30 closed control loops and handling alarms, interlocks and overall process management. More than 160 sensors and actuators are distributed over 150 m on a Profibus DP/PA network. Parameterization, calibration and diagnosis are remotely available through the bus. Considering the diversity, amount and geographical distribution of the instrumentation involved, this is a representative approach to the cryogenic control system for CERN's next accelerator.

  1. Diffraction corrections for second harmonic beam fields and effects on the nonlinearity parameter evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)]

    2016-04-15

    The nonlinearity parameter is frequently measured as a sensitive indicator in damaged material characterization or tissue harmonic imaging. Several previous studies have employed the plane wave solution and ignored the effects of beam diffraction when measuring the nonlinearity parameter β. This paper presents a multi-Gaussian beam approach to explicitly derive diffraction corrections for the fundamental and second harmonics under the quasilinear and paraxial approximation. Their effects on the nonlinearity parameter estimation demonstrate a complicated dependence of β on the transmitter-receiver geometry, frequency, and propagation distance. The diffraction effects on the nonlinearity parameter estimation are important even in the near-field region. Experiments are performed to show that improved β values can be obtained by considering the diffraction effects.
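
    For context, the plane-wave relation that the diffraction corrections refine estimates β from the fundamental and second-harmonic displacement amplitudes A1 and A2. A sketch under that textbook relation (our own function; diffraction corrections are assumed to have been applied to A1 and A2 beforehand):

        import numpy as np

        def beta_plane_wave(a1_m, a2_m, freq_hz, c_m_s, x_m):
            """Plane-wave nonlinearity parameter from displacement amplitudes
            A1, A2 (m) at propagation distance x (m): beta = 8*A2 / (k^2 * x * A1^2)."""
            k = 2.0 * np.pi * freq_hz / c_m_s  # wavenumber of the fundamental
            return 8.0 * a2_m / (k**2 * x_m * a1_m**2)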

  2. POGO satellite orbit corrections: an opportunity to improve the quality of the geomagnetic field measurements?

    DEFF Research Database (Denmark)

    Stockmann, Reto; Christiansen, Freddy; Olsen, Nils

    2015-01-01

    We present an attempt to improve the quality of the geomagnetic field measurements from the Polar Orbiting Geophysical Observatory (POGO) satellite missions in the late 1960s. Inaccurate satellite positions are believed to be a major source of error when using the magnetic observations for field modelling. To improve the data, we use an iterative approach consisting of two main parts: one is a main field modelling process to obtain the radial field gradient to perturb the orbits, and the other is the state-of-the-art GPS orbit modelling software BERNESE to calculate new physical orbits. With this approach, we eliminate the orbit discontinuities at midnight, but only tiny quality improvements could be achieved for geomagnetically quiet data. We believe that improvements to the data are probably still possible, but it would require the original tracking observations to be found.

  3. Evaluating the potential of automated telephony systems in rural communities: Field assessment for project Lwazi of HLT Meraka

    CSIR Research Space (South Africa)

    Gumede, T

    2008-11-01

    Full Text Available This work evaluated the potential role of automated telephony services in improving access to important government information and services. Our interviews, focus groups and surveys revealed that an automated telephony service could greatly support current government efforts...

  4. Online corrections - Evidence based practice utilizing electronic portal imaging to improve the accuracy of field placement for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Middleton, M.; Medwell, S.; Rolfo, A.; Joon, M.L.

    2003-01-01

    The requirement of accurate field placement in the treatment of locally advanced prostate cancer is of great significance given the onset of dose escalation and increased Planning Target Volume (PTV) conformity. With these factors in mind, it becomes essential to ensure accurate field placement for the duration of a course of radiotherapy. This study examines the role of Online Corrections in increasing the accuracy of field placement, utilizing Varian Vision EPI equipment. The study also examines the hypothetical scenario of the effect on three-dimensional computer dosimetry if Online Corrections were not performed, incorporating TCP and NTCP data. Field placement data were collected on patients receiving radical radiotherapy to the prostate utilizing the Varian Vision (TM) EPI software. Both intra- and inter-field data were collected, with Online Corrections being carried out within the confines of the BAROC PROSTATE EPI POLICY. Analysis was performed on the data to illustrate the value of Online Corrections in the pursuit of accurate field placement. This evidence was further supported by computer dosimetry presenting the worst possible impact upon a patient's total course of treatment if Online Corrections were not performed. The use of Online Corrections can prove to be of enormous benefit to both patient and practitioner. For centres with the available technology, it places the responsibility for field placement upon the Radiation Therapist. This responsibility in turn impacts on the education, training and empowerment of the Radiation Therapy group. These are issues of the utmost importance to centres considering the use of Online Corrections

  5. Arbitrary magnetic field gradient waveform correction using an impulse response based pre-equalization technique.

    Science.gov (United States)

    Goora, Frédéric G; Colpitts, Bruce G; Balcom, Bruce J

    2014-01-01

    The time-varying magnetic fields used in magnetic resonance applications result in the induction of eddy currents on conductive structures in the vicinity of both the sample under investigation and the gradient coils. These eddy currents typically result in undesired degradations of image quality for MRI applications. Their ubiquitous nature has resulted in the development of various approaches to characterize and minimize their impact on image quality. This paper outlines a method that utilizes the magnetic field gradient waveform monitor method to directly measure the temporal evolution of the magnetic field gradient from a step-like input function and extracts the system impulse response. With the basic assumption that the gradient system is sufficiently linear and time invariant to permit system theory analysis, the impulse response is used to determine a pre-equalized (optimized) input waveform that provides a desired gradient response at the output of the system. An algorithm has been developed that calculates a pre-equalized waveform that may be accurately reproduced by the amplifier (is physically realizable) and accounts for system limitations including system bandwidth, amplifier slew rate capabilities, and noise inherent in the initial measurement. Significant improvements in magnetic field gradient waveform fidelity after pre-equalization have been realized and are summarized. Copyright © 2013 Elsevier Inc. All rights reserved.
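
    The pre-equalization step described above is, at its core, a regularized deconvolution of the desired gradient waveform by the measured impulse response. A minimal frequency-domain sketch (our own simplification; the paper's algorithm additionally enforces amplifier slew-rate and bandwidth limits, which are omitted here):

        import numpy as np

        def pre_equalize(desired, h, lam=1e-3):
            """Damped inverse filter: find an input x such that (x * h) ~ desired,
            where h is the sampled system impulse response."""
            n = len(desired) + len(h) - 1
            D = np.fft.rfft(desired, n)
            H = np.fft.rfft(h, n)
            X = D * np.conj(H) / (np.abs(H)**2 + lam)  # Tikhonov-style damping
            return np.fft.irfft(X, n)[:len(desired)]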

  6. First-order correction terms in the weak-field asymptotic theory of tunneling ionization

    DEFF Research Database (Denmark)

    Trinh, Vinh H.; Tolstikhin, Oleg I.; Madsen, Lars Bojer

    2013-01-01

    of the WFAT at the quantitative level toward stronger fields, practically up to the boundary between tunneling and over-the-barrier regimes of ionization. The results apply to any atom or molecule treated in the single-active-electron and frozen-nuclei approximations. The theory is illustrated by calculations...

  7. Correction factors for clinical dosemeters used in dosimetry of large fields

    International Nuclear Information System (INIS)

    Campos, L.L.; Caldas, L.V.E.

    1989-01-01

    A method is studied for use by physicists in evaluating the effect of irradiation on the cables and connectors of ionization chambers, and in determining the adequate correction factors for each measurement geometry during the performance of large-field radiation dosimetry. (C.G.C.) [pt

  8. Automated high resolution full-field spatial coherence tomography for quantitative phase imaging of human red blood cells

    Science.gov (United States)

    Singla, Neeru; Dubey, Kavita; Srivastava, Vishal; Ahmad, Azeem; Mehta, D. S.

    2018-02-01

    We developed an automated high-resolution full-field spatial coherence tomography (FF-SCT) microscope for quantitative phase imaging that is based on spatial, rather than temporal, coherence gating. Red and green laser light was used to obtain the quantitative phase images of unstained human red blood cells (RBCs). This study uses morphological parameters of unstained RBC phase images to distinguish between normal and infected cells. We recorded a single interferogram with the FF-SCT microscope at each of the red and green wavelengths and averaged the two phase images to further reduce noise artifacts. In order to distinguish anemia-infected from normal cells, different morphological features were extracted, and these features were used to train a machine learning ensemble model to classify RBCs with high accuracy.

  9. Resting State fMRI in the moving fetus: a robust framework for motion, bias field and spin history correction.

    Science.gov (United States)

    Ferrazzi, Giulio; Kuklisova Murgasova, Maria; Arichi, Tomoki; Malamateniou, Christina; Fox, Matthew J; Makropoulos, Antonios; Allsop, Joanna; Rutherford, Mary; Malik, Shaihan; Aljabar, Paul; Hajnal, Joseph V

    2014-11-01

    There is growing interest in exploring fetal functional brain development, particularly with Resting State fMRI. However, during a typical fMRI acquisition, the womb moves due to maternal respiration and the fetus may perform large-scale and unpredictable movements. Conventional fMRI processing pipelines, which assume that brain movements are infrequent or at least small, are not suitable. Previous published studies have tackled this problem by adopting conventional methods and discarding as much as 40% or more of the acquired data. In this work, we developed and tested a processing framework for fetal Resting State fMRI, capable of correcting gross motion. The method comprises bias field and spin history corrections in the scanner frame of reference, combined with slice to volume registration and scattered data interpolation to place all data into a consistent anatomical space. The aim is to recover an ordered set of samples suitable for further analysis using standard tools such as Group Independent Component Analysis (Group ICA). We have tested the approach using simulations and in vivo data acquired at 1.5 T. After full motion correction, Group ICA performed on a population of 8 fetuses extracted 20 networks, 6 of which were identified as matching those previously observed in preterm babies. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Automated system for complex investigation of radiative and hydrophysical fields of the ocean

    International Nuclear Information System (INIS)

    Sojfer, V.N.; Kobylyanskij, V.V.; Myagkikh, A.I.; Mojsechenko, V.V.; Rozhkova, O.B.

    1980-01-01

    An automated system for synchronous measurements of the differential gamma spectra of sea water and its hydrologic parameters (temperature, electric conductivity and pressure) is described. The problems of increasing the capacity of data transfer, unifying the various types of data transmission systems and improving reliability are solved by pulse modulation of signals through binary codes. The system consists of a submerged unit, an onboard unit for data reception and decoding, the NTA-1024 pulse analyzer, the "Spark-125" computer and a set of peripheral units for data storage and representation. The flowsheet of the submerged probe-gamma spectrometer is presented. An ST-4-16 thermistor and a NaI(Tl) crystal serve as the temperature and γ-radiation detectors, respectively. The whole electronic logic unit is made of contemporary integrated microcircuits. A number of original engineering solutions have been used in designing the probe, which increased the data transmission rate via cable by 3 orders of magnitude (up to 30 kbit/s) in comparison with modern hydrophysical probes, without any deterioration of its accuracy or temperature measurement range

  11. Scott Correction for Large Atoms and Molecules in a Self-Generated Magnetic Field

    DEFF Research Database (Denmark)

    Erdös, Laszlo; Fournais, Søren; Solovej, Jan Philip

    2012-01-01

    We consider a large neutral molecule with total nuclear charge Z in non-relativistic quantum mechanics with a self-generated classical electromagnetic field. To ensure stability, we assume that Zα² ≤ κ₀ for a sufficiently small κ₀, where α denotes the fine structure constant. We show that, in the simultaneous limit Z → ∞, α → 0 such that κ = Zα² is fixed, the ground state energy of the system is given by a two-term expansion c₁Z^(7/3) + c₂(κ)Z² + o(Z²). The leading term is given by the non-magnetic Thomas-Fermi theory. Our result shows that the magnetic field affects only the second (so-called Scott) term in the expansion.

  12. Development of a new error field correction coil (C-coil) for DIII-D

    International Nuclear Information System (INIS)

    Robinson, J.I.; Scoville, J.T.

    1995-12-01

    The C-coil recently installed on the DIII-D tokamak was developed to reduce the error fields created by imperfections in the location and geometry of the existing coils used to confine, heat, and shape the plasma. First results from C-coil experiments include stable operation in a 1.6 MA plasma with a density less than 1.0 × 10¹³ cm⁻³, nearly a factor of three lower density than that achievable without the C-coil. The C-coil has also been used in magnetic braking of the plasma rotation and in high energy particle confinement experiments. The C-coil system consists of six individual saddle coils, each 60 degrees wide toroidally, spanning the midplane of the vessel with a vertical height of 1.6 m. The coils are located at a major radius of 3.2 m, just outside of the toroidal field coils. The actual shape and geometry of each coil section varied somewhat from the nominal dimensions due to the large number of obstructions to the desired coil path around the already crowded tokamak. Each coil section consists of four turns of 750 MCM insulated copper cable banded with stainless steel straps within the web of a 3 in. x 3 in. stainless steel angle frame. The C-coil structure was designed to resist the peak transient radial forces (up to 1,800 Nm) exerted on the coil by the toroidal and poloidal fields. The coil frames were supported from existing poloidal field coil case brackets, coil studs, and various other structures on the tokamak

  13. Geometric corrections due to inhomogeneous field in the magnetospheric double current layer

    International Nuclear Information System (INIS)

    Callebaut, D.K.; Van den Buys, A.M.

    1985-01-01

    The case of oblique incidence and of a slope in the magnetic field is considered for plane parallel models of the magnetospheric double layer. The two models are the Magnetospheric Double Layer (MDL) and the Magnetospheric Double Current Layer (MDCL). The latter is more appropriate, but due to some approximations it sometimes gives incorrect results. An improved model uses a triple current layer. (R.P.)

  14. Automated extraction of faults and porous reservoir bodies. Examples from the Valhall Field

    Energy Technology Data Exchange (ETDEWEB)

    Barkved, Olav Inge; Whitman, Doug; Kunz, Tim

    1998-12-31

    The Norwegian Valhall field is located 250 km south-west of Stavanger. Production is primarily from the highly porous and fractured chalk of the Tor formation. Fractures evidently play a significant role in enhancing flow properties, as production rates are significantly higher than expected from matrix permeability alone. The fractures are primarily tectonically induced and related to faulting. Syn-depositional faulting is believed to be a controlling factor in the reservoir thickness variations observed across the field. Due to the low acoustic contrast and weak appearance of the highly porous chalk, direct evidence of faulting in well bore logs is limited. The seismic data quality in the most central area of the field is very poor due to tertiary gas charging, but in the flank areas of the field the quality is excellent. 1 ref., 5 figs.

  15. Intelligent indexing: a semi-automated, trainable system for field labeling

    Science.gov (United States)

    Clawson, Robert; Barrett, William

    2015-01-01

    We present Intelligent Indexing: a general, scalable, collaborative approach to the indexing and transcription of non-machine-readable documents that exploits visual consensus and group labeling while harnessing human recognition and domain expertise. In our system, indexers work directly on the page, and with minimal context switching can navigate the page, enter labels, and interact with the recognition engine. Interaction with the recognition engine occurs through preview windows that allow the indexer to quickly verify and correct recommendations. This interaction is far superior to conventional, tedious, inefficient post-correction and editing. Intelligent Indexing is a trainable system that improves over time and can provide benefit even without prior knowledge. A user study was performed to compare Intelligent Indexing to a basic, manual indexing system. Volunteers reported that using Intelligent Indexing is less mentally fatiguing and more enjoyable than the manual indexing system. Their results also show that it significantly reduces (by 30.2%) the time required to index census records, while maintaining comparable accuracy. (A video demonstration is available at http://youtube.com/gqdVzEPnBEw)

  16. Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.

    Science.gov (United States)

    Zardecki, A; Tam, W G

    1982-07-01

    The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of the variation of the detector field of view the behavior of the gain factor is studied. Numerical results modeling the laser beam propagation in fog, cloud, and rain are presented.
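
    A gain factor in the sense used above compares the received power with the unscattered (Beer-Lambert) prediction for the same path. A trivial sketch of that bookkeeping (our own naming; the paper's transmission functions additionally depend on the receiver field of view):

        import numpy as np

        def gain_factor(received_w, p0_w, extinction_per_m, path_m):
            """G = P_received / P_Beer-Lambert; G > 1 means the field of view
            is collecting multiply scattered light on top of the direct beam."""
            direct_w = p0_w * np.exp(-extinction_per_m * path_m)
            return received_w / direct_w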

  17. Cubic Dresselhaus interaction parameter from quantum corrections to the conductivity in the presence of an in-plane magnetic field

    Science.gov (United States)

    Marinescu, D. C.

    2017-09-01

    We evaluate the quantum corrections to the conductivity of a two-dimensional electron system with competing Rashba (R) and linear and cubic Dresselhaus (D) spin-orbit interactions in the presence of an in-plane magnetic field B. Within a perturbative approximation, we investigate the interplay between the spin-orbit coupling and the magnetic field in determining the transport regime in two different limiting scenarios: when only one of the linear terms, either Rashba or Dresselhaus, dominates, and at equal linear couplings, when the cubic Dresselhaus term breaks the spin symmetry. In each instance, we find that for B higher than a critical value, the antilocalization correction is suppressed and the effective dephasing time saturates to a constant value determined only by the spin-orbit interaction. At equal R-D linear couplings, this value is directly proportional to the cubic Dresselhaus contribution. In the same regime, the magnetoconductivity is expressed as a simple logarithmic function dependent only on the cubic Dresselhaus constant.

  18. Analysis of the failures and corrective actions for the LHC cryogenics radiation tolerant electronics and its field instruments

    Energy Technology Data Exchange (ETDEWEB)

    Balle, Christoph; Casas, Juan; Vauthier, Nicolas [CERN, TE Department, 1211 Geneva (Switzerland)

    2014-01-29

    The LHC cryogenic system radiation tolerant electronics and their associated field instruments have been in nominal conditions since before the commissioning of the first LHC beams in September 2008. This system is made of about 15’000 field instruments (thermometers, pressure sensors, liquid helium level gauges, electrical heaters and position switches), 7’500 electronic cards and 853 electronic crates. Since mid-2008 a software tool has been deployed that allows an operator to report a problem and then lists the corrective actions. The tool is a great help in detecting recurrent problems that may be tackled by a hardware or software consolidation. The corrective actions range from simple resets to the exchange of defective equipment and the repair of electrical connectors. However, a recurrent problem that heals by itself is present on some channels. This type of fault is extremely difficult to diagnose; it appears as a temporary opening of an electrical circuit whose duration can range from a few minutes to several months. This paper presents the main types of problems encountered during the last four years, their evolution over time, the various hardware or software consolidations that have resulted, and whether they have had an impact on the availability of the LHC beam.

  19. Edge-Corrected Mean-Field Hubbard Model: Principle and Applications in 2D Materials

    Directory of Open Access Journals (Sweden)

    Xi Zhang

    2017-05-01

    Full Text Available This work reviews the current progress of tight-binding methods and the recent edge-modified mean-field Hubbard model. Undercoordinated atoms (atoms not fully coordinated) exist at a high rate in nanomaterials, and their impact is often overlooked. A quantum theory was proposed to calculate the electronic structure of nanomaterials by incorporating the bond order-length-strength (BOLS) correlation into the mean-field Hubbard model, i.e., BOLS-HM. Consistency between the BOLS-HM calculation and density functional theory (DFT) calculation on 2D materials verified that (i) bond contractions and potential well depression occur at the edge of graphene, phosphorene, and antimonene nanoribbons; (ii) the physical origin of the band gap opening of graphene, phosphorene, and antimonene nanoribbons lies in the enhancement of edge potentials and hopping integrals due to the shorter and stronger bonds between undercoordinated atoms; (iii) the band gap of 2D material nanoribbons expands as the width decreases due to the increasing under-coordination effects of the edges, which modulate the conductive behaviors; and (iv) non-bond electrons at the edges and atomic vacancies of 2D materials, together with the broken bonds, contribute to the Dirac-Fermi polaron (DFP) with a local magnetic moment.

  20. Corrections for a constant radial magnetic field in the muon g - 2 and electric-dipole-moment experiments in storage rings

    Energy Technology Data Exchange (ETDEWEB)

    Silenko, Alexander J. [Belarusian State University, Research Institute for Nuclear Problems, Minsk (Belarus); Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation)

    2017-10-15

    We calculate the corrections for a constant radial magnetic field in the muon g - 2 and electric-dipole-moment experiments in storage rings. While the correction is negligible for the current generation of g - 2 experiments, it affects the upcoming muon electric-dipole-moment experiment at Fermilab. (orig.)

  1. Corrections for a constant radial magnetic field in the muon g-2 and electric-dipole-moment experiments in storage rings

    Science.gov (United States)

    Silenko, Alexander J.

    2017-10-01

    We calculate the corrections for a constant radial magnetic field in the muon g-2 and electric-dipole-moment experiments in storage rings. While the correction is negligible for the current generation of g-2 experiments, it affects the upcoming muon electric-dipole-moment experiment at Fermilab.

  2. Temporal resolution criterion for correctly simulating relativistic electron motion in a high-intensity laser field

    Energy Technology Data Exchange (ETDEWEB)

    Arefiev, Alexey V. [Institute for Fusion Studies, The University of Texas, Austin, Texas 78712 (United States); Cochran, Ginevra E.; Schumacher, Douglass W. [Physics Department, The Ohio State University, Columbus, Ohio 43210 (United States); Robinson, Alexander P. L. [Central Laser Facility, STFC Rutherford-Appleton Laboratory, Didcot OX11 0QX (United Kingdom); Chen, Guangye [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2015-01-15

    Particle-in-cell codes are now standard tools for studying ultra-intense laser-plasma interactions. Motivated by direct laser acceleration of electrons in sub-critical plasmas, we examine temporal resolution requirements that must be satisfied to accurately calculate electron dynamics in strong laser fields. Using the motion of a single electron in a perfect plane electromagnetic wave as a test problem, we show surprising deterioration of the numerical accuracy with increasing wave amplitude a{sub 0} for a given time-step. We go on to show analytically that the time-step must be significantly less than λ/ca{sub 0} to achieve good accuracy. We thus propose adaptive electron sub-cycling as an efficient remedy.
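
    The stated criterion translates directly into a time-step bound. Below is a minimal sketch; the safety factor is an illustrative assumption, since the abstract prescribes only that the time-step be significantly less than λ/(c a_0).

```python
def max_timestep(wavelength_m, a0, safety=0.1):
    """Upper bound on the particle-push time-step for an electron in a
    plane wave of normalized amplitude a0, following the criterion that
    dt must be significantly less than lambda/(c*a0). The safety factor
    is an illustrative choice, not a value prescribed by the paper."""
    c = 299_792_458.0  # speed of light [m/s]
    return safety * wavelength_m / (c * a0)

# Example: a 0.8 um laser at a0 = 10 requires dt far below the usual
# fraction-of-a-laser-period rule of thumb.
dt = max_timestep(0.8e-6, a0=10.0)
print(f"dt_max ~ {dt:.2e} s vs laser period {0.8e-6 / 299_792_458.0:.2e} s")
```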

  3. Quantum-corrected plasmonic field analysis using a time domain PMCHWT integral equation

    KAUST Repository

    Uysal, Ismail E.

    2016-03-13

    When two structures are within sub-nanometer distance of each other, quantum tunneling, i.e., electrons "jumping" from one structure to another, becomes relevant. Classical electromagnetic solvers do not directly account for this additional path of current. In this work, an auxiliary tunnel made of Drude material is used to "connect" the structures as a support for this current path (R. Esteban et al., Nat. Commun., 2012). The plasmonic fields on the resulting connected structure are analyzed using a time domain surface integral equation solver. Time domain samples of the dispersive medium Green function and the dielectric permittivities are computed from the analytical inverse Fourier transform applied to the rational function representation of their frequency domain samples.

  4. Prisons and Correctional Facilities, Located during MicroData field address collection 2004-2006. Kept in Spillman database for retrieval., Published in 2004, Vilas County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Prisons and Correctional Facilities dataset current as of 2004. Located during MicroData field address collection 2004-2006. Kept in Spillman database for retrieval.

  5. Automated TV based system for open field studies: Effects of methamphetamine

    NARCIS (Netherlands)

    Tanger, H.J.; Vanwersch, R.A.P.; Wolthuis, O.L.

    1978-01-01

    A method is described whereby the open field behaviour of rats can be automatically registered using a TV camera, a video converter, an X-Y recorder and a paper-tape puncher. Use is made of the scanning properties of the TV camera to obtain the X and Y coordinates of the rat's position and to print this position on paper tape.

  6. Automating the mean-field method for large dynamic gossip networks

    NARCIS (Netherlands)

    Bakhshi, Rena; Endrullis, Jörg; Endrullis, Stefan; Fokkink, Wan; Haverkort, Boudewijn R.H.M.

    We investigate an abstraction method, called the mean-field method, for the performance evaluation of dynamic networks with pairwise communication between nodes. It allows us to evaluate systems with very large numbers of nodes, that is, systems of a size where traditional performance evaluation techniques are no longer tractable.

  7. Use of a platform in an automated open-field to enhance assessment of anxiety-like behaviors in mice.

    Science.gov (United States)

    Pogorelov, Vladimir M; Lanthorn, Thomas H; Savelieva, Katerina V

    2007-05-15

    The present report describes a setup for simultaneously measuring anxiety-like behaviors and locomotor activity in mice. Animals are placed in a brightly lit, standard automated open-field (OF) in which a rectangular ceramic platform 8 cm high covers one quadrant of the floor. Mice preferred to stay under the platform, avoiding the area with bright illumination. Activities under and outside the platform were measured for 5 min. Chlordiazepoxide and buspirone dose-dependently increased time spent outside the platform (L-Time) and the light distance to total OF distance ratio (L:T-TD) in both genders without changing total OF distance. By contrast, amphetamine decreased L-Time and L:T-TD in males, thus displaying an anxiogenic effect. Imipramine was without selective effect on L-Time or L:T-TD, but decreased total OF distance at the highest dose indicative of a sedative effect. Drug effects were also evaluated in the OF without platform using conventional anxiety measures. Introduction of the platform into the OF apparatus strongly enhanced the sensitivity to anxiolytics. Comparison of strains differing in activity or anxiety levels showed that L-Time and L:T-TD can be used as measures of anxiety-like behavior independent of locomotor activity. Changes in motor activity are reflected in the total distance traveled under and outside the platform. Therefore, the platform test is fully automated, sensitive to both anxiolytic and anxiogenic effects of drugs and genetic phenotypes with little evidence of gender-specific responses, and can be easily utilized by most laboratories measuring behavior.
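
    A minimal sketch of how the two reported measures could be computed from tracked positions; the function, its arguments, and the box-shaped platform region are illustrative assumptions, not the study's tracking software.

```python
import numpy as np

def platform_test_metrics(xy, t, platform_box):
    """Compute L-Time (time spent outside the platform) and L:T-TD (the
    ratio of distance travelled outside the platform to total open-field
    distance) from tracked positions. xy: (N, 2) positions, t: (N,)
    timestamps, platform_box: (xmin, xmax, ymin, ymax) of the covered
    quadrant. Illustrative reimplementation of the reported measures."""
    xmin, xmax, ymin, ymax = platform_box
    under = ((xy[:, 0] >= xmin) & (xy[:, 0] <= xmax) &
             (xy[:, 1] >= ymin) & (xy[:, 1] <= ymax))
    dt = np.diff(t)                                   # interval durations
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # step lengths
    light_time = dt[~under[:-1]].sum()                # L-Time
    total_dist = steps.sum()
    light_dist = steps[~under[:-1]].sum()
    ratio = light_dist / total_dist if total_dist > 0 else 0.0  # L:T-TD
    return light_time, ratio
```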

  8. Operational experience with open communication in the field of building automation at the IBM center Ehningen; GA-Betriebserfahrung - mit offener Kommunikation im Informatikzentrum Ehningen

    Energy Technology Data Exchange (ETDEWEB)

    Damnig, A [IBM Deutschland Informationssysteme GmbH, Ehningen bei Boeblingen (Germany)

    1995-12-31

    In chapter 21 of the anthology about building control the operational experience with open communication in the field of building automation is discussed. The following aspects are discussed: building automation at IBM in Ehningen, the FACN experience, what has been achieved? Energy and operation optimisation. (BWI) [Deutsch] Kapitel 21 des Sammelbandes ueber Building Control ist dem Thema der Betriebserfahrung mit Gebaeudeautomationen mit offener Kommunikation gewidmet. In diesem Zusammenhang werden folgende Themenbereiche angesprochen: Gebaeudeautomation bei IBM in Ehningen, das FACN-Protokoll; Betriebserfahrungen; Was wurde erreicht?; Energie- und Betriebsoptimierungen. (BWI)

  9. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by the SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of the brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
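
    The record describes a mixture model fitted in an expectation-maximization framework. As a hedged sketch, the standard Gaussian-mixture EM backbone is shown below; the paper's actual method replaces the Gaussian components with stomped-normal ones, adds a uniform outlier class and rough-set lower/boundary regions, and couples the labels through an HMRF prior, none of which is reproduced here.

```python
import numpy as np

def em_gmm_1d(x, k, iters=100, seed=0):
    """Plain Gaussian-mixture EM on a 1D array of voxel intensities.
    Only the common EM backbone of such segmentation methods; not the
    authors' stomped-normal / rough-sets / HMRF algorithm."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                       # initial class means
    var = np.full(k, x.var())                   # initial class variances
    w = np.full(k, 1.0 / k)                     # initial mixing weights
    for _ in range(iters):
        # E-step: responsibility of each class for each voxel
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var
```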

  10. Simultaneous PET-MR acquisition and MR-derived motion fields for correction of non-rigid motion in PET

    International Nuclear Information System (INIS)

    Tsoumpas, C.; Mackewn, J.E.; Halsted, P.; King, A.P.; Buerger, C.; Totman, J.J.; Schaeffter, T.; Marsden, P.K.

    2010-01-01

    Positron emission tomography (PET) provides an accurate measurement of radiotracer concentration in vivo, but performance can be limited by subject motion which degrades spatial resolution and quantitative accuracy. This effect may become a limiting factor for PET studies in the body as PET scanner technology improves. In this work, we propose a new approach to address this problem by employing motion information from images measured simultaneously using a magnetic resonance (MR) scanner. The approach is demonstrated using an MR-compatible PET scanner and PET-MR acquisition with a purpose-designed phantom capable of non-rigid deformations. Measured, simultaneously acquired MR data were used to correct for motion in PET, and results were compared with those obtained using motion information from PET images alone. Motion artefacts were significantly reduced and the PET image quality and quantification was significantly improved by the use of MR motion fields, whilst the use of PET-only motion information was less successful. Combined PET-MR acquisitions potentially allow PET motion compensation in whole-body acquisitions without prolonging PET acquisition time or increasing radiation dose. This, to the best of our knowledge, is the first study to demonstrate that simultaneously acquired MR data can be used to estimate and correct for the effects of non-rigid motion in PET. (author)
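
    A minimal sketch of the core correction step, warping each gated PET frame into a reference position with an MR-derived displacement field. The array layout and helper names are assumptions, and the published pipeline involves registration and reconstruction details omitted here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to_reference(pet_frame, disp):
    """Warp one gated PET frame into the reference position using an
    MR-derived displacement field. disp has shape (3, nz, ny, nx) and is
    assumed to give, for every reference voxel, the (z, y, x) offset to
    the voxel it came from (a pull-back convention)."""
    grid = np.indices(pet_frame.shape).astype(float)  # reference grid
    sample_at = grid + disp                           # source coordinates
    return map_coordinates(pet_frame, sample_at, order=1, mode="nearest")

# Motion-corrected image: average of all warped frames (hypothetical data;
# `frames` is a list of 3D PET gates, `fields` the matching displacements).
# corrected = np.mean(
#     [warp_to_reference(f, d) for f, d in zip(frames, fields)], axis=0)
```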

  11. Evaluation of a portable automated serum chemistry analyzer for field assessment of harlequin ducks, Histrionicus histrionicus

    Science.gov (United States)

    Stoskopf, Michael K.; Mulcahy, Daniel M.; Esler, Daniel N.

    2010-01-01

    A portable analytical chemistry analyzer was used to make field assessments of wild harlequin ducks (Histrionicus histrionicus) in association with telemetry studies of winter survival in Prince William Sound, Alaska. We compared serum chemistry results obtained on-site with results from a traditional laboratory. Particular attention was paid to serum glucose and potassium concentrations as potential indicators of high-risk surgical candidates based on evaluation of the field data. The median differential for glucose values ( = 8 2) between methods was 0.6 mmol/L (quartiles 0.3 and 0.9 mmol/L) with the median value higher when assayed on site. Analysis of potassium on site returned a median of 2.7 mmol/L ( = 8 8 ; quartiles 2.4 and 3.0 mmol/L). Serum potassium values were too low for quantitation by the traditional laboratory. Changes in several serum chemistry values following a three-day storm during the study support the value of on site evaluation of serum potassium to identify presurgical patients with increased anesthetic risk.

  12. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    Science.gov (United States)

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin / M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of the LGK Perfexion, respectively. Finally, the PTW microDiamond ratio M_Qclin^fclin / M_Qmsr^fmsr for the linear accelerator varied from the MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with a 1.3% deviation). Given the small magnitude of the resulting correction factors, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
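
    In the Alfonso formalism the field output factor is the measured detector ratio multiplied by a detector- and field-specific correction, Ω = (M_clin/M_msr) · k. A minimal sketch of that arithmetic, with illustrative numbers only:

```python
def output_factor(M_clin, M_msr, k_corr):
    """Alfonso small-field formalism: the field output factor Omega is
    the measured ratio M_clin/M_msr times the correction factor k for
    the given detector and field pair."""
    return (M_clin / M_msr) * k_corr

def k_from_reference(omega_ref, M_clin, M_msr):
    """Conversely, a correction factor can be derived by comparing the
    detector ratio to a reference output factor (e.g., Monte Carlo or a
    corrected diode), as done in the study."""
    return omega_ref / (M_clin / M_msr)

# Illustrative use only (hypothetical readings, 2% assumed correction):
print(output_factor(M_clin=0.832, M_msr=1.000, k_corr=0.980))
```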

  13. Automated x-ray/light field congruence using the LINAC EPID panel

    Energy Technology Data Exchange (ETDEWEB)

    Polak, Wojciech [Department of Medical Physics, Royal Surrey County Hospital, Guildford GU2 7XX (United Kingdom); Department of Medical Physics, Radiotherapy Section, Queen Alexandra Hospital NHS Trust, Portsmouth PO6 3LY (United Kingdom)]; O'Doherty, Jim [Division of Imaging Sciences and Biomedical Engineering, King's College London, London SE1 7EH (United Kingdom); Department of Medical Physics, Royal Surrey County Hospital, Guildford GU2 7XX (United Kingdom)]; Jones, Matt [Department of Medical Physics, Royal Surrey County Hospital, Guildford GU2 7XX (United Kingdom)]

    2013-03-15

    Purpose: X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test and thus subsequently reducing overall cost, processing, and analysis time, removing operator dependency and the requirement to sustain the departmental film processor. Methods: This work describes a novel method of utilizing the EPID together with a custom inhouse designed jig and automatic image processing software allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Results: Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted through semiautomatic processing by four independent operators due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. Conclusions: The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. Additionally, congruence testing can be

  14. Automated x-ray/light field congruence using the LINAC EPID panel

    International Nuclear Information System (INIS)

    Polak, Wojciech; O’Doherty, Jim; Jones, Matt

    2013-01-01

    Purpose: X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test and thus subsequently reducing overall cost, processing, and analysis time, removing operator dependency and the requirement to sustain the departmental film processor. Methods: This work describes a novel method of utilizing the EPID together with a custom inhouse designed jig and automatic image processing software allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Results: Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted through semiautomatic processing by four independent operators due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. Conclusions: The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. Additionally, congruence testing can be

  15. Automated x-ray/light field congruence using the LINAC EPID panel.

    Science.gov (United States)

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-01

    X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test and thus subsequently reducing overall cost, processing, and analysis time, removing operator dependency and the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom inhouse designed jig and automatic image processing software allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted through semiautomatic processing by four independent operators due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. Additionally, congruence testing can be easily performed for all four cardinal
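
    One ingredient of such software is estimating a field size from an EPID profile. Below is a hedged sketch using the common 50%-of-maximum edge definition; the actual program described above also locates the jig's light-field marks and handles scaling and calibration, which are omitted here.

```python
import numpy as np

def field_width_50pct(profile, pixel_mm):
    """Estimate the radiation field width from a 1D EPID profile as the
    distance between the two 50%-of-maximum crossings, with linear
    interpolation between pixels. Assumes the field edges lie fully
    inside the profile. Simplified stand-in for the paper's software."""
    half = 0.5 * profile.max()
    above = np.where(profile >= half)[0]
    lo, hi = above[0], above[-1]
    # Linear interpolation of the crossing position in each penumbra.
    cross = lambda i, j: i + (half - profile[i]) / (profile[j] - profile[i])
    left = cross(lo - 1, lo)
    right = cross(hi, hi + 1)
    return (right - left) * pixel_mm

# Congruence check (illustrative): compare x-ray width from the EPID
# profile with the light-field width measured from the jig marks.
# delta_mm = field_width_50pct(xray_profile, 0.392) - light_width_mm
```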

  16. Analytical Formulation of the Electric Field Induced by Electrode Arrays: Towards Automated Dielectrophoretic Cell Sorting

    Directory of Open Access Journals (Sweden)

    Vladimir Gauthier

    2017-08-01

    Full Text Available Dielectrophoresis is defined as the motion of an electrically polarisable particle in a non-uniform electric field. Current dielectrophoretic devices enabling the sorting of cells are mostly controlled in open loop, applying a predefined voltage on micro-electrodes. Closed-loop control of these devices would enable advanced functionalities and more robust behavior. Currently, the numerical models of the dielectrophoretic force are too complex to be used in real-time closed-loop control. The aim of this paper is to propose a new type of model usable in this framework. We propose an analytical model of the electric field based on Fourier series to compute the dielectrophoretic force produced by parallel electrode arrays. Indeed, this method provides an analytical expression of the electric potential which decouples the geometrical factors (parameters of our system), the voltages applied on the electrodes (inputs of our system), and the positions of the cells (outputs of our system). Applying Newton's laws to each cell then makes it easy to generate a dynamic model of the cell positions (outputs) as a function of the voltages on the electrodes (inputs). This dynamic model of our system is required to design the future closed-loop control law. The predicted dielectrophoretic forces are compared to a numerical simulation based on a finite element model using COMSOL software. The model presented in this paper enables the dielectrophoretic force applied to a cell by an electrode array to be computed in a few tenths of a millisecond. This model could consequently be used in future works for closed-loop control of dielectrophoretic devices.
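
    Given the field from such an analytical potential, the force on a cell follows the standard time-averaged dielectrophoresis expression F = 2π ε_m r³ Re(K) ∇|E_rms|². A minimal sketch of that expression; the Fourier-series field itself is not reproduced here, and the gradient term is simply an input.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def clausius_mossotti(eps_p, eps_m):
    """Real part of the Clausius-Mossotti factor K for (possibly complex)
    particle and medium permittivities."""
    return ((eps_p - eps_m) / (eps_p + 2 * eps_m)).real

def dep_force(r, eps_m_rel, cm_real, grad_E2):
    """Time-averaged DEP force on a spherical particle of radius r [m]:
    F = 2*pi*eps_m*r^3*Re(K)*grad|E_rms|^2. In the paper the gradient
    would come from the Fourier-series potential; here it is just an
    input array [V^2/m^3]."""
    return 2 * np.pi * EPS0 * eps_m_rel * r**3 * cm_real * grad_E2

# Illustrative: a 5 um cell in water (eps_r ~ 78), negative DEP regime.
K = clausius_mossotti(eps_p=60.0, eps_m=78.0)
print(dep_force(5e-6, 78.0, K, grad_E2=np.array([1e13, 0.0])))
```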

  17. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    International Nuclear Information System (INIS)

    Calderon, E; Siergiej, D

    2014-01-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to show large deviations as field sizes decrease, and no accepted standard exists to resolve this difference. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size was derived for each detector. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3% to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0-3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from up to 14.8% to within 3.4%.
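
    A minimal sketch of the daisy-chaining and correction arithmetic described above; the variable names are illustrative, and the fitted k(field size) values would come from the published Monte Carlo data.

```python
def daisy_chained_of(m_small_diode, m_inter_diode, m_inter_ch, m_ref_ch):
    """Daisy-chained output factor: the diode carries the measurement
    from the small field to an intermediate field, and an ion chamber
    carries it from the intermediate field to the 10.4 cm x 10.4 cm
    reference, so each detector is only used where it responds reliably."""
    return (m_small_diode / m_inter_diode) * (m_inter_ch / m_ref_ch)

def corrected_of(raw_of, k_mc):
    """Apply a published Monte Carlo derived correction factor (e.g., an
    interpolated k(field size) for the EDGE or A16) to a measured value."""
    return raw_of * k_mc

# Illustrative: a 10 mm cone, hypothetical readings and a 3% correction.
raw = daisy_chained_of(0.662, 0.915, 0.921, 1.000)
print(corrected_of(raw, k_mc=0.970))
```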

  18. Field-portable and automated immunosensors for hexavalent uranium, other heavy metals and chelators. Final Report

    International Nuclear Information System (INIS)

    Blake, Diane A.

    2009-01-01

    This is the final technical report for this 10-year project. A better understanding of in situ bioremediation processes and the development of strategies to enhance bacterial remediation of contaminated sites depend either directly or indirectly upon accurate detection and measurement of organics, metals and other toxic elements prior to, during and following the remediation process. Detection and measurement costs are presently high due to the complex methodologies required for analysis. Remediation costs could be significantly reduced through the use of rapid, simple on-site methods. The cost of laboratory analysis continues to climb, and the outlay for the assessment of a single site can frequently reach hundreds of thousands of dollars. One estimate suggests that the use of low cost field methods (defined as less than $100/test) with 5-20% standard laboratory confirmation could reduce analytical costs by greater than 70%. Perhaps as important as the cost of analysis is the ability to obtain data about the remediation process in near real-time. The instruments normally used for environmental analysis of uranium (atomic absorption spectrophotometer, inductively coupled plasma emission spectrometer, IC-MS and kinetic phosphorescence analyzer) can be quite expensive; these instruments are thus usually located only in centralized facilities. Environmental samples must therefore be transported to these facilities and often wait in a queue before they can be analyzed. Both sample transport and time-in-queue lead to long turn-around times (days to weeks). Such long turn-around times are especially worrisome during site remediation, especially when an unexpected finding might dictate a change in the methodologies being employed at the site. The goal of this project was to develop sensors that could yield reliable data in near real-time (< 1 hour), be field-ready (i.e., simple, durable and accurate) and present low costs (<< $100/assay and < $5,000 for the initial equipment).

  19. “Booster” training: Evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest

    Science.gov (United States)

    Sutton, Robert M.; Niles, Dana; Meaney, Peter A.; Aplenc, Richard; French, Benjamin; Abella, Benjamin S.; Lengetti, Evelyn L.; Berg, Robert A.; Helfaer, Mark A.; Nadkarni, Vinay

    2013-01-01

    Objective To investigate the effectiveness of brief bedside “booster” cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Design Prospective, randomized trial. Setting General pediatric wards at Children’s Hospital of Philadelphia. Subjects Sixty-nine Basic Life Support–certified hospital-based providers. Intervention CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Measurements and Main Results Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min−1 and depth >38 mm), and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved in all three arms (instructor-only training: rate 52% to 87% [p < .01] and overall CPR compliance 43% to 78%; automated defibrillator feedback only: overall CPR compliance 35% to 96%; instructor training combined with automated feedback: rate 48% to 100% and overall CPR compliance 30% to 100%). Before booster CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests. PMID:20625336

  20. "Booster" training: evaluation of instructor-led bedside cardiopulmonary resuscitation skill training and automated corrective feedback to improve cardiopulmonary resuscitation compliance of Pediatric Basic Life Support providers during simulated cardiac arrest.

    Science.gov (United States)

    Sutton, Robert M; Niles, Dana; Meaney, Peter A; Aplenc, Richard; French, Benjamin; Abella, Benjamin S; Lengetti, Evelyn L; Berg, Robert A; Helfaer, Mark A; Nadkarni, Vinay

    2011-05-01

    To investigate the effectiveness of brief bedside "booster" cardiopulmonary resuscitation (CPR) training to improve CPR guideline compliance of hospital-based pediatric providers. Prospective, randomized trial. General pediatric wards at Children's Hospital of Philadelphia. Sixty-nine Basic Life Support-certified hospital-based providers. CPR recording/feedback defibrillators were used to evaluate CPR quality during simulated pediatric arrest. After a 60-sec pretraining CPR evaluation, subjects were randomly assigned to one of three instructional/feedback methods to be used during CPR booster training sessions. All sessions (training/CPR manikin practice) were of equal duration (2 mins) and differed only in the method of corrective feedback given to participants during the session. The study arms were as follows: 1) instructor-only training; 2) automated defibrillator feedback only; and 3) instructor training combined with automated feedback. Before instruction, 57% of the care providers performed compressions within guideline rate recommendations (rate >90 min(-1) and depth >38 mm), and 36% met overall CPR compliance (rate and depth within targets). After instruction, guideline compliance improved in all three arms (instructor-only training: rate 52% to 87% [p < .01] and overall CPR compliance 43% to 78%; automated defibrillator feedback only: overall CPR compliance 35% to 96%; instructor training combined with automated feedback: rate 48% to 100% and overall CPR compliance 30% to 100%). Before booster CPR instruction, most certified Pediatric Basic Life Support providers did not perform guideline-compliant CPR. After a brief bedside training, CPR quality improved irrespective of training content (instructor vs. automated feedback). Future studies should investigate bedside training to improve CPR quality during actual pediatric cardiac arrests.

  1. Field-programmable gate array based controller for multi spot light-addressable potentiometric sensors with integrated signal correction mode

    Energy Technology Data Exchange (ETDEWEB)

    Werner, Carl Frederik; Schusser, Sebastian; Spelthahn, Heiko [Aachen University of Applied Sciences, Juelich Campus, Institute of Nano- and Biotechnologies, Heinrich-Mussmann-Strasse 1, 52428 Juelich (Germany); Institute of Bio- and Nanosystems (IBN-2), Research Centre Juelich GmbH, 52425 Juelich (Germany); Wagner, Torsten; Yoshinobu, Tatsuo [Tohoku University, Department of Electronic Engineering, 6-6-05 Aramaki Aza Aoba, Aoba-ku, Sendai, Miyagi 980-8579 (Japan); Schoening, Michael J., E-mail: schoening@fh-aachen.de [Aachen University of Applied Sciences, Juelich Campus, Institute of Nano- and Biotechnologies, Heinrich-Mussmann-Strasse 1, 52428 Juelich (Germany); Institute of Bio- and Nanosystems (IBN-2), Research Centre Juelich GmbH, 52425 Juelich (Germany)

    2011-11-01

    Highlights: > Flexible, up-scalable design of a light-addressable potentiometric sensor set-up. > Utilisation of a field-programmable gate array to address LAPS measurement spots. > Measurements in amplitude mode and phase mode for different pH solutions. > Amplitude, phase and frequency behaviour of LAPS for single and multiple light stimuli. > Signal calibration method by brightness control to compensate systematic errors. - Abstract: A light-addressable potentiometric sensor (LAPS) can measure the concentration of one or several analytes at the sensor surface simultaneously in a spatially resolved manner. A modulated light pointer stimulates the semiconductor structure at the area of interest and a responding photocurrent can be read out. By simultaneous stimulation of several areas with light pointers of different modulation frequencies, the read-out can be performed at the same time. With the newly proposed controller electronics based on a field-programmable gate array (FPGA), it is possible to control the modulation frequencies, phase shifts, and light brightness of multiple light pointers independently and simultaneously. Thus, it is possible to investigate the frequency response of the sensor and to examine the analyte concentration by determining the surface potential with the help of current/voltage curves and phase/voltage curves. Additionally, the ability to individually change the light intensities of each light pointer is used to perform signal correction.
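
    The frequency-division idea behind multi-spot LAPS read-out can be illustrated with a software lock-in; this is a hedged sketch of the principle only, not the FPGA implementation, and all names and values are assumptions.

```python
import numpy as np

def demodulate(photocurrent, fs, freqs):
    """Recover the amplitude and phase contributed by each light pointer
    from a single summed photocurrent by digital lock-in detection at
    each modulation frequency. The FPGA performs the equivalent with
    hardware mixers and accumulators."""
    t = np.arange(len(photocurrent)) / fs
    out = {}
    for f in freqs:
        ref = np.exp(-2j * np.pi * f * t)      # complex reference
        z = 2.0 * np.mean(photocurrent * ref)  # complex demodulation
        out[f] = (np.abs(z), np.angle(z))      # (amplitude, phase)
    return out

# Example: three spots modulated at 1.0, 1.3 and 1.7 kHz, 100 kHz sampling.
# amps_phases = demodulate(i_photo, fs=100e3,
#                          freqs=[1000.0, 1300.0, 1700.0])
```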

  2. Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors: Automated measurement development for full field digital mammography

    International Nuclear Information System (INIS)

    Fowler, E. E.; Sellers, T. A.; Lu, B.; Heine, J. J.

    2013-01-01

    Purpose: The Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors are used for standardized mammographic reporting and are assessed visually. This reporting is clinically relevant because breast composition can impact mammographic sensitivity and is a breast cancer risk factor. New techniques are presented and evaluated for generating automated BI-RADS breast composition descriptors using both raw and calibrated full field digital mammography (FFDM) image data. Methods: A matched case-control dataset with FFDM images was used to develop three automated measures for the BI-RADS breast composition descriptors. Histograms of each calibrated mammogram in the percent glandular (pg) representation were processed to create the new BR_pg measure. Two previously validated measures of breast density derived from calibrated and raw mammograms were converted to the new BR_vc and BR_vr measures, respectively. These three measures were compared with the radiologist-reported BI-RADS composition assessments from the patient records. The authors used two optimization strategies with differential evolution to create these measures: method-1 used breast cancer status, and method-2 matched the reported BI-RADS descriptors. Weighted kappa (κ) analysis was used to assess the agreement between the new measures and the reported measures. Each measure's association with breast cancer was evaluated with odds ratios (ORs) adjusted for body mass index, breast area, and menopausal status. ORs were estimated as per unit increase with 95% confidence intervals. Results: The three BI-RADS measures generated by method-1 had κ between 0.25 and 0.34. These measures were significantly associated with breast cancer status in the adjusted models: (a) OR = 1.87 (1.34, 2.59) for BR_pg; (b) OR = 1.93 (1.36, 2.74) for BR_vc; and (c) OR = 1.37 (1.05, 1.80) for BR_vr. The measures generated by method-2 had κ between 0.42 and 0.45. Two of these measures were significantly associated with breast cancer status.

  3. Importance of the Decompensative Correction of the Gravity Field for Study of the Upper Crust: Application to the Arabian Plate and Surroundings

    OpenAIRE

    M. K. Kaban; Sami El Khrepy; Nassir Al-Arifi

    2017-01-01

    The isostatic correction represents one of the most useful "geological" reduction methods of the gravity field. With this correction it is possible to remove a significant part of the effect of deep density heterogeneity, which dominates in the Bouguer gravity anomalies. However, even this reduction does not show the full gravity effect of unknown anomalies in the upper crust, since their impact is substantially reduced by the isostatic compensation. We analyze a so-called decompensative correction of the gravity field.

  4. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...... distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more...

  5. Gypsy moth (Lepidoptera: Lymantriidae) flight behavior and phenology based on field-deployed automated pheromone-baited traps

    Science.gov (United States)

    Patrick C. Tobin; Kenneth T. Klein; Donna S. Leonard

    2009-01-01

    Populations of the gypsy moth, Lymantria dispar (L.), are extensively monitored in the United States through the use of pheromone-baited traps. We report on the use of automated pheromone-baited traps that use a recording sensor and data logger to record the unique date-time stamp of males as they enter the trap. We deployed a total of 352 automated traps...

  6. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    OpenAIRE

    Fiona A. Desland; Aqeela Afzal; Zuha Warraich; J Mocco

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis with a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia...

  7. Sci—Fri AM: Mountain — 01: Validation of a new formulism and the related correction factors on output factor determination for small photon fields

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yizhen; Younge, Kelly; Nielsen, Michelle; Mutanga, Theodore [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Cui, Congwu [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, ON (Canada); Das, Indra J. [Radiation Oncology Dept., Indiana University- School of Medicine, Indianapolis, IN (United States)

    2014-08-15

    Small field dosimetry measurements, including output factors, are difficult due to the lack of charged-particle equilibrium, occlusion of the radiation source, the finite size of detectors, and the non-water equivalence of detector components. With the available detectors, significant variations can be measured that would lead to incorrect dose delivery to patients. The IAEA/AAPM have provided a framework and formulation to correct the detector response in small photon fields. Monte Carlo derived correction factors for some commonly used small field detectors are now available; however, validation had not been performed prior to this study. An Exradin A16 chamber, an EDGE detector and an SFD detector were used to measure output factors for a series of conical fields (5-30 mm) on a Varian iX linear accelerator. Discrepancies of up to 20%, 10% and 6% were observed for the 5, 7.5 and 10 mm cones between the initial output factors measured by the EDGE detector and the A16 ion chamber, while the discrepancies for conical fields larger than 10 mm were less than 4%. After the application of the correction, the output factors agree with each other to within 1%. Caution is needed when determining output factors for small photon fields, especially for fields 10 mm in diameter or smaller. More than one type of detector should be used, each with proper corrections applied to the measurement results. It is concluded that with the application of correction factors to appropriately chosen detectors, output can be measured accurately for small fields.

  8. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field.

    Science.gov (United States)

    Peirone, Laura S; Pereyra Irujo, Gustavo A; Bolton, Alejandro; Erreguerena, Ignacio; Aguirrezábal, Luis A N

    2018-01-01

    Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  9. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field

    Directory of Open Access Journals (Sweden)

    Laura S. Peirone

    2018-05-01

    Full Text Available Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  10. State-of-the art comparability of corrected emission spectra. 2. Field laboratory assessment of calibration performance using spectral fluorescence standards.

    Science.gov (United States)

    Resch-Genger, Ute; Bremser, Wolfram; Pfeifer, Dietmar; Spieles, Monika; Hoffmann, Angelika; DeRose, Paul C; Zwinkels, Joanne C; Gauthier, François; Ebert, Bernd; Taubert, R Dieter; Voigt, Jan; Hollandt, Jörg; Macdonald, Rainer

    2012-05-01

    In the second part of this two-part series on the state-of-the-art comparability of corrected emission spectra, we have extended this assessment to the broader community of fluorescence spectroscopists by involving 12 field laboratories that were randomly selected on the basis of their fluorescence measuring equipment. These laboratories performed a reference material (RM)-based fluorometer calibration with commercially available spectral fluorescence standards following a standard operating procedure that involved routine measurement conditions and the data evaluation software LINKCORR developed and provided by the Federal Institute for Materials Research and Testing (BAM). This instrument-specific emission correction curve was subsequently used for the determination of the corrected emission spectra of three test dyes, X, QS, and Y, revealing an average accuracy of 6.8% for the corrected emission spectra. This compares well with the relative standard uncertainties of 4.2% for physical standard-based spectral corrections demonstrated in the first part of this study (previous paper in this issue) involving an international group of four expert laboratories. The excellent comparability of the measurements of the field laboratories also demonstrates the effectiveness of RM-based correction procedures.

  11. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only be sufficiently and correctly turned to profit, however, if the application of these equipment techniques is further improved and if its scale stands in a reasonable ratio to a definite efficiency. (orig.)

  12. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only be sufficiently and correctly turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a reasonable ratio to a definite efficiency. (orig.)

  13. Automated drumlin shape and volume estimation using high resolution LiDAR imagery (Curvature Based Relief Separation): A test from the Wadena Drumlin Field, Minnesota

    Science.gov (United States)

    Yu, Peter; Eyles, Nick; Sookhan, Shane

    2015-10-01

    Resolving the origin(s) of drumlins and related megaridges in areas of megascale glacial lineations (MSGL) left by paleo-ice sheets is critical to understanding how ancient ice sheets interacted with their sediment beds. MSGL is now linked with fast-flowing ice streams, but there is a broad range of erosional and depositional models. Further progress relies on constraining fluxes of subglacial sediment at the ice sheet base, which in turn depends on morphological data such as landform shape and elongation and, most importantly, landform volume. Past practice in determining shape has employed a broad range of geomorphological methods, from strictly visualisation techniques to more complex semi-automated and automated drumlin extraction methods. This paper reviews and builds on currently available visualisation, semi-automated and automated extraction methods, and presents a new Curvature Based Relief Separation (CBRS) technique for drumlin mapping. This uses curvature analysis to generate a base level from which topography can be normalized and drumlin volume can be derived. This methodology is tested using a high resolution (3 m) LiDAR elevation dataset from the Wadena Drumlin Field, Minnesota, USA, which was constructed by the Wadena Lobe of the Laurentide Ice Sheet ca. 20,000 years ago and which as a whole contains 2000 drumlins across an area of 7500 km². This analysis demonstrates that CBRS provides an objective and robust procedure for automated drumlin extraction. There is strong agreement with manually selected landforms, but the method is also capable of resolving features that were not detectable manually, thereby considerably expanding the known population of streamlined landforms. CBRS provides an effective automatic method for visualising large areas of the streamlined beds of former ice sheets and for modelling sediment fluxes below ice sheets.
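
    A toy version of the curvature-based relief separation idea, under the assumption that concave-up (inter-drumlin) cells define the base level; the thresholds, smoothing, and curvature proxy below are illustrative choices rather than the paper's calibrated procedure.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def drumlin_volume_cbrs(dem, cell, curv_thresh=0.0, sigma=2.0):
    """Toy Curvature Based Relief Separation: treat concave-up cells as
    the inter-drumlin base surface, interpolate a base level from them,
    and integrate DEM minus base for drumlin volume. dem: 2D elevation
    array [m], cell: grid spacing [m]."""
    z = gaussian_filter(dem, sigma)          # suppress LiDAR noise
    zy, zx = np.gradient(z, cell)            # first derivatives
    zyy, _ = np.gradient(zy, cell)
    _, zxx = np.gradient(zx, cell)
    curv = zxx + zyy                         # Laplacian as curvature proxy
    base_mask = curv > curv_thresh           # concave-up cells ~ base level
    yy, xx = np.nonzero(base_mask)
    base = griddata((yy, xx), dem[base_mask],
                    tuple(np.indices(dem.shape)), method="linear")
    relief = np.clip(dem - base, 0, None)    # normalized (separated) relief
    volume = np.nansum(relief) * cell * cell # integrate over cell area
    return relief, volume
```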

  14. Quantum corrections to conductivity observed at intermediate magnetic fields in a high mobility GaAs/AlGaAs 2-dimensional electron gas

    International Nuclear Information System (INIS)

    Taboryski, R.; Veje, E.; Lindelof, P.E.

    1990-01-01

    Magnetoresistance with the field perpendicular to the 2-dimensional electron gas in a high mobility GaAs/AlGaAs heterostructure at low temperatures is studied. At the lowest magnetic fields we observe weak localization. At magnetic fields where the product of the mobility and the magnetic field is of the order of unity, the quantum correction to conductivity due to the electron-electron interaction is identified as a source of magnetoresistance. A consistent analysis of experiments in this regime is performed for the first time. In addition to the well known electron-electron term with the expected temperature dependence, we find a new type of temperature independent quantum correction, which varies logarithmically with mobility. (orig.)

  15. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed focusing on their general structure and requirement specific to the calculation of radiative corrections. Detailed description of the system and its performance is presented taking GRACE as a concrete example. (author)

  16. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Barrett, J C; Knill, C

    2016-01-01

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.

  17. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, J C [Wayne State University, Detroit, MI (United States); Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI (United States); Knill, C [Wayne State University, Detroit, MI (United States); Beaumont Hospital, Canton, MI (United States)

    2016-06-15

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.

  18. Global daily precipitation fields from bias-corrected rain gauge and satellite observations. Pt. 1. Design and development

    Energy Technology Data Exchange (ETDEWEB)

    Kottek, M.; Rubel, F. [Univ. of Veterinary Medicine, Vienna (Austria). Biometeorology Group

    2007-10-15

    Global daily precipitation analyses are mainly based on satellite estimates, often calibrated with monthly ground analyses or merged with model predictions. We argue here that an essential improvement of their accuracy is only possible by incorporation of daily ground measurements. In this work we apply geostatistical methods to compile a global precipitation product based on daily rain gauge measurements. The raw ground measurements, disseminated via the Global Telecommunication System (GTS), are corrected for their systematic measurement errors and interpolated onto a global 1 degree grid. For interpolation, ordinary block kriging is applied with precalculated spatial auto-correlation functions (ACFs). This technique allows the incorporation of additional climate information. First, monthly ACFs are calculated from the daily data; second, they are regionalised according to the five main climatic zones of the Koeppen-Geiger climate classification. The interpolation error, a by-product of kriging, is used to flag grid points as missing if the error is above a predefined threshold. But for many applications missing values constitute a problem. By combining the ground analyses with the daily multi-satellite product of the Global Precipitation Climatology Project (GPCP-1DD), not only are these missing values replaced but the spatial structure of the satellite estimates is also taken into account. Bivariate ordinary co-kriging is applied as the merging method. The ACFs necessary for the gauge and the satellite fields, as well as the corresponding spatial cross-correlation functions (CCFs), are again precalculated for each of the five main climatic zones and for each individual month. As a result, two new global daily data sets for the period 1996 up to today will be available on the Internet (www.gmes-geoland.info): a precipitation product over land, analysed from ground measurements; and a global precipitation product merged from this and the GPCP-1DD multi-satellite product. (orig.)
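
    As an illustration of the interpolation step, the sketch below solves the ordinary kriging system for a single grid point with a precalculated autocorrelation function, working in correlation (unit-sill) units. It is a point-kriging simplification of the block kriging used in the paper, and the exponential ACF with a 300 km correlation length is a placeholder for the climate-zone ACFs described above.

        import numpy as np

        def ordinary_kriging(coords, values, target, acf):
            """Ordinary kriging estimate and normalised kriging variance at
            one target point. acf maps distance to spatial autocorrelation."""
            n = len(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))     # kriging matrix with Lagrange row/col
            A[:n, :n] = acf(d)
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = acf(np.linalg.norm(coords - target, axis=1))
            sol = np.linalg.solve(A, b)
            w, mu = sol[:n], sol[n]
            # The variance is the kriging by-product used to flag grid points
            # as missing when it exceeds a predefined threshold.
            return w @ values, 1.0 - w @ b[:n] - mu

        acf = lambda h: np.exp(-h / 300.0)   # assumed 300 km correlation length
        coords = np.array([[0.0, 0.0], [100.0, 50.0], [200.0, -80.0]])  # km
        gauges = np.array([2.0, 5.0, 1.0])   # mm/day
        print(ordinary_kriging(coords, gauges, np.array([80.0, 10.0]), acf))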

  19. On the truncation of the azimuthal mode spectrum of high-order probes in probe-corrected spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Laitinen, Tommi

    2011-01-01

    Azimuthal mode (m mode) truncation of a high-order probe pattern in probe-corrected spherical near-field antenna measurements is studied in this paper. The results of this paper provide rules for appropriate and sufficient m-mode truncation for non-ideal first-order probes and odd-order probes wi...

  20. On transcending the impasse of respiratory motion correction applications in routine clinical imaging - a consideration of a fully automated data driven motion control framework

    International Nuclear Information System (INIS)

    Kesner, Adam L; Schleyer, Paul J; Büther, Florian; Walter, Martin A; Schäfers, Klaus P; Koo, Phillip J

    2014-01-01

    Positron emission tomography (PET) is increasingly used for the detection, characterization, and follow-up of tumors located in the thorax. However, patient respiratory motion presents a unique limitation that hinders the application of high-resolution PET technology for this type of imaging. Efforts to transcend this limitation have been underway for more than a decade, yet for practical purposes PET remains a modality vulnerable to motion-induced image degradation. Respiratory motion control is not employed in routine clinical operations. In this article, we take the opportunity to highlight some of the recent advancements in data-driven motion control strategies and how they may form an underpinning for what we are presenting as a fully automated data-driven motion control framework. This framework represents an alternative direction for future endeavors in motion control and can conceptually connect individual focused studies with a strategy for addressing big-picture challenges and goals. The online version of this article (doi:10.1186/2197-7364-1-8) contains supplementary material, which is available to authorized users.

  1. On the correctness of the thermoluminescent high-temperature ratio (HTR) method for estimating ionization density effects in mixed radiation fields

    International Nuclear Information System (INIS)

    Bilski, Pawel

    2010-01-01

    The high-temperature ratio (HTR) method, which exploits changes in the LiF:Mg,Ti glow-curve due to high-LET radiation, has been used for several years to estimate LET in an unknown radiation field. As TL efficiency is known to decrease after doses of densely ionizing radiation, a LET estimate is used to correct the TLD-measured values of dose. The HTR method is purely empirical and its general correctness is questionable. The validity of the HTR method was investigated by theoretical simulation of various mixed radiation fields. The LET_eff values estimated with the HTR method for mixed radiation fields were found in general to be incorrect, in some cases underestimating the true values of dose-averaged LET by an order of magnitude. The method produced correct estimates of average LET only in cases of almost mono-energetic fields (i.e. in non-mixed radiation conditions). The value of LET_eff found by the HTR method may therefore be treated as a qualitative indicator of increased LET, but not as a quantitative estimator of average LET. However, HTR-based correction of the TLD-measured dose value (HTR-B method) was found to be quite reliable. In all cases studied, application of this technique improved the result. Most of the measured doses fell within 10% of the true values. A further empirical improvement to the method is proposed. One may therefore recommend the HTR-B method to correct for decreased TL efficiency in mixed high-LET fields.
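
    For reference, the HTR is commonly defined as the high-temperature glow-curve signal relative to the main peak, normalised to the same ratio measured in a reference gamma field; this is a sketch of the usual definition, and the paper's exact normalisation may differ:

        \[
          \mathrm{HTR} \;=\;
          \frac{\left( S_{\mathrm{HT}} / S_{\mathrm{MP}} \right)_{\mathrm{field}}}
               {\left( S_{\mathrm{HT}} / S_{\mathrm{MP}} \right)_{\gamma}}
        \]

    where S_HT is the integrated signal in the high-temperature region of the LiF:Mg,Ti glow curve and S_MP is the main-peak signal.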

  2. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  3. Error Correcting Codes

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Author Affiliations: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  4. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr.Anant Kumar Tiwari,; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated project involves measuring the power factor value from the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
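
    The two calculations at the heart of such a device are easy to state: the power factor follows from the time lag between the voltage and current zero crossings, and the required capacitive compensation follows from the standard reactive-power relation. The sketch below is a generic illustration of those two formulas, not the circuit or firmware of the cited project; the target power factor and step size are assumed values.

        import math

        def power_factor(delta_t_s, mains_hz=50.0):
            """Power factor from the measured time lag between voltage and
            current zero crossings (what a microcontroller timer records)."""
            return math.cos(2.0 * math.pi * mains_hz * delta_t_s)

        def capacitor_steps(p_kw, pf_actual, pf_target=0.95, step_kvar=1.0):
            """Capacitor-bank steps needed to raise the power factor, from
            Qc = P * (tan(acos(pf1)) - tan(acos(pf2)))."""
            q_kvar = p_kw * (math.tan(math.acos(pf_actual)) -
                             math.tan(math.acos(pf_target)))
            return max(0, math.ceil(q_kvar / step_kvar))

        print(power_factor(2e-3))           # 2 ms lag at 50 Hz -> pf ~ 0.81
        print(capacitor_steps(10.0, 0.81))  # 1 kvar steps to reach pf 0.95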

  5. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the

  6. Importance of the Decompensative Correction of the Gravity Field for Study of the Upper Crust: Application to the Arabian Plate and Surroundings

    Science.gov (United States)

    Kaban, Mikhail K.; El Khrepy, Sami; Al-Arifi, Nassir

    2017-01-01

    The isostatic correction represents one of the most useful "geological" reduction methods of the gravity field. With this correction it is possible to remove a significant part of the effect of deep density heterogeneity, which dominates in the Bouguer gravity anomalies. However, even this reduction does not show the full gravity effect of unknown anomalies in the upper crust, since their impact is substantially reduced by the isostatic compensation. We analyze a so-called decompensative correction of the isostatic anomalies, which provides a possibility to separate these effects. It was demonstrated that this correction is very significant at the mid-range wavelengths and may exceed 100 × 10⁻⁵ m/s² (mGal), therefore ignoring this effect would lead to wrong conclusions about the upper crust structure. At the same time, the decompensative correction is very sensitive to the compensation depth and effective elastic thickness of the lithosphere. Therefore, these parameters should be properly determined based on other studies. Based on this technique, we estimate the decompensative correction for the Arabian plate and surrounding regions. The amplitude of the decompensative anomalies reaches ±250 × 10⁻⁵ m/s² (mGal), evidencing both large density anomalies of the upper crust (including sediments) and strong isostatic disturbances of the lithosphere. These results improve the knowledge about the crustal structure in the Middle East.

  7. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    International Nuclear Information System (INIS)

    Soh, R; Lee, J; Harianto, F

    2014-01-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a heterogeneous lung phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: −743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library; therefore it only approximates the composition of TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.

  8. Interacting viscous entropy-corrected holographic scalar field models of dark energy with time-varying G in modified FRW cosmology

    International Nuclear Information System (INIS)

    Adabi, Farzin; Karami, Kayoomars; Felegary, Fereshte; Azarmi, Zohre

    2012-01-01

    We study the entropy-corrected version of the holographic dark energy (HDE) model in the framework of modified Friedmann-Robertson-Walker cosmology. We consider a non-flat universe filled with an interacting viscous entropy-corrected HDE (ECHDE) with dark matter. Also included in our model is the case of the variable gravitational constant G. We obtain the equation of state and the deceleration parameters of the interacting viscous ECHDE. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting viscous ECHDE model with time-varying G. (research papers)

  9. Creation of subsonic macro- and microjet facilities and automated measuring system (AMS-2) for the spatial-temporal hot-wire anemometric visualization of jet flow field

    Science.gov (United States)

    Sorokin, A. M.; Grek, G. R.; Gilev, V. M.; Zverkov, I. D.

    2017-10-01

    Macro- and microjet facilities for generation of round and plane subsonic jets have been designed and fabricated, together with an automated measuring system (AMS-2) for the spatial-temporal hot-wire anemometric visualization of the jet flow field. A coordinate device and a unit for the measurement, collection, storage and processing of hot-wire anemometric information were integrated into the AMS. The coordinate device is intended for precision movement of the hot-wire probe in the jet flow field according to a computer program; the accuracy of the hot-wire probe movement is 5 microns on all three coordinates (x, y, z). The measurement unit is intended for hot-wire anemometric measurement of the jet flow field parameters (registration of the mean (U) and fluctuation (u′) characteristics of the jet flow velocity), their accumulation and preservation in computer memory, and their processing according to dedicated programs.

  10. Comparison of Threshold Saccadic Vector Optokinetic Perimetry (SVOP) and Standard Automated Perimetry (SAP) in Glaucoma. Part II: Patterns of Visual Field Loss and Acceptability.

    Science.gov (United States)

    McTrusty, Alice D; Cameron, Lorraine A; Perperidis, Antonios; Brash, Harry M; Tatham, Andrew J; Agarwal, Pankaj K; Murray, Ian C; Fleck, Brian W; Minns, Robert A

    2017-09-01

    We compared patterns of visual field loss detected by standard automated perimetry (SAP) to saccadic vector optokinetic perimetry (SVOP) and examined patient perceptions of each test. A cross-sectional study was done of 58 healthy subjects and 103 with glaucoma, who were tested using SAP and two versions of SVOP (v1 and v2). Visual fields from both devices were categorized by masked graders as: 0, normal; 1, paracentral defect; 2, nasal step; 3, arcuate defect; 4, altitudinal; 5, biarcuate; and 6, end-stage field loss. SVOP and SAP classifications were cross-tabulated. Subjects completed a questionnaire on their opinions of each test. We analyzed 142 (v1) and 111 (v2) SVOP and SAP test pairs. SVOP v2 had a sensitivity of 97.7% and specificity of 77.9% for identifying normal versus abnormal visual fields. SAP and SVOP v2 classifications showed complete agreement in 54% of glaucoma patients, with a further 23% disagreeing by one category. On repeat testing, 86% of SVOP v2 classifications agreed with the previous test, compared to 91% of SAP classifications; 71% of subjects preferred SVOP compared to 20% who preferred SAP. Eye-tracking perimetry can be used to obtain threshold visual field sensitivity values in patients with glaucoma and produce maps of visual field defects, with patterns exhibiting close agreement to SAP. Patients preferred eye-tracking perimetry compared to SAP. This first report of threshold eye-tracking perimetry shows good agreement with conventional automated perimetry and provides a benchmark for future iterations.
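
    The headline accuracy figures follow from the standard 2×2 confusion-table definitions. A minimal sketch (the counts below are hypothetical, chosen only to reproduce a sensitivity near 97.7% and a specificity near 77.9%):

        def diagnostic_metrics(tp, fp, tn, fn):
            """Sensitivity, specificity, PPV and NPV from a confusion table."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        print(diagnostic_metrics(tp=85, fp=13, tn=46, fn=2))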

  11. Multi-site Field Verification of Laboratory Derived FDOM Sensor Corrections: The Good, the Bad and the Ugly

    Science.gov (United States)

    Saraceno, J.; Shanley, J. B.; Aulenbach, B. T.

    2014-12-01

    Fluorescent dissolved organic matter (FDOM) is an excellent proxy for dissolved organic carbon (DOC) in natural waters. Through this relationship, in situ FDOM can be utilized to capture both high-frequency time series and long-term fluxes of DOC in small streams. However, in order to calculate accurate DOC fluxes for comparison across sites, in situ FDOM data must be compensated for matrix effects. Key matrix effects include temperature, turbidity, and the inner filter effect due to color. These interferences must be compensated for to develop a reasonable relationship between FDOM and DOC. In this study, we applied laboratory-derived correction factors to real-time data from the five USGS WEBB headwater streams in order to gauge their effectiveness across a range of matrix effects. The good news is that laboratory-derived correction factors improved the predictive relationship (higher r²) between DOC and FDOM when compared to uncorrected data. The relative importance of each matrix effect (i.e. temperature) varied by site and by time, implying that each and every matrix effect should be compensated for when available. In general, temperature effects were more important on longer time scales, while corrections for turbidity and DOC inner filter effects were most prevalent during hydrologic events, when the highest instantaneous flux of DOC occurred. Unfortunately, even when corrected for matrix effects, in situ FDOM is a weaker predictor of DOC than A254, a common surrogate for DOC, implying either that DOC fluoresces to varying degrees (though this should average out over time), that some matrix effects (e.g., pH) are unaccounted for, or that laboratory-derived correction factors do not encompass the site variability of particles and organics. The least impressive finding is that the inherent dependence on three variables in the FDOM correction algorithm increases the likelihood of record data gaps, which increases the uncertainty in calculated DOC flux values.
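
    As an illustration of the temperature compensation discussed above, a widely used linear form divides the observed fluorescence by a temperature-dependent factor referenced to a standard temperature; this sketch uses a generic coefficient of typical magnitude, not the values derived for the WEBB sites:

        def fdom_temperature_correct(fdom_obs, temp_c, rho=-0.01, temp_ref=20.0):
            """Linear temperature compensation of in situ FDOM:
            FDOM_corr = FDOM_obs / (1 + rho * (T - T_ref)).
            rho is a sensor/site-specific coefficient per deg C (assumed here);
            turbidity and inner-filter corrections are applied separately."""
            return fdom_obs / (1.0 + rho * (temp_c - temp_ref))

        print(fdom_temperature_correct(50.0, temp_c=8.5))   # example values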

  12. Building Automation Systems.

    Science.gov (United States)

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  13. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry.

    Science.gov (United States)

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Kairn, T; Crowe, S B; Pedrazzini, G; Aland, T; Kenny, J; Langton, C M; Trapp, J V

    2014-10-01

    Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (OR_det^{f_clin}) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_det^{f_clin} measured using an IBA stereotactic field diode (SFD). k_{Q_clin,Q_msr}^{f_clin,f_msr} was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_{Q_clin,Q_msr}^{f_clin,f_msr} was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring k_{Q_clin,Q_msr}^{f_clin,f_msr} values from the SFD to unknown diodes was tested by comparing the experimentally transferred k_{Q_clin,Q_msr}^{f_clin,f_msr} values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWe_air) produced output factors equivalent to those in water at all field sizes (5-50 mm
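
    The experimental transfer of correction factors used here is a direct consequence of the Alfonso et al. formalism, in which the true field output factor is the measured detector output ratio times the detector's correction factor. A minimal sketch of that arithmetic (the numbers are hypothetical):

        def transfer_k_factor(or_ref, k_ref, or_new):
            """Transfer a small-field correction factor between detectors:
            Omega = OR_ref * k_ref is the field output factor, so the unknown
            detector's factor is k_new = Omega / OR_new."""
            return (or_ref * k_ref) / or_new

        # e.g. output ratios of the SFD and an unmodified diode at 5 mm
        print(transfer_k_factor(or_ref=0.65, k_ref=1.02, or_new=0.69))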

  14. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  15. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results is another major advantage of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever-increasing workload. This article discusses the various issues involved in the process.

  16. M-theory and stringy corrections to anti-de Sitter black holes and conformal field theories

    International Nuclear Information System (INIS)

    Caldarelli, Marco M.; Klemm, Dietmar

    1999-01-01

    We consider black holes in anti-de Sitter space AdS_{p+2} (p = 2, 3, 5), which have hyperbolic, flat or spherical event horizons. The O(α′³) corrections (or the leading corrections in powers of the eleven-dimensional Planck length, in the case of M-theory compactifications) to the black hole metrics are computed for the various topologies and dimensions. We investigate the consequences of the stringy or M-theory corrections for the black hole thermodynamics. In particular, we show the emergence of a stable branch of small spherical black holes. Surprisingly, for any of the considered dimensions and topologies, the corrected thermodynamical quantities turn out to coincide with those calculated within a simplified approach, which uses only the unperturbed metric. We obtain the corrected Hawking-Page transition temperature for black holes with spherical horizons, and show that for p = 3 this phase transition disappears at a value of α′ considerably smaller than that estimated previously by Gao and Li. Using the AdS/CFT correspondence, we determine the S¹ × S³ N = 4 SYM phase diagram for sufficiently large 't Hooft coupling, and show that the critical point at which the Hawking-Page transition disappears (the correspondence point of Horowitz-Polchinski) occurs at g²_YM N ∼ 20.5. The d = 4 and d = 7 black hole phase diagrams are also determined, and connection is made with the corresponding boundary CFTs. Finally, for flat and hyperbolic horizons, we show that the leading stringy or M-theory corrections do not give rise to any phase transition. However, if the horizon is compactified to a torus T^p or to a quotient of hyperbolic space, H^p/Γ, the appearance of light winding modes around non-contractible cycles signals new phase transitions, which in the toroidal case have previously been discussed by Barbon et al. We comment on these phase transitions for SYM on H^p/Γ and on T^p, when the moduli of the torus are taken into account

  17. Influence of computer technology on the automation of oil and gas fields and on the companies' information structures

    Energy Technology Data Exchange (ETDEWEB)

    Graf, H.G.

    1984-02-01

    Exemplified by a Direct Digital Control System, the fundamentals of process automation are demonstrated. Description of the so-called "general-purpose computers" and their peripherals which are used in the mineral oil industry. Explanation of individual types of information processing such as data, process and text processing. Broad outline of typical applications of EDP systems in the mineral oil/natural gas producing industries. Further chapters deal with the in-company information structure and with economic shaping of the information system.

  18. Correction of the magnetic field in an electromagnet of a proton accelerator using CST software; Correcao do campo magnetico em um eletroima de um acelerador de protons usando o software CST

    Energy Technology Data Exchange (ETDEWEB)

    Rabelo, L.A.; Campos, T.P.R., E-mail: luisarabelo88@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (Brazil). Dept. de Engenharia Nuclear

    2016-07-01

    The aim of this paper is to present the study and simulation of uniform-magnetic-field electromagnets for a new circular accelerator model for protons with energies between 15 MeV and 64 MeV, and, in addition, to investigate materials and the changes induced by the presence of 'gaps' for synchronism correction. The predefined electromagnet simulations were performed in the electromagnetic field simulation software CST EM Studio® 3D 2015. The results showed an even distribution of the magnetic field in the compact electromagnet with the homogenization structures. In conclusion, the proposed electromagnet model was shown to be feasible for a circular accelerator and to comply with the synchronization requirements. (author)

  19. Remote monitoring field trial. Application to automated air sampling. Report on Task FIN-E935 of the Finnish Support Programme to IAEA Safeguards

    International Nuclear Information System (INIS)

    Poellaenen, R.; Ilander, T.; Lehtinen, J.; Leppaenen, A.; Nikkinen, M.; Toivonen, H.; Ylaetalo, S.; Smartt, H.; Garcia, R.; Martinez, R.; Glidewell, D.; Krantz, K.

    1999-01-01

    An automated air sampling station has recently been developed by the Radiation and Nuclear Safety Authority (STUK). The station is furnished with equipment that allows comprehensive remote monitoring of the station and the data. Under the Finnish Support Programme to IAEA Safeguards, STUK and Sandia National Laboratories (SNL) established a field trial to demonstrate the use of remote monitoring technologies. STUK provided means for real-time radiation monitoring and sample authentication, whereas SNL delivered means for authenticated surveillance of the equipment and its location. The field trial showed that remote monitoring can be carried out using simple means, although advanced facilities are needed for comprehensive surveillance. Authenticated measurement data could be reliably transferred from the monitoring site to the headquarters without the presence of authorized personnel at the monitoring site. The operation of the station and the remote monitoring system were reliable. (orig.)

  20. Higher-order corrections in the cut vertex theory and the reciprocity relation in (Φ³)₆ field theory

    International Nuclear Information System (INIS)

    Kubota, T.

    1980-01-01

    Higher-order corrections to deep inelastic and inclusive annihilation processes in the asymptotically free (Φ³)₆ theory are calculated by using the method of cut vertices proposed by Mueller. Renormalization of the cut vertices is carried out up to the two-loop level and it is found that, in the minimal subtraction scheme, the equality between the anomalous dimension of the space-like cut vertex and that of the corresponding time-like cut vertex does not hold beyond the leading order. Corrections to the coefficient functions are also calculated to study the Q² dependence of the moments up to the next-to-leading order. It is shown that the reciprocity relation suggested by Gribov and Lipatov on the basis of the leading-order calculation does not hold at higher order. (orig.)

  1. Self-interaction error in density functional theory: a mean-field correction for molecules and large systems

    International Nuclear Information System (INIS)

    Ciofini, Ilaria; Adamo, Carlo; Chermette, Henry

    2005-01-01

    Corrections to the self-interaction error, which is rooted in all standard exchange-correlation functionals in density functional theory (DFT), have become the object of increasing interest. After an introduction recalling the origin of the self-interaction error in the DFT formalism, and a brief review of self-interaction-free approximations, we present a simple, yet effective, self-consistent method to correct this error. The model is based on an average density self-interaction correction (ADSIC), where both exchange and Coulomb contributions are screened by a fraction of the electron density. The ansatz on which the method is built makes it particularly appealing, due to its simplicity and its favorable scaling with the size of the system. We have tested the ADSIC approach on one of the classical pathological problems for density functional theory: the direct estimation of the ionization potential from orbital eigenvalues. A large set of different chemical systems, ranging from simple atoms to large fullerenes, has been considered as test cases. Our results show that the ADSIC approach provides good numerical values for all the molecular systems, the agreement with the experimental values increasing, due to its average ansatz, with the size (conjugation) of the systems.

  2. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system to detect the presence of unexpected behavior.

  3. A code for the correction of field imperfections in iron-core superconducting magnets by shimming of iron

    International Nuclear Information System (INIS)

    Pradhan, J.; Bhunia, U.; Dey, M.K.; Mallik, C.; Bhandari, R.K.

    2005-01-01

    Magnetic field measurements in the median plane of the K500 superconducting cyclotron at VECC have been carried out. A code has been developed, using mathematical software, to calculate the magnetic field distribution for an arbitrarily shaped saturated iron piece, and the various harmonics therein.

  4. Correcting Improper Uses of Perspectives, Pronouns, and Dualities in Wilberian Integral Theory: An Application of Holarchical Field Theory

    Directory of Open Access Journals (Sweden)

    Kevin J. Bowman

    2014-03-01

    This article uses my pre-existing extension of Wilberian metatheory, holarchical field theory, to diagnose and work towards overcoming the confusion within attempts to analyze action, events, and communication using Ken Wilber’s AQAL model. In holarchical field theory, holarchical fields become the fundamental component of reality. These fields comprise (1) holons in relation to one another and to their potential, and (2) their interpenetrating forces engaged by their interactions. In light of the theory, problems in the Wilberian literature have included inconsistent uses of certain dualities (subject-object, interior-exterior, and inside-outside) as well as person perspectives and pronouns. Previous attempts to overcome these issues without precise diagnoses suffer from a conflation of the dual definitions of the subjective-objective duality, one a philosophical definition, the other grammatical. State versus action language is classified within the dualities of holarchical field theory.

  5. Demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data for the majority of United States harvested cropland

    Science.gov (United States)

    Yan, L.; Roy, D. P.

    2014-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and for economic planning, and may be indicative of the degree of agricultural capital investment, mechanization, and labor intensity. To date, field objects have not been extracted from satellite data over large areas because of computational constraints, the complexity of the extraction task, and because consistently processed appropriate resolution data have not been available or affordable. A recently published automated methodology to extract agricultural crop fields from weekly 30 m Web Enabled Landsat data (WELD) time series was refined and applied to 14 states that cover 70% of harvested U.S. cropland (USDA 2012 Census). The methodology was applied to 2010 combined weekly Landsat 5 and 7 WELD data. The field extraction and quantitative validation results are presented for the following 14 states: Iowa, North Dakota, Illinois, Kansas, Minnesota, Nebraska, Texas, South Dakota, Missouri, Indiana, Ohio, Wisconsin, Oklahoma and Michigan (sorted by area of harvested cropland). These states include the top 11 U.S states by harvested cropland area. Implications and recommendations for systematic application to global coverage Landsat data are discussed.

  6. Electromagnetic corrections to ππ scattering lengths: some lessons for the construction of effective hadronic field theories

    International Nuclear Information System (INIS)

    Maltman, K.

    1998-01-01

    Using the framework of effective chiral Lagrangians, we show that, in order to correctly implement electromagnetism (EM), as generated from the Standard Model, into effective hadronic theories (such as meson-exchange models) it is insufficient to consider only graphs in the low-energy effective theory containing explicit photon lines. The Standard Model requires the presence of contact interactions in the effective theory which are electromagnetic in origin, but which involve no photons in the effective theory. We illustrate the problems which can result from a ''standard'' EM subtraction: i.e., from assuming that removing all contributions in the effective theory generated by graphs with explicit photon lines fully removes EM effects, by considering the case of the s-wave ππ scattering lengths. In this case it is shown that such a subtraction procedure would lead to the incorrect conclusion that the strong interaction isospin-breaking contributions to these quantities were large when, in fact, they are known to vanish at leading order in m_d − m_u. The leading EM contact corrections for the channels employed in the extraction of the I = 0, 2 s-wave ππ scattering lengths from experiment are also evaluated. (orig.)

  7. A new method to detect and correct sample tilt in scanning transmission electron microscopy bright-field imaging

    Energy Technology Data Exchange (ETDEWEB)

    Brown, H.G. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Ishikawa, R.; Sánchez-Santolino, G. [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan); Lugg, N.R., E-mail: shibata@sigma.t.u-tokyo.ac.jp [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan); Ikuhara, Y. [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan); Allen, L.J. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Shibata, N. [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan)

    2017-02-15

    Important properties of functional materials, such as ferroelectric shifts and octahedral distortions, are associated with displacements of the positions of lighter atoms in the unit cell. Annular bright-field scanning transmission electron microscopy is a good experimental method for investigating such phenomena due to its ability to image light and heavy atoms simultaneously. To map atomic positions at the required accuracy precise angular alignment of the sample with the microscope optical axis is necessary, since misalignment (tilt) of the specimen contributes to errors in position measurements of lighter elements in annular bright-field imaging. In this paper it is shown that it is possible to detect tilt with the aid of images recorded using a central bright-field detector placed within the inner radius of the annular bright-field detector. For a probe focus near the middle of the specimen the central bright-field image becomes especially sensitive to tilt and we demonstrate experimentally that misalignment can be detected with a precision of less than a milliradian, as we also confirm in simulation. Coma in the probe, an aberration that can be misidentified as tilt of the specimen, is also investigated and it is shown how the effects of coma and tilt can be differentiated. The effects of tilt may be offset to a large extent by shifting the diffraction plane detector an amount equivalent to the specimen tilt and we provide an experimental proof of principle of this using a segmented detector system. - Highlights: • Octahedral distortions are associated with displacements of lighter atoms. • Annular bright-field imaging is sensitive to light and heavy atoms simultaneously. • Mistilt of the specimen leads to errors in position measurements of lighter elements. • It is possible to detect tilt using images taken by a central bright-field detector. • Tilt may be offset by shifting the diffraction plane detector by an equivalent amount.

  8. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    2008-03-01

    Functional MRI resting state and connectivity studies of brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from lung and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open-source physiological signal processing program, called PhysioNoise, in the Python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.
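
    PhysioNoise's own pipeline is more elaborate, but the first step it automates, locating cardiac triggers in a raw waveform so that cardiac-phase regressors can be built, can be sketched in a few lines; the prominence and rate limits below are assumptions, not the program's defaults:

        import numpy as np
        from scipy.signal import find_peaks

        def cardiac_triggers(waveform, fs, max_rate_hz=3.0):
            """Heartbeat times (s) from a cardiac waveform sampled at fs Hz."""
            peaks, _ = find_peaks(waveform,
                                  distance=int(fs / max_rate_hz),  # refractory gap
                                  prominence=np.std(waveform))
            return peaks / fs

        fs = 100.0                                   # Hz
        t = np.arange(0.0, 10.0, 1.0 / fs)
        pulse = np.sin(2 * np.pi * 1.0 * t) ** 21    # synthetic 60 bpm signal
        print(cardiac_triggers(pulse, fs)[:5])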

  9. Space environments and their effects on space automation and robotics

    Science.gov (United States)

    Garrett, Henry B.

    1990-01-01

    Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.

  10. Surge of Bering Glacier and Bagley Ice Field: Parameterization of surge characteristics based on automated analysis of crevasse image data and laser altimeter data

    Science.gov (United States)

    Stachura, M.; Herzfeld, U. C.; McDonald, B.; Weltman, A.; Hale, G.; Trantow, T.

    2012-12-01

    The dynamical processes that occur during the surge of a large, complex glacier system are far from being understood. The aim of this paper is to derive a parameterization of surge characteristics that captures the principal processes and can serve as the basis for a dynamic surge model. Innovative mathematical methods are introduced that facilitate derivation of such a parameterization from remote-sensing observations. Methods include automated geostatistical characterization and connectionist-geostatistical classification of dynamic provinces and deformation states, using the vehicle of crevasse patterns. These methods are applied to analyze satellite and airborne image and laser altimeter data collected during the current surge of Bering Glacier and Bagley Ice Field, Alaska.

  11. Sensitivity and specificity of automated analysis of single-field non-mydriatic fundus photographs by Bosch DR Algorithm-Comparison with mydriatic fundus photography (ETDRS for screening in undiagnosed diabetic retinopathy.

    Directory of Open Access Journals (Sweden)

    Pritam Bawankar

    Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate and requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity for the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm, and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image: these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95% respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus.

  12. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides the most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic
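
    The authors' force-based estimation is more involved than can be reproduced here, but the overall shape of such methods, iteratively estimating a smooth multiplicative field and dividing it out, with a regularization kernel whose size sets how dynamic the correction is, can be sketched as follows. This is a heavily simplified, homomorphic-style stand-in, not the published algorithm:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def correct_inhomogeneity(img, n_iter=4, sigma=30.0):
            """Iteratively estimate a smooth multiplicative bias field.
            sigma (in voxels) plays the role of the regularization kernel:
            larger kernels give a smoother, less dynamic correction."""
            corrected = img.astype(float)
            field = np.ones_like(corrected)
            for _ in range(n_iter):
                log_img = np.log(np.clip(corrected, 1e-6, None))
                bias = gaussian_filter(log_img, sigma)
                bias -= bias.mean()          # preserve overall intensity scale
                step = np.exp(bias)
                corrected /= step
                field *= step
            return corrected, field

        y, x = np.mgrid[0:128, 0:128]
        truth = np.random.default_rng(0).uniform(50, 100, (128, 128))
        biased = truth * np.exp(0.3 * np.sin(x / 40.0))   # smooth bias field
        corrected, field = correct_inhomogeneity(biased)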

  13. A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification

    Directory of Open Access Journals (Sweden)

    Friehs Karl

    2008-10-01

    Background: Cell viability is one of the basic properties indicating the physiological state of the cell; thus, it has long been one of the major considerations in biotechnological applications. Conventional methods for extracting information about cell viability usually need reagents to be applied on the targeted cells. These reagent-based techniques are reliable and versatile; however, some of them might be invasive and even toxic to the target cells. In support of automated noninvasive assessment of cell viability, a machine vision system has been developed. Results: This system is based on a supervised learning technique. It learns from images of certain kinds of cell populations and trains classifiers. These trained classifiers are then employed to evaluate the images of given cell populations obtained via dark field microscopy. Wavelet decomposition is performed on the cell images. Energy and entropy are computed for each wavelet subimage as features. A feature selection algorithm is implemented to achieve better performance. Correlation between the results from the machine vision system and commonly accepted gold standards becomes stronger if wavelet features are utilized. The best performance is achieved with a selected subset of wavelet features. Conclusion: The machine vision system based on dark field microscopy in conjunction with supervised machine learning and wavelet feature selection automates the cell viability assessment and yields results comparable to commonly accepted methods. Wavelet features are found to be suitable to describe the discriminative properties of live and dead cells in viability classification. According to the analysis, live cells exhibit morphologically more detail and are intracellularly more organized than dead ones, which display more homogeneous and diffuse gray values throughout the cells. Feature selection increases the system's performance.
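
    The feature set described, energy and entropy per wavelet subimage, can be sketched with a standard wavelet library; the wavelet family, decomposition level and entropy normalisation below are assumptions rather than the paper's exact choices:

        import numpy as np
        import pywt

        def wavelet_features(image, wavelet="db2", level=3):
            """Energy and Shannon entropy of each wavelet subimage."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            subbands = [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]
            feats = []
            for band in subbands:
                band = np.asarray(band)
                feats.append(np.sum(band ** 2))                # energy
                p = np.abs(band).ravel()
                p = p / (p.sum() + 1e-12)                      # normalise
                feats.append(-np.sum(p * np.log2(p + 1e-12)))  # entropy
            return np.array(feats)

        cell_image = np.random.default_rng(1).random((64, 64))
        print(wavelet_features(cell_image).shape)   # 2 features x 10 subbands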

  14. Detection of myocardial ischemia by automated, motion-corrected, color-encoded perfusion maps compared with visual analysis of adenosine stress cardiovascular magnetic resonance imaging at 3 T: a pilot study.

    Science.gov (United States)

    Doesch, Christina; Papavassiliu, Theano; Michaely, Henrik J; Attenberger, Ulrike I; Glielmi, Christopher; Süselbeck, Tim; Fink, Christian; Borggrefe, Martin; Schoenberg, Stefan O

    2013-09-01

    The purpose of this study was to compare automated, motion-corrected, color-encoded (AMC) perfusion maps with qualitative visual analysis of adenosine stress cardiovascular magnetic resonance imaging for detection of flow-limiting stenoses. Myocardial perfusion measurements applying the standard adenosine stress imaging protocol and a saturation-recovery temporal generalized autocalibrating partially parallel acquisition (t-GRAPPA) turbo fast low angle shot (Turbo FLASH) magnetic resonance imaging sequence were performed in 25 patients using a 3.0-T MAGNETOM Skyra (Siemens Healthcare Sector, Erlangen, Germany). Perfusion studies were analyzed using AMC perfusion maps and qualitative visual analysis. Angiographically detected coronary artery (CA) stenoses greater than 75% or 50% or more with a myocardial perfusion reserve index less than 1.5 were considered as hemodynamically relevant. Diagnostic performance and time requirement for both methods were compared. Interobserver and intraobserver reliability were also assessed. A total of 29 CA stenoses were included in the analysis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for detection of ischemia on a per-patient basis were comparable using the AMC perfusion maps compared to visual analysis. On a per-CA territory basis, the attribution of an ischemia to the respective vessel was facilitated using the AMC perfusion maps. Interobserver and intraobserver reliability were better for the AMC perfusion maps (concordance correlation coefficient, 0.94 and 0.93, respectively) compared to visual analysis (concordance correlation coefficient, 0.73 and 0.79, respectively). In addition, in comparison to visual analysis, the AMC perfusion maps were able to significantly reduce analysis time from 7.7 (3.1) to 3.2 (1.9) minutes (P < 0.0001). The AMC perfusion maps yielded a diagnostic performance on a per-patient and on a per-CA territory basis comparable with the visual analysis

  15. Determination of uncertainty of automated emission measuring systems under field conditions using a second method as a reference

    Energy Technology Data Exchange (ETDEWEB)

    Puustinen, H.; Aunela-Tapola, L.; Tolvanen, M.; Vahlman, T. [VTT Chemical Technology, Espoo (Finland). Environmental Technology; Kovanen, K. [VTT Building Technology, Espoo (Finland). Building Physics, Building Services and Fire Technology

    1999-09-01

    This report presents a procedure to determine the uncertainty of an automated emission measuring system (AMS) by comparing its results with those of a second, reference method (REF). The procedure determines the uncertainty of the AMS by comparing the final concentration and emission results of AMS and REF; in this way, the data processing of the plant is included in the result evaluation. The procedure assumes that the uncertainty of REF is known and has been determined in due form. The uncertainty determination has been divided into two cases: varying and nearly constant concentration. The suggested procedure calculates the uncertainty of the AMS at the 95 % confidence level using a tabulated t-value. A minimum of three data pairs is required; however, a higher number of data pairs is desirable, since a low number of data pairs results in a higher uncertainty of the AMS. The uncertainty of the AMS is valid only within the range of concentrations at which the tests were carried out. Statistical data processing shows that the uncertainty of the reference method has a significant effect on the uncertainty of the AMS, which always becomes larger than the uncertainty of REF. This should be taken into account when testing whether the AMS fulfils the given uncertainty limits. Practical details concerning parallel measurements at the plant, and the costs of the measurement campaign, have been taken into account when suggesting alternative ways of implementing the comparative measurements. (orig.) 6 refs.
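    The core of such a comparison can be sketched in a few lines: form the AMS-REF differences, and expand their spread with a Student t-value at the 95 % level. This is a generic sketch of the approach described above, not the report's exact equations:

```python
# Estimate AMS bias and expanded uncertainty from paired AMS/REF
# results; a minimal sketch, assuming simple paired differences.
import numpy as np
from scipy import stats

def ams_uncertainty(ams, ref, confidence=0.95):
    d = np.asarray(ams, float) - np.asarray(ref, float)
    n = d.size
    if n < 3:
        raise ValueError("at least three data pairs are required")
    s_d = d.std(ddof=1)                       # spread of AMS-REF differences
    t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return d.mean(), t * s_d                  # bias, expanded uncertainty
```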

  16. Multi-parameter Nonlinear Gain Correction of X-ray Transition Edge Sensors for the X-ray Integral Field Unit

    Science.gov (United States)

    Cucchetti, E.; Eckart, M. E.; Peille, P.; Porter, F. S.; Pajot, F.; Pointecouteau, E.

    2018-04-01

    With its array of 3840 Transition Edge Sensors (TESs), the Athena X-ray Integral Field Unit (X-IFU) will provide spatially resolved high-resolution spectroscopy (2.5 eV up to 7 keV) from 0.2 to 12 keV, with an absolute energy scale accuracy of 0.4 eV. Slight changes in the TES operating environment can cause significant variations in its energy response function, which may result in systematic errors in the absolute energy scale. We plan to monitor such changes at pixel level via onboard X-ray calibration sources and correct the energy scale accordingly using a linear or quadratic interpolation of gain curves obtained during ground calibration. However, this may not be sufficient to meet the 0.4 eV accuracy required for the X-IFU. In this contribution, we introduce a new two-parameter gain correction technique, based on both the pulse-height estimate of a fiducial line and the baseline value of the pixels. Using gain functions that simulate ground calibration data, we show that this technique can accurately correct deviations in detector gain due to changes in TES operating conditions such as heat sink temperature, bias voltage, thermal radiation loading and linear amplifier gain. We also address potential optimisations of the onboard calibration source and compare the performance of this new technique with those previously used.
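    As a rough illustration of such a two-parameter correction, the energy of an event can be obtained by interpolating ground-calibration gain surfaces over both the pulse-height estimate and the baseline. The sketch below uses an invented toy gain surface; the names and grid choices are assumptions, not X-IFU calibration data:

```python
# Two-parameter gain correction sketch: energy = f(baseline, pulse height),
# with f tabulated during ground calibration. Toy numbers throughout.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

baselines = np.linspace(0.95, 1.05, 11)        # arbitrary units
pulse_heights = np.linspace(0.0, 1.0, 201)     # normalized PHA
energy_grid = np.outer(1.0 / baselines, 12.0 * pulse_heights)  # toy surface, keV

to_energy = RegularGridInterpolator((baselines, pulse_heights), energy_grid)

# In flight: correct an event given its pulse height and the baseline
# tracked via the onboard fiducial line.
event_energy = to_energy([[1.01, 0.49]])[0]
print(f"corrected energy: {event_energy:.3f} keV")
```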

  17. On the possibility of a relativistic correction to the E and B fields around a current-carrying wire

    International Nuclear Information System (INIS)

    Folman, Ron

    2013-01-01

    It is well known that electric and magnetic fields may change when they are observed from different frames of reference. For example, the motion of a charged probe particle moving parallel to a current-carrying wire would be described by different electric or magnetic fields, depending on the frame of reference from which the system is observed and described. To describe the situation in all frames by utilizing the theory of relativity, one has to first describe the situation in one particular frame, and this choice, in the case of a current-carrying wire, is the topic of this paper. Specifically, I consider the question of the frame in which the current-carrying wire is neutral. The importance of relaxation processes is emphasized. As an example, I examine a specific alternative to the standard choice and consider its theoretical and experimental validity. An outcome of the alternative approaches is that in the rest frame of a wire, running a current also introduces an electric field by giving rise to a minute charge. Present-day experimental sensitivities, specifically those of cold ions, may be able to differentiate between the observable signatures predicted by the different approaches.
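    For reference, the standard special-relativistic transformation of the fields under a boost with velocity v (SI units), which underlies the frame dependence discussed above, reads:

```latex
\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad
\mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel}, \qquad
\mathbf{E}'_{\perp} = \gamma\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})_{\perp}, \qquad
\mathbf{B}'_{\perp} = \gamma\left(\mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right)_{\perp}
```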

  18. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Charles, P. H., E-mail: paulcharles111@gmail.com [Department of Radiation Oncology, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland 4102, Australia and School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Cranmer-Sargison, G. [Department of Medical Physics, Saskatchewan Cancer Agency, 20 Campus Drive, Saskatoon, Saskatchewan S7L 3P6, Canada and College of Medicine, University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan S7N 5E5 (Canada); Thwaites, D. I. [Institute of Medical Physics, School of Physics, University of Sydney, New South Wales 2006 (Australia); Kairn, T. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001, Australia and Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Crowe, S. B.; Langton, C. M.; Trapp, J. V. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Pedrazzini, G. [Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Aland, T.; Kenny, J. [Epworth Radiation Oncology, 89 Bridge Road, Richmond, Melbourne, Victoria 3121 (Australia)

    2014-10-15

    Purpose: Two diodes which do not require correction factors for small-field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios ($OR_{Det}^{f_{clin}}$) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to $OR_{Det}^{f_{clin}}$ measured using an IBA stereotactic field diode (SFD). $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ …
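    The selection of the optimal cap can be framed as interpolating the measured k-factor versus air-gap thickness and finding the crossing with 1.00 for each field size. A minimal sketch with invented placeholder values (not the paper's measurements):

```python
# Find the air-gap thickness at which the small-field correction
# factor k crosses 1.00; data values below are placeholders.
import numpy as np

air_gaps = np.array([0.0, 0.5, 1.0, 1.5])        # mm of air above the chip
k_factors = {                                     # k vs air gap per field size
    "5 mm":  np.array([0.962, 0.981, 1.000, 1.018]),
    "10 mm": np.array([0.985, 0.993, 1.001, 1.009]),
}

for field, k in k_factors.items():
    # k is monotonic here, so invert it by linear interpolation.
    optimal = np.interp(1.0, k, air_gaps)
    print(f"{field}: optimal air gap = {optimal:.2f} mm")
```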

  19. Comparison of visual field test results obtained through Humphrey matrix frequency doubling technology perimetry versus standard automated perimetry in healthy children.

    Science.gov (United States)

    Kocabeyoglu, Sibel; Uzun, Salih; Mocan, Mehmet Cem; Bozkurt, Banu; Irkec, Murat; Orhan, Mehmet

    2013-10-01

    The aim of this study was to compare the visual field test results in healthy children obtained via the Humphrey Matrix 24-2 threshold program and standard automated perimetry (SAP) using the Swedish interactive threshold algorithm (SITA)-Standard 24-2 test. This prospective study included 55 healthy children without ocular or systemic disorders who underwent both SAP and frequency doubling technology (FDT) perimetry visual field testing. Visual field test reliability indices, test duration, and global indices (mean deviation [MD] and pattern standard deviation [PSD]) were compared between the 2 tests using the Wilcoxon signed-rank test and paired t-test. The performance of the Humphrey field analyzer (HFA) 24-2 SITA-Standard and frequency-doubling technology Matrix 24-2 tests between genders was compared with the Mann-Whitney U-test. Fifty-five healthy children with a mean age of 12.2 ± 1.9 years (range from 8 years to 16 years) were included in this prospective study. The test durations of SAP and FDT were similar (5.2 ± 0.5 and 5.1 ± 0.2 min, respectively, P = 0.651). MD and PSD values obtained via FDT Matrix were significantly higher than those obtained via SAP (P < 0.001), and fixation losses and false negative errors were significantly less with SAP (P < 0.05). A weak positive correlation between the two tests in terms of MD (r = 0.352, P = 0.008) and PSD (r = 0.329, P = 0.014) was observed. Children were able to complete both visual test algorithms successfully within 6 min. However, SAP testing appears to be associated with less depression of the visual field indices of healthy children. FDT Matrix and SAP should not be used interchangeably in the follow-up of children.

  20. Comparison of visual field test results obtained through Humphrey matrix frequency doubling technology perimetry versus standard automated perimetry in healthy children

    Directory of Open Access Journals (Sweden)

    Sibel Kocabeyoglu

    2013-01-01

    Full Text Available Aims: The aim of this study was to compare the visual field test results in healthy children obtained via the Humphrey Matrix 24-2 threshold program and standard automated perimetry (SAP) using the Swedish interactive threshold algorithm (SITA)-Standard 24-2 test. Materials and Methods: This prospective study included 55 healthy children without ocular or systemic disorders who underwent both SAP and frequency doubling technology (FDT) perimetry visual field testing. Visual field test reliability indices, test duration, and global indices (mean deviation [MD] and pattern standard deviation [PSD]) were compared between the 2 tests using the Wilcoxon signed-rank test and paired t-test. The performance of the Humphrey field analyzer (HFA) 24-2 SITA-Standard and frequency-doubling technology Matrix 24-2 tests between genders was compared with the Mann-Whitney U-test. Results: Fifty-five healthy children with a mean age of 12.2 ± 1.9 years (range from 8 years to 16 years) were included in this prospective study. The test durations of SAP and FDT were similar (5.2 ± 0.5 and 5.1 ± 0.2 min, respectively, P = 0.651). MD and PSD values obtained via FDT Matrix were significantly higher than those obtained via SAP (P < 0.001), and fixation losses and false negative errors were significantly less with SAP (P < 0.05). A weak positive correlation between the two tests in terms of MD (r = 0.352, P = 0.008) and PSD (r = 0.329, P = 0.014) was observed. Conclusion: Children were able to complete both visual test algorithms successfully within 6 min. However, SAP testing appears to be associated with less depression of the visual field indices of healthy children. FDT Matrix and SAP should not be used interchangeably in the follow-up of children.

  1. Field estimates of gravity terrain corrections and Y2K-compatible method to convert from gravity readings with multiple base stations to tide- and long-term drift-corrected observations

    Science.gov (United States)

    Plouff, Donald

    2000-01-01

    Gravity observations are made directly or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and, consequently, to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums, and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; a style of quality control to document and validate computer applications; lack of dependence on proprietary software except for source code compilation; a method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. The computer source codes provided are written in the Fortran scientific language.
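    The report's own programs are Fortran; purely as an illustration of the linear-drift step between base-station reoccupations, a minimal sketch with invented readings:

```python
# Linear instrument-drift correction between two ties to the same
# base station; a sketch of the idea, not the USGS programs.
import numpy as np

t = np.array([0.0, 6.0])                 # hours since first occupation
g = np.array([981234.120, 981234.180])   # repeat base readings (mGal)

drift_rate = (g[1] - g[0]) / (t[1] - t[0])   # mGal per hour

def drift_corrected(reading, t_read):
    """Remove linear meter drift referenced to the first base tie."""
    return reading - drift_rate * t_read

print(drift_corrected(981250.456, 3.5))
```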

  2. Design of an automated cart and mount for a hyperspectral imaging system to be used in produce fields

    Science.gov (United States)

    Lefcourt, Alan M.; Kistler, Ross; Gadsden, S. Andrew

    2016-05-01

    The goal of this project was to construct a cart and a mounting system that would allow a hyperspectral laser-induced fluorescence imaging system (HLIFIS) to be used to detect fecal material in produce fields. Fecal contaminated produce is a recognized food safety risk. Previous research demonstrated the HLIFIS could detect fecal contamination in a laboratory setting. A cart was designed and built, and then tested to demonstrate that the cart was capable of moving at constant speeds or at precise intervals. A mounting system was designed and built to facilitate the critical alignment of the camera's imaging and the laser's illumination fields, and to allow the HLIFIS to be used in both field and laboratory settings without changing alignments. A hardened mount for the Powell lens that is used to produce the appropriate illumination profile was also designed, built, and tested.

  3. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  4. Automated evaluation of ultrasonic indications

    International Nuclear Information System (INIS)

    Hansch, M.K.T.; Stegemann, D.

    1994-01-01

    Future requirements of reliability and reproducibility in quality assurance demand computer evaluation of defect indications. The ultrasonic method with its large field of applications and a high potential for automation provides all preconditions for fully automated inspection. The survey proposes several desirable hardware improvements, data acquisition requirements and software configurations. (orig.) [de

  5. On the computational assessment of white matter hyperintensity progression: difficulties in method selection and bias field correction performance on images with significant white matter pathology

    Energy Technology Data Exchange (ETDEWEB)

    Valdes Hernandez, Maria del C.; Gonzalez-Castro, Victor; Wang, Xin; Doubal, Fergus; Munoz Maniega, Susana; Wardlaw, Joanna M. [Centre for Clinical Brain Sciences, Department of Neuroimaging Sciences, Edinburgh (United Kingdom); Ghandour, Dina T. [University of Edinburgh, College of Medicine and Veterinary Medicine, Edinburgh (United Kingdom); Armitage, Paul A. [University of Sheffield, Department of Cardiovascular Sciences, Sheffield (United Kingdom)

    2016-05-15

    Subtle inhomogeneities in the scanner's magnetic fields (B0 and B1) alter the intensity levels of structural magnetic resonance imaging (MRI), affecting the volumetric assessment of WMH changes. Here, we investigate the influence that (1) correcting the images for B1 inhomogeneities (i.e. bias field correction (BFC)) and (2) the selection of the WMH change assessment method can have on longitudinal analyses of WMH progression, and we discuss possible solutions. We used brain structural MRI from 46 mild stroke patients scanned at stroke onset and 3 years later. We tested three BFC approaches: FSL-FAST, N4 and exponentially entropy-driven homomorphic unsharp masking (E²D-HUM), and analysed their effect on the measured WMH change. Separately, we tested two methods to assess WMH changes: measuring WMH volumes independently at both time points semi-automatically (MCMxxxVI), and subtracting intensity-normalised FLAIR images at both time points following image gamma correction. We then combined the BFC with the computational method that performed best across the whole sample to assess WMH changes. Analysis of the difference in the variance-to-mean intensity ratio in normal tissue between BFC and uncorrected images, together with visual inspection, showed that all BFC methods altered the WMH appearance and distribution, but FSL-FAST in general performed more consistently across the sample and MRI modalities. The WMH volume change over 3 years obtained with MCMxxxVI with vs. without FSL-FAST BFC did not differ significantly (medians (IQR): 3.2 (6.3) ml with BFC vs. 2.9 (7.4) ml without BFC, p = 0.5), but both differed significantly from the WMH volume change obtained by subtracting post-processed FLAIR images without BFC (7.6 (8.2) ml, p < 0.001). This latter method considerably inflated the WMH volume change, as subtle WMH at baseline that became more intense at follow-up were counted as an increase in the volumetric change.
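    One of the BFC methods compared above (N4) is exposed by SimpleITK; the snippet below is a generic usage sketch, not the study's actual pipeline (file names and mask parameters are placeholders):

```python
# N4 bias field correction of a FLAIR volume with SimpleITK.
import SimpleITK as sitk

image = sitk.ReadImage("flair.nii.gz", sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)    # rough foreground mask

corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)

sitk.WriteImage(corrected, "flair_n4.nii.gz")
```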

  6. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    Science.gov (United States)

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to its higher upper sensitivity limit.
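    The censoring analysis described above is simple to express: clip threshold estimates at the criterion, then regress a summary index on follow-up time. A minimal sketch with invented visit data (MD is approximated here by mean sensitivity rather than the age-corrected index):

```python
# Censor thresholds below 20 dB, then fit a linear trend to a crude
# MD surrogate; illustrative data, not the Iowa study's records.
import numpy as np
from scipy import stats

criterion_db = 20.0
rng = np.random.default_rng(0)
thresholds = rng.normal(24, 5, size=(9, 54))   # visits x test locations (dB)
years = np.arange(9) * 0.5                     # semi-annual visits

censored = np.maximum(thresholds, criterion_db)
md = censored.mean(axis=1)                     # crude MD surrogate

slope, intercept, r, p, se = stats.linregress(years, md)
print(f"MD slope: {slope:.2f} dB/year (p = {p:.3f})")
```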

  7. TH-CD-BRA-05: First Water Calorimetric Dw Measurement and Direct Measurement of Magnetic Field Correction Factors, KQ,B, in a 1.5 T B-Field of An MRI Linac

    Energy Technology Data Exchange (ETDEWEB)

    Prez, L de; Pooter, J de; Jansen, B [VSL, Delft (Netherlands); Wolthaus, J; Asselen, B van; Woodings, S; Soest, T; Kok, J; Raaymakers, B [University Medical Center Utrecht, Utrecht (Netherlands)

    2016-06-15

    Purpose: Reference dosimetry in MR-guided radiotherapy is performed in the presence of a B-field. As a consequence, the response of ionization chambers changes considerably and depends on parameters not considered in traditional reference dosimetry. Future Codes of Practice will therefore need ionization chamber correction factors that account for both the change in beam quality and the presence of a B-field. The objective was to study the feasibility of water calorimetric absorbed-dose measurements in the 1.5 T B-field of an MRLinac and the direct measurement of kQ,B factors for ionization chambers. Methods: The calorimetric absorbed dose to water, Dw, was measured with a new water calorimeter in the bore of an MRLinac (TPR20,10 of 0.702). Two waterproof ionization chambers (PTW 30013, IBA FC-65G) were calibrated inside the calorimeter phantom (ND,w,Q,B). Both measurements were normalized to a monitor ionization chamber, and the ionization chamber measurements were corrected for conventional influence parameters. The chambers' Co-60 calibration coefficients (ND,w,Q0) were measured directly against the calorimeter. In this study the correction factor kQ,B was determined as the ratio of the calibration coefficients in the MRLinac and in Co-60. Additionally, kB was determined from kQ values obtained with the IAEA TRS-398 Code of Practice. Results: The kQ,B factors of the ionization chambers mentioned above were 0.9488(8) and 0.9445(8), respectively, with resulting kB factors of 0.961(13) and 0.952(13), with standard uncertainties on the least significant digit(s) given in brackets. Conclusion: Calorimetric Dw measurements and calibrations of waterproof ionization chambers were successfully carried out in the 1.5 T B-field of an MRLinac with a standard uncertainty of 0.7%. Preliminary kQ,B and kB factors were determined with standard uncertainties of 0.8% and 1.3%, respectively. The kQ,B agrees with an alternative method within 0.4%.
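    In symbols, the relations stated above for the two factors can be written as follows (a reconstruction consistent with the abstract and with TRS-398 notation):

```latex
k_{Q,B} = \frac{N_{D,w,Q,B}}{N_{D,w,Q_0}}, \qquad
k_B = \frac{k_{Q,B}}{k_Q}
```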

  8. Automated Computer-Based Facility for Measurement of Near-Field Structure of Microwave Radiators and Scatterers

    DEFF Research Database (Denmark)

    Mishra, Shantnu R.; Pavlasek, Tomas J. F.; Muresan, Letitia V.

    1980-01-01

    An automatic facility for measuring the three-dimensional structure of the near fields of microwave radiators and scatterers is described. The amplitude and phase for different polarization components can be recorded in analog and digital form using a microprocessor-based system. The stored data are transferred to a large high-speed computer for bulk processing and for the production of isophot and equiphase contour maps or profiles. The performance of the system is demonstrated through results for a single conical horn, for interacting rectangular horns, and for multiple cylindrical scatterers …

  9. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  10. Decision Making In A High-Tech World: Automation Bias and Countermeasures

    Science.gov (United States)

    Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Automated decision aids and decision support systems have become essential tools in many high-tech environments. In aviation, for example, flight management systems computers not only fly the aircraft, but also calculate fuel efficient paths, detect and diagnose system malfunctions and abnormalities, and recommend or carry out decisions. Air Traffic Controllers will soon be utilizing decision support tools to help them predict and detect potential conflicts and to generate clearances. Other fields as disparate as nuclear power plants and medical diagnostics are similarly becoming more and more automated. Ideally, the combination of human decision maker and automated decision aid should result in a high-performing team, maximizing the advantages of additional cognitive and observational power in the decision-making process. In reality, however, the presence of these aids often short-circuits the way that even very experienced decision makers have traditionally handled tasks and made decisions, and introduces opportunities for new decision heuristics and biases. Results of recent research investigating the use of automated aids have indicated the presence of automation bias, that is, errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing. Automation commission errors, i.e., errors made when decision makers inappropriately follow an automated directive, or automation omission errors, i.e., errors made when humans fail to take action or notice a problem because an automated aid fails to inform them, can result from this tendency. Evidence of the tendency to make automation-related omission and commission errors has been found in pilot self reports, in studies using pilots in flight simulations, and in non-flight decision making contexts with student samples. Considerable research has found that increasing social accountability can successfully ameliorate a broad array of cognitive biases and

  11. Automated Morphological Classification in Deep Hubble Space Telescope UBVI Fields: Rapidly and Passively Evolving Faint Galaxy Populations

    Science.gov (United States)

    Odewahn, Stephen C.; Windhorst, Rogier A.; Driver, Simon P.; Keel, William C.

    1996-11-01

    We analyze deep Hubble Space Telescope Wide Field Planetary Camera 2 (WFPC2) images in U, B, V, I using artificial neural network (ANN) classifiers, which are based on galaxy surface brightness and light profile (but not on color or on scale length, r_hl). The ANN distinguishes quite well between E/S0, Sabc, and Sd/Irr+M galaxies (M for merging systems) for B_J ≲ 24 mag. The faint blue galaxy counts in the B band are dominated by Sd/Irr+M galaxies and can be explained by a moderately steep local luminosity function (LF) undergoing strong luminosity evolution. We suggest that these faint late-type objects (24 mag ≲ B_J ≲ 28 mag) are a combination of low-luminosity, lower-redshift dwarf galaxies, plus compact star-forming galaxies and merging systems at z ≈ 1-3, possibly the building blocks of the luminous early-type galaxies seen today.

  12. An automated field spectrometer system for studying VIS, NIR and SWIR anisotropy for semi-arid savanna

    DEFF Research Database (Denmark)

    Huber, Silvia; Tagesson, Håkan Torbern; Fensholt, Rasmus

    2014-01-01

    The instrumental setup allows studying surface anisotropy for different phenological phases. First data retrieved from the Dahra field spectrometer system show distinctive patterns of spectrally dependent anisotropic behavior: during the rainy season, normalized reflectance was highest around solar noon for small off-nadir observation angles, but for observations at large off-nadir angles the highest values were found in the morning or evening hours (both forward and backward scatter directions). Changes in shading (analyzed by measurements from the forward and backward scatter directions) did not have a noticeable impact on the indices (0.2 % and 0.5 % difference for NDVI and SIWSI in the backward and forward scatter direction, respectively). The presented data show the large potential of continuous time series of anisotropy factors corresponding to the MODIS, SPOT and SEVIRI red, near-infrared (NIR) and shortwave-infrared (SWIR) sensor bands …

  13. Advanced health monitor for automated driving functions

    OpenAIRE

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  14. Advanced health monitor for automated driving functions

    NARCIS (Netherlands)

    Mikovski Iotov, I.

    2017-01-01

    There is a trend in the automotive domain where driving functions are taken over from the driver by automated driving functions. In order to guarantee the correct behavior of these automated driving functions, the report introduces an Advanced Health Monitor that uses Temporal Logic and Probabilistic Analysis to indicate the system's health.

  15. An automated device for the digitization and 3D modelling of insects, combining extended-depth-of-field and all-side multi-view imaging

    Directory of Open Access Journals (Sweden)

    Bernhard Ströbel

    2018-05-01

    Full Text Available Digitization of natural history collections is a major challenge in archiving biodiversity. In recent years, several approaches have emerged, allowing either automated digitization, extended depth of field (EDOF) or multi-view imaging of insects. Here, we present DISC3D: a new digitization device for pinned insects and other small objects that combines all of these aspects. A PC and a microcontroller board control the device. It features a sample holder on a motorized two-axis gimbal, allowing the specimens to be imaged from virtually any view. Ambient, mostly reflection-free illumination is ensured by two LED strips circularly installed in two hemispherical white-coated domes (front light and back light). The device is equipped with an industrial camera and a compact macro lens, mounted on a motorized macro rail. EDOF images are calculated from an image stack using a novel calibrated scaling algorithm that meets the requirements of the pinhole camera model (a unique central perspective). The images can be used to generate a calibrated, real-color textured 3D model by 'structure from motion' with visibility-consistent mesh generation. Such models are ideal for obtaining morphometric measurement data in 1D, 2D and 3D, thereby opening new opportunities for trait-based research in taxonomy, phylogeny, eco-physiology, and functional ecology.

  16. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images.

    Science.gov (United States)

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-05-22

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study proposes a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength, and the parameters of a recognition algorithm. We formulated the design problem as an optimisation problem and solved it using an experiment-based hierarchical algorithm. Evaluation experiments using translucent plastic objects showed that the proposed system produced an effective solution, with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4°, when all RGB (red, green and blue) channels were used for illumination and the R-channel image was used for recognition. Although all-RGB illumination with grey-scale images also yielded recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters of the recognition algorithm, and that tuning all RGB illumination channels is desirable even when single-channel or grey-scale images are used for recognition.

  17. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on a Distribution Automation (DA) system, which enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements.

  18. Monte Carlo simulated corrections for beam commissioning measurements with circular and MLC shaped fields on the CyberKnife M6 System: a study including diode, microchamber, point scintillator, and synthetic microdiamond detectors

    Science.gov (United States)

    Francescon, P.; Kilby, W.; Noll, J. M.; Masi, L.; Satariano, N.; Russo, S.

    2017-02-01

    Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and a point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator, the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm x 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are small, and PDD and OAR corrections at depths beyond d_max were likewise small. These data can be used when commissioning CyberKnife M6 Systems and when retrospectively checking estimated corrections used previously. We recommend that the PDD and OAR corrections are used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.

  19. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  20. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  1. Automated drawing generation system

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Kawahata, Junichi; Yoshida, Naoto; Ono, Satoru

    1991-01-01

    Since automated CAD drawing generation systems still require human intervention, improvements were focussed on the interactive processing section (data input and correcting operations), which necessitates a vast amount of work. As a result, human intervention was eliminated, achieving the original objective of a computerized system. This is the first step taken towards complete automation. The effects of the development and commercialization of the system are as described below. (1) The interactive processing time required for generating drawings was reduced: it was determined that introduction of the CAD system shortened the time required for generating drawings. (2) Differences in skill between workers preparing drawings have been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  2. Effects of dynamical paths on the energy gap and the corrections to the free energy in path integrals of mean-field quantum spin systems

    Science.gov (United States)

    Koh, Yang Wei

    2018-03-01

    In current studies of mean-field quantum spin systems, much attention is placed on the calculation of the ground-state energy and the excitation gap, especially the latter, which plays an important role in quantum annealing. In pure systems, the finite gap can be obtained by various existing methods such as the Holstein-Primakoff transform, while the tunneling splitting at first-order phase transitions has also been studied in detail using instantons in many previous works. In disordered systems, however, it remains challenging to compute the gap of large-size systems with specific realization of disorder. Hitherto, only quantum Monte Carlo techniques are practical for such studies. Recently, Knysh [Nature Comm. 7, 12370 (2016), 10.1038/ncomms12370] proposed a method where the exponentially large dimensionality of such systems is condensed onto a random potential of much lower dimension, enabling efficient study of such systems. Here we propose a slightly different approach, building upon the method of static approximation of the partition function widely used for analyzing mean-field models. Quantum effects giving rise to the excitation gap and nonextensive corrections to the free energy are accounted for by incorporating dynamical paths into the path integral. The time-dependence of the trace of the time-ordered exponential of the effective Hamiltonian is calculated by solving a differential equation perturbatively, yielding a finite-size series expansion of the path integral. Formulae for the first excited-state energy are proposed to aid in computing the gap. We illustrate our approach using the infinite-range ferromagnetic Ising model and the Hopfield model, both in the presence of a transverse field.

  3. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  4. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-controlled sample workflow (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed in cooperation with Roche Diagnostics (Barcelona, Spain) for this purpose. This software is connected to the online analyzers and to the laboratory information system, and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: a workload increase (64%) with a reduction in the cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7% to 0.4% of the tubes), and better total response times for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment, while achieving a significant headcount reduction (15% in our lab).

  5. UTILIZACIÓN DE SOFTWARE DE CORRECCIÓN AUTOMÁTICA EN EL CAMPO DE LAS CIENCIAS DE LA SALUD Using automatic correction software in the field of health sciences

    Directory of Open Access Journals (Sweden)

    Ferrán Prados

    2010-06-01

    Full Text Available We are living in an era of profound change in university education. The implementation of the Bologna plan has led us to propose new teaching methodologies, to review the role of the student, to assess by competencies, and to incorporate ICT: things unthinkable little more than a decade ago. Among the various software platforms, those that allow automatic correction of exercises deserve particular attention, because they are instruments of great pedagogical interest: they assess students instantly and provide immediate feedback on their knowledge in the form of a help message or a grade. If we add the power of the Internet to these tools, using an e-learning environment, the result makes it possible to work, correct, evaluate and resolve doubts from anywhere and at any time. This paper presents part of such a platform and the results of its use in the field of health sciences.

  6. Methods and measurement variance for field estimations of coral colony planar area using underwater photographs and semi-automated image segmentation.

    Science.gov (United States)

    Neal, Benjamin P; Lin, Tsung-Han; Winter, Rivah N; Treibitz, Tali; Beijbom, Oscar; Kriegman, David; Kline, David I; Greg Mitchell, B

    2015-08-01

    Size and growth rates for individual colonies are some of the most essential descriptive parameters for understanding coral communities, which are currently experiencing worldwide declines in health and extent. Accurately measuring coral colony size and changes over multiple years can reveal demographic, growth, or mortality patterns often not apparent from short-term observations and can expose environmental stress responses that may take years to manifest. Describing community size structure can reveal population dynamics patterns, such as periods of failed recruitment or patterns of colony fission, which have implications for the future sustainability of these ecosystems. However, rapidly and non-invasively measuring coral colony sizes in situ remains a difficult task, as three-dimensional underwater digital reconstruction methods are currently not practical for large numbers of colonies. Two-dimensional (2D) planar area measurements from projection of underwater photographs are a practical size proxy, although this method presents operational difficulties in obtaining well-controlled photographs in the highly rugose environment of the coral reef, and requires extensive time for image processing. Here, we present and test the measurement variance for a method of making rapid planar area estimates of small to medium-sized coral colonies using a lightweight monopod image-framing system and a custom semi-automated image segmentation analysis program. This method demonstrated a coefficient of variation of 2.26% for repeated measurements in realistic ocean conditions, a level of error appropriate for rapid, inexpensive field studies of coral size structure, inferring change in colony size over time, or measuring bleaching or disease extent of large numbers of individual colonies.
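    The core measurement (segment the colony in a calibrated photograph and convert the pixel area with the image scale) can be sketched with generic tools. The snippet below is an illustration with OpenCV, not the authors' custom semi-automated program; the file name and scale value are placeholders:

```python
# Estimate planar area of a coral colony from a downward-looking
# photograph: Otsu threshold, keep the largest contour, scale to cm^2.
import cv2

img = cv2.imread("colony.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
colony = max(contours, key=cv2.contourArea)

cm_per_px = 0.05          # from the framing system's known geometry
area_cm2 = cv2.contourArea(colony) * cm_per_px ** 2
print(f"planar area: {area_cm2:.1f} cm^2")
```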

  7. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  8. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  9. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  10. Automated one-loop calculations with GOSAM

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata

    2011-11-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  11. Automated one-loop calculations with GOSAM

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron, Zeuthen [DESY; Germany; Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2011-11-15

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  12. Automated MRI segmentation for individualized modeling of current flow in the human head.

    Science.gov (United States)

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Accurate placement of many high-density electrodes on an individual scalp is likewise a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, the difference in current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool segments not only the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible.

  13. Publisher Correction

    DEFF Research Database (Denmark)

    Stokholm, Jakob; Blaser, Martin J.; Thorsen, Jonathan

    2018-01-01

    The originally published version of this Article contained an incorrect version of Figure 3 that was introduced following peer review and inadvertently not corrected during the production process. Both versions contain the same set of abundance data, but the incorrect version has the children...

  14. Publisher Correction

    DEFF Research Database (Denmark)

    Flachsbart, Friederike; Dose, Janina; Gentschew, Liljana

    2018-01-01

    The original version of this Article contained an error in the spelling of the author Robert Häsler, which was incorrectly given as Robert Häesler. This has now been corrected in both the PDF and HTML versions of the Article....

  15. Correction to

    DEFF Research Database (Denmark)

    Roehle, Robert; Wieske, Viktoria; Schuetz, Georg M

    2018-01-01

    The original version of this article, published on 19 March 2018, unfortunately contained a mistake. The following correction has therefore been made in the original: The names of the authors Philipp A. Kaufmann, Ronny Ralf Buechel and Bernhard A. Herzog were presented incorrectly....

  16. Monte Carlo study of MOSFET dosimeter dose correction factors considering energy spectrum of radiation field in a steam generator channel head

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Koo; Choi, Sang Hyoun; Kim, Chan Hyeong [Hanyang Univ., Seoul (Korea, Republic of)

    2006-12-15

    In Korea, a real-time effective dose measurement system is under development. The system uses 32 high-sensitivity MOSFET dosimeters to measure radiation doses at various organ locations in an anthropomorphic physical phantom. The MOSFET dosimeters are, however, mainly made of silicon and show some degree of energy and angular dependence, especially for low-energy photons. This study determines the correction factors needed to correct for these dependences of the MOSFET dosimeters for accurate measurement of radiation doses at organ locations in the phantom. For this, the dose correction factors of the MOSFET dosimeters were first determined by Monte Carlo simulation for the energy spectrum in the steam generator channel head of the Kori Nuclear Power Plant Unit no. 1. The results were then compared with the dose correction factors for 0.662 MeV and 1.25 MeV mono-energetic photons. The differences in the dose correction factors were found to be negligible (≤1.5%), which shows that the dose correction factors determined for 0.662 MeV and 1.25 MeV can in general be used in a steam generator channel head of a nuclear power plant. The measured effective dose was generally found to decrease by ≈7% when the dose correction factors were applied.

  17. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced MCRs, various automation systems are applied to enhance human performance and reduce human error in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it has become necessary to analyze the effects of automation systems on the performance of human operators. We therefore suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The effectiveness of automation is estimated by calculating the failure probability of human performance related to the cognitive activities.

  18. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most of them are time consuming and labor intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally remove the barriers to the use of calibration in practice.
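
    A sketch of the core differencing idea, locating the lit fiducials from an LED-on/LED-off image pair (NumPy/SciPy; the threshold and array details are assumptions, not values from the paper):

        import numpy as np
        from scipy import ndimage

        def find_led_fiducials(img_on, img_off, threshold=50):
            """Return centroid (row, col) coordinates of LED blobs.

            img_on, img_off: grayscale frames captured with the checkerboard
            LEDs lit and unlit, respectively.
            """
            diff = img_on.astype(np.int16) - img_off.astype(np.int16)
            mask = diff > threshold                  # keep strong brightness increases
            labels, n = ndimage.label(mask)          # connected components
            return ndimage.center_of_mass(mask, labels, range(1, n + 1))

        # The four extreme-corner centroids would replace the manual clicks.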

  19. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment of jaws ...

  20. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y; Li, T; Heron, D; Huq, M [University of Pittsburgh Cancer Institute and UPMC CancerCenter, Pittsburgh, PA (United States)

    2015-06-15

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model of a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ Multileaf Collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm and 60mm cones of a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded, one for every 2mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose obtained from Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated on MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were spatially smooth. After correction with the mathematical model, the chamber readings matched the calculations for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix depends on the detector, the treatment unit and the setup geometry. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.
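
    A minimal sketch of a penalized least-squares fit of the kind described, with a second-difference smoothness penalty on the response factors (NumPy; the design matrix, readings, and penalty weight are placeholders, not the authors' data):

        import numpy as np

        rng = np.random.default_rng(0)
        n_shifts, n_factors = 200, 40
        A = rng.random((n_shifts, n_factors))   # maps response factors to readings
        y = rng.random(n_shifts)                # readings normalized to Monte Carlo dose

        D = np.diff(np.eye(n_factors), n=2, axis=0)   # second-difference operator
        lam = 1.0                                     # penalty weight (assumed)

        # Minimize ||A r - y||^2 + lam * ||D r||^2 via the normal equations.
        r = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
        print("first fitted response factors:", r[:5])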

  1. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    International Nuclear Information System (INIS)

    Zhang, Y; Li, T; Heron, D; Huq, M

    2015-01-01

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model of a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ Multileaf Collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm and 60mm cones of a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded, one for every 2mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose obtained from Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated on MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were spatially smooth. After correction with the mathematical model, the chamber readings matched the calculations for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix depends on the detector, the treatment unit and the setup geometry. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.

  2. Plant automation

    International Nuclear Information System (INIS)

    Christensen, L.J.; Sackett, J.I.; Dayal, Y.; Wagner, W.K.

    1989-01-01

    This paper describes work at EBR-II on the development and demonstration of new control equipment and methods, and associated schemes for plant prognosis, diagnosis, and automation. The development work has attracted the interest of other national laboratories, universities, and commercial companies. New initiatives include the use of new control strategies, expert systems, advanced diagnostics, and operator displays. EBR-II offers a unique opportunity as a test bed where a totally integrated approach to automatic reactor control can be tested directly under real power plant conditions.

  3. AUTOMATION OF CONVEYOR BELT TRANSPORT

    Directory of Open Access Journals (Sweden)

    Nenad Marinović

    1990-12-01

    Full Text Available Belt conveyor transport, although one of the most economical mining transport systems, introduces many problems in maintaining continuity of operation. Every stoppage causes economic losses. Optimal operation requires correct belt tension, correct belt position and velocity, and faultless rolls, which together are the input conditions for automation. Detection and localization of faults are essential for safety, to eliminate fire hazards, and for efficient maintenance. Detection and location of idler roll faults remain an open problem that has so far not been solved successfully (the paper is published in Croatian).

  4. ICT: isotope correction toolbox.

    Science.gov (United States)

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique for analyzing and studying the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes, especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences--in particular for complex isotopic systems--is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present Isotope Correction Toolbox (ICT), a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. ICT is written in the multi-platform programming language Perl and can therefore be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from: https://github.com/jungreuc/isotope_correction_toolbox/ {christian.jungreuthmayer@boku.ac.at,juergen.zanghellini@boku.ac.at} Supplementary data are available at Bioinformatics online.
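
    The core numerical task is removing the natural-abundance contribution. A toy single-element sketch (NumPy; the abundance and the measured distribution are invented, and a real correction such as ICT's handles full elemental compositions and tandem MS data):

        import numpy as np

        # Hypothetical measured mass isotopomer distribution (M+0 .. M+3).
        measured = np.array([0.70, 0.20, 0.08, 0.02])

        # Column j of C holds the natural-isotope pattern contributed by a
        # molecule with j labeled positions (simplified binomial model with
        # an assumed 1.1% heavy-isotope abundance over 3 positions).
        p = 0.011
        natural = np.array([(1 - p)**3, 3*p*(1 - p)**2, 3*p**2*(1 - p), p**3])
        C = np.zeros((4, 4))
        for j in range(4):
            C[j:, j] = natural[:4 - j]

        corrected = np.linalg.solve(C, measured)      # undo the convolution
        corrected = np.clip(corrected, 0, None)
        corrected /= corrected.sum()                  # renormalize
        print(corrected)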

  5. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, and fewer errors by reducing operator errors and enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to its side effects, referred to as Out-of-the-Loop (OOTL) problems, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. To propose the optimization method for determining the automation level that enables the best human performance, the automation rate and ostracism rate, estimation methods that quantitatively analyze the positive and negative effects of automation respectively, are integrated. The integration derives the shortest working time through the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed by redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  6. WIDAFELS flexible automation systems

    International Nuclear Information System (INIS)

    Shende, P.S.; Chander, K.P.; Ramadas, P.

    1990-01-01

    After discussing various aspects of automation, some typical examples of various levels of automation are given. One of the examples is an automated production line for ceramic fuel pellets. (M.G.B.)

  7. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  8. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book covers the building of automation plans and the design of automation facilities; the automation of chip-cutting processes, including the basics of cutting, NC machining and chip handling; automation units such as drilling, tapping, boring, milling and slide units; hydraulics, covering oil pressure characteristics and basic hydraulic circuits; pneumatics; and the kinds and applications of automation in processing, assembly, transportation, automatic machines and factory automation.

  9. Electroweak corrections

    International Nuclear Information System (INIS)

    Beenakker, W.J.P.

    1989-01-01

    The prospect of high-accuracy measurements investigating the weak interactions, expected to take place at the electron-positron storage ring LEP at CERN and the linear collider SLC at SLAC, offers the possibility of also studying weak quantum effects. In order to distinguish whether the measured weak quantum effects lie within the margins set by the standard model or bear traces of new physics, one has to go beyond lowest order and include electroweak radiative corrections (EWRC) in theoretical calculations. These higher-order corrections can also offer information about two particles present in the Glashow-Salam-Weinberg (GSW) model but not discovered at the time: the top quark and the Higgs boson. In ch. 2 the GSW standard model of electroweak interactions is described. In ch. 3 special techniques are described for determining integrals responsible for numerical instabilities, caused by large canceling terms, encountered in the calculation of EWRC effects, along with methods for handling the extensive algebra typical of EWRC. In ch. 4 various aspects of EWRC effects are discussed, in particular their dependence on the unknown model parameters: the masses of the top quark and the Higgs boson. The processes discussed are the production of heavy fermions in electron-positron annihilation and the fermionic decay of the Z gauge boson. (H.W.). 106 refs.; 30 figs.; 6 tabs.; schemes

  10. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  11. Comparison of the FFT/matrix inversion and system matrix techniques for higher-order probe correction in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav

    2011-01-01

    correction of general high-order probes, including non-symmetric dual-polarized antennas with independent ports. The investigation was carried out by processing with each technique the same measurement data for a challenging case with an antenna under test significantly offset from the center of rotation...

  12. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology.

    Science.gov (United States)

    Pak, Richard; McLaughlin, Anne Collins; Leidheiser, William; Rovira, Ericka

    2017-04-01

    A leading hypothesis to explain older adults' overdependence on automation is age-related decline in working memory; however, this has not been empirically examined. The purpose of the current experiment was to examine how working memory affected performance with different degrees of automation in older adults. The well-supported idea is that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails; in contrast, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with automation correctness or failure. When automation was correct, older adults with higher working memory ability performed better than those with lower ability. But when automation was incorrect, all older adults, regardless of working memory ability, performed poorly. Practitioner Summary: The design of automation intended for older adults should focus on ways of making the correctness of the automation apparent to the older user and suggest ways of helping them recover when it is malfunctioning.

  13. Automation 2017

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2017-01-01

    This book consists of papers presented at Automation 2017, an international conference held in Warsaw from March 15 to 17, 2017. It discusses research findings associated with the concepts behind INDUSTRY 4.0, with a focus on offering a better understanding of and promoting participation in the Fourth Industrial Revolution. Each chapter presents a detailed analysis of a specific technical problem, in most cases followed by a numerical analysis, simulation and description of the results of implementing the solution in a real-world context. The theoretical results, practical solutions and guidelines presented are valuable for both researchers working in the area of engineering sciences and practitioners looking for solutions to industrial problems.

  14. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    Full Text Available The automation of the marketing process seems nowadays to be the only solution to face the major changes brought by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to technical progress, marketing fragmentation and the demand for customized products and services on one side, and the need for constructive dialogue with customers, immediate and flexible responses and the necessity of measuring investments and results on the other side, the classical marketing approach has changed and continues to improve substantially.

  15. Using the automized system ''section'' to forecast velocity sections using data on borehole velocity measurement and seismic field prospecting

    Energy Technology Data Exchange (ETDEWEB)

    Dorman, M.I.; Gein, F.F.; Zubairov, F.B.

    1981-01-01

    A system for the automated processing of seismic data is examined which makes it possible to construct velocity functions at arbitrary points of a seismic prospecting section or at points coinciding with boreholes in which velocity measurements have not been completed. The forecasting method is based on data from seismic well logging investigations, seismic prospecting and some indirect observations on sections. The fundamentals of a procedure implementing the forecasting method are set forth, as are the requirements that the system as a whole must satisfy. The results of using the ''section'' system on a terrestrial section of Western Siberia are presented.

  16. Development and Application of Tools for MRI Analysis - A Study on the Effects of Exercise in Patients with Alzheimer's Disease and Generative Models for Bias Field Correction in MR Brain Imaging

    DEFF Research Database (Denmark)

    Larsen, Christian Thode

    in several cognitive performance measures, including mental speed, attention and verbal fluency. MRI suffers from an image artifact often referred to as the "bias field". This effect complicates automated analysis of the images. For this reason, bias field correction is typically an early preprocessing step...... as a "histogram sharpening" method, actually employs an underlying generative model, and that the bias field is estimated using an algorithm that is identical to generalized expectation maximization, but relies on heuristic parameter updates. The thesis progresses to present a new generative model...

  17. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    Energy Technology Data Exchange (ETDEWEB)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)]; Ay, Mohammadreza [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Medical Imaging Systems Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)]; Rad, Hamidreza Saligheh [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)]

    2014-07-29

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images, the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  18. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    International Nuclear Information System (INIS)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi; Ay, Mohammadreza; Rad, Hamidreza Saligheh

    2014-01-01

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images, the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  19. Home automation as an example of construction innovation

    NARCIS (Netherlands)

    Vlies, R.D. van der; Bronswijk, J.E.M.H. van

    2009-01-01

    Home automation can contribute to the health of (older) adults. Home automation covers a broad field of ‘intelligent’ electronic or mechanical devices in the home (domestic) environment. Realizing home automation is technically possible, though still not common. In this paper main influential

  20. Development of an automated asbestos counting software based on fluorescence microscopy.

    Science.gov (United States)

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is the automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
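
    A toy sketch of the counting step, flagging elongated bright objects in a binarized fluorescence image as candidate fibers (SciPy; the threshold and aspect-ratio test are assumptions, and real fiber recognition is considerably more involved):

        import numpy as np
        from scipy import ndimage

        def count_fibers(image, intensity_threshold=100, min_aspect=3.0):
            """Count elongated bright objects as candidate fibers."""
            mask = image > intensity_threshold
            labels, n = ndimage.label(mask)
            count = 0
            for sl in ndimage.find_objects(labels):
                h = sl[0].stop - sl[0].start
                w = sl[1].stop - sl[1].start
                long_side, short_side = max(h, w), max(min(h, w), 1)
                if long_side / short_side >= min_aspect:   # crude elongation test
                    count += 1
            return count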

  1. International Conference Automation : Challenges in Automation, Robotics and Measurement Techniques

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2016-01-01

    This book presents the set of papers accepted for presentation at the International Conference Automation, held in Warsaw, 2-4 March 2016. It presents research results from top experts in the fields of industrial automation, control, robotics and measurement techniques. Each chapter presents a thorough analysis of a specific technical problem, usually followed by numerical analysis, simulation, and description of the results of implementing the solution to a real-world problem. The presented theoretical results, practical solutions and guidelines will be valuable for both researchers working in the area of engineering sciences and practitioners solving industrial problems.

  2. Complex Automated Negotiations Theories, Models, and Software Competitions

    CERN Document Server

    Zhang, Minjie; Robu, Valentin; Matsuo, Tokuro

    2013-01-01

    Complex Automated Negotiations are a widely studied, emerging area in the field of Autonomous Agents and Multi-Agent Systems. In general, automated negotiations can be complex, since there are many factors that characterize such negotiations. For this book, we solicited papers on all aspects of such complex automated negotiations as studied in the field of Autonomous Agents and Multi-Agent Systems. The book has two parts: Part I, Agent-based Complex Automated Negotiations, and Part II, Automated Negotiation Agents Competition. Each chapter in Part I is an extended version of an ACAN 2011 paper, revised after peer review by three PC members. Part II covers ANAC 2011 (the Second Automated Negotiating Agents Competition), in which automated agents with different negotiation strategies, implemented by different developers, negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of...

  3. Illumination correction in psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    An approach to automatically correct illumination problems in dermatological images is presented. The illumination function is estimated by combining the thematic map indicating skin - produced by an automated classification scheme - with the dermatological image data. The user is only required t...
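
    One common way to implement such a correction is to fit a smooth illumination surface to the skin-masked pixels and divide it out; a sketch under that assumption (NumPy; the quadratic surface is an illustrative choice, not necessarily the authors' estimator):

        import numpy as np

        def correct_illumination(image, skin_mask):
            """Fit a quadratic illumination surface to skin pixels and flatten."""
            rows, cols = np.nonzero(skin_mask)
            vals = image[rows, cols].astype(float)
            A = np.column_stack([np.ones_like(rows), rows, cols,
                                 rows * cols, rows**2, cols**2]).astype(float)
            coef, *_ = np.linalg.lstsq(A, vals, rcond=None)

            r, c = np.indices(image.shape)
            surface = (coef[0] + coef[1]*r + coef[2]*c +
                       coef[3]*r*c + coef[4]*r**2 + coef[5]*c**2)
            surface = np.clip(surface, 1e-6, None)    # avoid division by zero
            return image / surface * surface.mean()   # preserve overall brightness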

  4. Judson_Mansouri_Automated_Chemical_Curation_QSAREnvRes_Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publically...

  5. Enhancing Cooperative Loan Scheme Through Automated Loan ...

    African Journals Online (AJOL)

    Journal Home > Vol 6, No 1 (2013) > ... The concept of automation has been variously applied in most computing fields. ... competent capabilities to eliminate data inconsistency and redundancy as well as ensuring data integrity and security, ...

  6. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  7. Mobile home automation-merging mobile value added services and home automation technologies

    OpenAIRE

    Rosendahl, Andreas; Hampe, Felix J.; Botterweck, Goetz

    2007-01-01

    non-peer-reviewed In this paper we study mobile home automation, a field that emerges from an integration of mobile application platforms and home automation technologies. In a conceptual introduction we first illustrate the need for such applications by introducing a two-dimensional conceptual model of mobility. Subsequently we suggest an architecture and discuss different options of how a user might access a mobile home automation service and the controlled devices. As another contrib...

  8. fields

    Directory of Open Access Journals (Sweden)

    Brad J. Arnold

    2014-07-01

    Full Text Available Surface irrigation, such as flood or furrow, is the predominant form of irrigation in California for agronomic crops. Compared to other irrigation methods, however, it is inefficient in terms of water use; large quantities of water, instead of being used for crop production, are lost to excess deep percolation and tail runoff. In surface-irrigated fields, irrigators commonly cut off the inflow of water when the water advance reaches a familiar or convenient location downfield, but this experience-based strategy has not been very successful in reducing the tail runoff water. Our study compared conventional cutoff practices to a retroactively applied model-based cutoff method in four commercially producing alfalfa fields in Northern California, and evaluated the model using a simple sensor system for practical application in typical alfalfa fields. These field tests illustrated that the model can be used to reduce tail runoff in typical surface-irrigated fields, and using it with a wireless sensor system saves time and labor as well as water.

  9. System of automated map design

    International Nuclear Information System (INIS)

    Ponomarjov, S.Yu.; Rybalko, S.I.; Proskura, N.I.

    1992-01-01

    The preprint 'System of automated map design' describes a program shell for constructing territory maps, drawing contour lines of an arbitrary two-dimensional field (in particular, a radionuclide concentration field). The work schedule and data structures are supplied, as well as data on system performance. The preprint can be useful for experts in radioecology and for anyone involved in territory pollution mapping or multi-purpose geochemical mapping. (author)

  10. Automated Podcasting System for Universities

    Directory of Open Access Journals (Sweden)

    Ypatios Grigoriadis

    2013-03-01

    Full Text Available This paper presents the results achieved at Graz University of Technology (TU Graz in the field of automating the process of recording and publishing university lectures in a very new way. It outlines cornerstones of the development and integration of an automated recording system such as the lecture hall setup, the recording hardware and software architecture as well as the development of a text-based search for the final product by method of indexing video podcasts. Furthermore, the paper takes a look at didactical aspects, evaluations done in this context and future outlook.

  11. 2013 Chinese Intelligent Automation Conference

    CERN Document Server

    Deng, Zhidong

    2013-01-01

    Proceedings of the 2013 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’13, held in Yangzhou, China. The topics include e.g. adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, and reconfigurable control. Engineers and researchers from academia, industry, and government can gain an inside view of new solutions combining ideas from multiple disciplines in the field of intelligent automation.   Zengqi Sun and Zhidong Deng are professors at the Department of Computer Science, Tsinghua University, China.

  12. 2013 Chinese Intelligent Automation Conference

    CERN Document Server

    Deng, Zhidong

    2013-01-01

    Proceedings of the 2013 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’13, held in Yangzhou, China. The topics include e.g. adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, and reconfigurable control. Engineers and researchers from academia, industry, and government can gain an inside view of new solutions combining ideas from multiple disciplines in the field of intelligent automation. Zengqi Sun and Zhidong Deng are professors at the Department of Computer Science, Tsinghua University, China.

  13. Automated Studies of Continuing Current in Lightning Flashes

    Science.gov (United States)

    Martinez-Claros, Jose

    Continuing current (CC) is a continuous luminosity in the lightning channel that lasts longer than 10 ms following a lightning return stroke to ground. Lightning flashes with CC are associated with direct damage to power lines and are thought to be responsible for causing lightning-induced forest fires. The development of an algorithm that automates continuing current detection by combining NLDN (National Lightning Detection Network) and LEFA (Langmuir Electric Field Array) datasets for CG flashes will be discussed. The algorithm was applied to thousands of cloud-to-ground (CG) flashes within 40 km of Langmuir Lab, New Mexico, measured during the 2013 monsoon season. It counts the number of flashes in a single minute of data and the number of return strokes of an individual lightning flash; records the time and location of each return stroke; performs peak analysis on E-field data; and uses the slope of linear fits to the interstroke interval (ISI) E-field data to recognize whether continuing current (CC) exists within the interval. Following CC detection, its duration and magnitude are measured. The longest CC observed in 5588 flashes was 631 ms. The performance of the algorithm (vs. human judgement) was checked on 100 flashes. At best, the reported algorithm is "correct" 80% of the time, where correct means that multiple stations agree with each other and with a human on both the presence and duration of CC. Of the 100 flashes that were validated against human judgement, 62% were hybrid. Automated analysis detects the first but misses the second return stroke in many cases where the second return stroke is followed by long CC. This problem is also present in human interpretation of field change records.
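
    A highly simplified sketch of the slope test described, flagging an interstroke interval as containing CC when a linear fit to the E-field record shows a sustained change (NumPy; the sampling rate and threshold are invented):

        import numpy as np

        def has_continuing_current(efield, t_start, t_stop, fs=10000.0,
                                   slope_threshold=0.5):
            """Flag CC in the interstroke interval [t_start, t_stop] seconds.

            efield: 1D array of electric-field samples at fs Hz; a fitted
            slope above slope_threshold (field units per second) is taken
            as evidence of continuing current in the channel.
            """
            segment = efield[int(t_start * fs):int(t_stop * fs)]
            t = np.arange(segment.size) / fs
            slope, _ = np.polyfit(t, segment, 1)
            return abs(slope) > slope_threshold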

  14. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  15. Automation, parallelism, and robotics for proteomics.

    Science.gov (United States)

    Alterovitz, Gil; Liu, Jonathan; Chow, Jijun; Ramoni, Marco F

    2006-07-01

    The speed of the human genome project (Lander, E. S., Linton, L. M., Birren, B., Nusbaum, C. et al., Nature 2001, 409, 860-921) was made possible, in part, by developments in automation of sequencing technologies. Before these technologies, sequencing was a laborious, expensive, and personnel-intensive task. Similarly, automation and robotics are changing the field of proteomics today. Proteomics is defined as the effort to understand and characterize proteins in the categories of structure, function and interaction (Englbrecht, C. C., Facius, A., Comb. Chem. High Throughput Screen. 2005, 8, 705-715). As such, this field nicely lends itself to automation technologies since these methods often require large economies of scale in order to achieve cost and time-saving benefits. This article describes some of the technologies and methods being applied in proteomics in order to facilitate automation within the field as well as in linking proteomics-based information with other related research areas.

  16. Comparison between the Correlations of Retinal Nerve Fiber Layer Thickness Measured by Spectral Domain Optical Coherence Tomography and Visual Field Defects in Standard Automated White-on-White Perimetry versus Pulsar Perimetry.

    Science.gov (United States)

    Alnawaiseh, Maged; Hömberg, Lisann; Eter, Nicole; Prokosch, Verena

    2017-01-01

    To compare the structure-function relationships between retinal nerve fiber layer thickness (RNFLT) and visual field defects measured either by standard automated perimetry (SAP) or by Pulsar perimetry (PP). 263 eyes of 143 patients were prospectively included. Depending on the RNFLT, patients were assigned to the glaucoma group (group A: RNFL score 3-6) or the control group (group B: RNFL score 0-2). Structure-function relationships between RNFLT and mean sensitivity (MS) measured by SAP and PP were analyzed. Across the entire sample, the MS assessed by PP and SAP correlated significantly with RNFLT in all sectors. In the glaucoma group, there was no significant difference between the RNFLT-SAP and RNFLT-PP correlations, whereas a significant difference was found in the control group. In the control group, the correlation between structure and function based on the PP data was significantly stronger than that based on SAP.

  17. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    Science.gov (United States)

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes and is entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need atlas information, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly degrades voxel intensity structure, making segmentation tasks difficult. The proposed method overcomes all these difficulties, reaching good brain extraction results using only T2*FLASH@7T volumes. This work should lead to improved automatic segmentation of brain lesions in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical multiple sclerosis lesions need to be detected.
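
    A bare-bones sketch of the standard fuzzy c-means iteration at the heart of the second method (NumPy; bias-corrected FCM adds an inhomogeneity field to the objective, which this toy version omits):

        import numpy as np

        def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50):
            """Plain FCM on a 1D array of voxel intensities x.

            Returns (centers, memberships); membership rows sum to 1.
            """
            rng = np.random.default_rng(0)
            u = rng.random((x.size, n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                w = u ** m
                centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
                d = np.abs(x[:, None] - centers[None, :]) + 1e-9
                u = d ** (-2.0 / (m - 1))               # standard FCM update
                u /= u.sum(axis=1, keepdims=True)
            return centers, u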

  18. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improving laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should center on an understanding of the laboratory and its relationship to healthcare delivery, and on the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation, are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because automation installations are few and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  19. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method for the automation rate that takes the advantages of automation as its measures. • We conduct experiments to examine the validity of the suggested method. • The higher the cognitive automation rate, the greater the decrease in working time. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the proportion of automation among all work processes or facilities. It was expected that expressing this inclusion proportion would also express the degree to which human performance is enhanced. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method for the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures in the estimation method. One advantage was found to be a reduction in the number of tasks, and another a reduction in human cognitive task load. The system and cognitive automation rates were proposed as quantitative measures based on these benefits. To quantify the required human cognitive task load and thus derive the cognitive automation rate, Conant's information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  20. Evolution of a Benthic Imaging System From a Towed Camera to an Automated Habitat Characterization System

    Science.gov (United States)

    2008-09-01

    An essential element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category.

  1. Differences in field effectiveness and adoption between a novel automated chlorination system and household manual chlorination of drinking water in Dhaka, Bangladesh: a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Amy J Pickering

    Full Text Available The number of people served by networked systems that supply intermittent and contaminated drinking water is increasing. In these settings, centralized water treatment is ineffective, while household-level water treatment technologies have not been brought to scale. This study compares a novel low-cost technology designed to passively (automatically) dispense chlorine at shared handpumps with a household-level intervention providing water disinfection tablets (Aquatab), safe water storage containers, and behavior promotion. Twenty compounds were enrolled in Dhaka, Bangladesh, and randomly assigned to one of three groups: passive chlorinator, Aquatabs, or control. Over a 10-month intervention period, the mean percentage of households whose stored drinking water had detectable total chlorine was 75% in compounds with access to the passive chlorinator, 72% in compounds receiving Aquatabs, and 6% in control compounds. Both interventions also significantly improved microbial water quality. Aquatabs usage fell by 50% after behavioral promotion visits concluded, suggesting intensive promotion is necessary for sustained uptake. The study findings suggest high potential for an automated decentralized water treatment system to increase consistent access to clean water in low-income urban communities.

  2. Differences in Field Effectiveness and Adoption between a Novel Automated Chlorination System and Household Manual Chlorination of Drinking Water in Dhaka, Bangladesh: A Randomized Controlled Trial

    Science.gov (United States)

    Pickering, Amy J.; Crider, Yoshika; Amin, Nuhu; Bauza, Valerie; Unicomb, Leanne; Davis, Jennifer; Luby, Stephen P.

    2015-01-01

    The number of people served by networked systems that supply intermittent and contaminated drinking water is increasing. In these settings, centralized water treatment is ineffective, while household-level water treatment technologies have not been brought to scale. This study compares a novel low-cost technology designed to passively (automatically) dispense chlorine at shared handpumps with a household-level intervention providing water disinfection tablets (Aquatab), safe water storage containers, and behavior promotion. Twenty compounds were enrolled in Dhaka, Bangladesh, and randomly assigned to one of three groups: passive chlorinator, Aquatabs, or control. Over a 10-month intervention period, the mean percentage of households whose stored drinking water had detectable total chlorine was 75% in compounds with access to the passive chlorinator, 72% in compounds receiving Aquatabs, and 6% in control compounds. Both interventions also significantly improved microbial water quality. Aquatabs usage fell by 50% after behavioral promotion visits concluded, suggesting intensive promotion is necessary for sustained uptake. The study findings suggest high potential for an automated decentralized water treatment system to increase consistent access to clean water in low-income urban communities. PMID:25734448

  3. Differences in field effectiveness and adoption between a novel automated chlorination system and household manual chlorination of drinking water in Dhaka, Bangladesh: a randomized controlled trial.

    Science.gov (United States)

    Pickering, Amy J; Crider, Yoshika; Amin, Nuhu; Bauza, Valerie; Unicomb, Leanne; Davis, Jennifer; Luby, Stephen P

    2015-01-01

    The number of people served by networked systems that supply intermittent and contaminated drinking water is increasing. In these settings, centralized water treatment is ineffective, while household-level water treatment technologies have not been brought to scale. This study compares a novel low-cost technology designed to passively (automatically) dispense chlorine at shared handpumps with a household-level intervention providing water disinfection tablets (Aquatab), safe water storage containers, and behavior promotion. Twenty compounds were enrolled in Dhaka, Bangladesh, and randomly assigned to one of three groups: passive chlorinator, Aquatabs, or control. Over a 10-month intervention period, the mean percentage of households whose stored drinking water had detectable total chlorine was 75% in compounds with access to the passive chlorinator, 72% in compounds receiving Aquatabs, and 6% in control compounds. Both interventions also significantly improved microbial water quality. Aquatabs usage fell by 50% after behavioral promotion visits concluded, suggesting intensive promotion is necessary for sustained uptake. The study findings suggest high potential for an automated decentralized water treatment system to increase consistent access to clean water in low-income urban communities.

  4. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would also be interesting for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems (oracle problem, combinatorial explosion, ...). Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only a few of them consider dependencies from outside the Code Under Test’s scope such...
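
    As a flavor of the combinatorial approach mentioned, a toy exhaustive generator over small input-parameter domains (Python standard library; the domains are invented, and pairwise-reduction tools exist precisely to shrink this product when it explodes):

        from itertools import product

        # Hypothetical input-parameter domains for a system under test.
        domains = {
            "browser": ["firefox", "chrome"],
            "os": ["linux", "windows", "macos"],
            "locale": ["en", "de"],
        }

        names = list(domains)
        for combo in product(*(domains[n] for n in names)):
            case = dict(zip(names, combo))
            print(case)   # in practice: run the code under test on each case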

  5. Automating quantum experiment control

    Science.gov (United States)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  6. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    Science.gov (United States)

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
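
    Once the measurements are machine-readable, the benchmark itself reduces to comparing simulated property values against the extracted experimental ones. A minimal sketch of that final comparison step, with invented density numbers rather than the paper's data:

    ```python
    import math

    # Hypothetical records: (compound, experimental density, simulated density) in g/mL.
    records = [("ethanol", 0.789, 0.776),
               ("benzene", 0.876, 0.881),
               ("acetone", 0.784, 0.769)]

    def rms_relative_error(pairs):
        """Root-mean-square relative deviation between simulation and experiment."""
        return math.sqrt(sum(((sim - exp) / exp) ** 2
                             for _, exp, sim in pairs) / len(pairs))

    print(f"RMS relative density error: {rms_relative_error(records):.1%}")
    ```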

  7. PROBLEM SETTING AND SOLUTION OF THE RESPONSE CORRECTION OF ARRIVAL AND DEPARTURE AIR TRAFFIC FLOW IN THE VICINITY OF THE FIELD BY MEANS OF THE GENETIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Georgii N. Lebedev

    2017-01-01

    Full Text Available The improvement in the effectiveness of airfield operation largely depends on the quality of problem solving at the interaction boundaries of different technological sections. One such hotspot is the use of the same runway by inbound and outbound aircraft. At certain intensities of outbound and inbound air traffic flow a conflict of aircraft interests appears, in which it may be quite difficult to sort out priorities even for experienced controllers, and as a consequence mistakes in decision-making unavoidably appear. In this work the task of response correction of the landing and takeoff times of aircraft using the same runway, under the conflict of interests “arrival – departure” at increased operating intensity, is formulated. The choice of the optimal solution is made taking mutual interests into account, without complete enumeration and evaluation of all solutions. Accordingly, a genetic algorithm is proposed, which offers a simple and effective approach to solving the optimal control problem while keeping flight safety at an acceptably high level. The estimate of additional aviation fuel consumption is used as the criterion for evaluating candidate solutions. The advantages of applying the genetic algorithm to decision-making, in comparison with today’s “team” resolution of the conflict “departure – arrival” in the airfield area, are shown.
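
    To make the setup concrete, here is a stripped-down evolutionary search (mutation-only, no crossover) over runway orderings that minimizes delay converted into extra fuel, in the spirit of the genetic algorithm described above. The flights, separation time, and fuel-burn rates are invented for illustration.

    ```python
    import random

    # Hypothetical flights: (id, kind, requested_time_min, fuel_kg_per_min_delay).
    flights = [("AR1", "arrival", 0, 50), ("DP1", "departure", 1, 20),
               ("AR2", "arrival", 2, 50), ("DP2", "departure", 2, 20)]
    SEPARATION = 2  # minutes between successive runway uses

    def fuel_cost(order):
        """Extra fuel burned when flights use the runway in the given order."""
        t, cost = -SEPARATION, 0.0
        for i in order:
            fid, kind, req, rate = flights[i]
            t = max(t + SEPARATION, req)        # earliest feasible slot
            cost += (t - req) * rate            # delay converted to fuel
        return cost

    def evolve(pop_size=30, generations=200, mutation=0.3):
        pop = [random.sample(range(len(flights)), len(flights))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fuel_cost)
            pop = pop[:pop_size // 2]           # selection: keep the fitter half
            children = []
            while len(pop) + len(children) < pop_size:
                child = random.choice(pop)[:]
                if random.random() < mutation:  # mutation: swap two slots
                    i, j = random.sample(range(len(child)), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop += children
        return min(pop, key=fuel_cost)

    best = evolve()
    print([flights[i][0] for i in best], fuel_cost(best))
    ```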

  8. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  9. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  10. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  11. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yifan; Apai, Dániel; Schneider, Glenn [Department of Astronomy/Steward Observatory, The University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Lew, Ben W. P., E-mail: yzhou@as.arizona.edu [Department of Planetary Science/Lunar and Planetary Laboratory, The University of Arizona, 1640 E. University Boulevard, Tucson, AZ 85718 (United States)

    2017-06-01

    The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon-noise-limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
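
    The charge-trapping picture can be caricatured in a few lines: a fixed population of traps fills exponentially at the start of an orbit, and the charge they capture is missing from the measured flux, producing the familiar ramp. This is only a caricature; the published model tracks trap populations with fitted trapping and release parameters per detector. All numbers below are illustrative, but they show why the first-orbit ramp is predictable and invertible.

    ```python
    import numpy as np

    def ramp_profile(t, f_true, n_traps, tau):
        """Toy charge-trapping ramp: a fixed trap population fills exponentially
        with timescale tau; the charge it captures is missing from the measured
        flux, so the signal 'ramps up' as the traps saturate."""
        trap_rate = (n_traps / tau) * np.exp(-t / tau)   # charges trapped per second
        return f_true - trap_rate

    t = np.linspace(1, 2700, 300)                        # seconds within an HST orbit
    measured = ramp_profile(t, f_true=1000.0, n_traps=5000, tau=600.0)
    corrected = measured + (5000 / 600.0) * np.exp(-t / 600.0)  # invert the model
    print(measured[0], corrected[0])                     # ramp-depressed vs recovered flux
    ```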

  12. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation are now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
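
    The simplest entry point to the field is the three-bit repetition code. The sketch below simulates the classical analogue of the quantum bit-flip code, in which majority voting corrects any single flip; real quantum codes must extract that majority (the syndrome) without directly measuring the data qubits.

    ```python
    import random

    def encode(bit):
        return [bit] * 3                      # three-bit repetition code

    def noisy(codeword, p=0.1):
        return [b ^ (random.random() < p) for b in codeword]  # independent flips

    def decode(codeword):
        return int(sum(codeword) >= 2)        # majority vote corrects one flip

    trials = 100_000
    fails = sum(decode(noisy(encode(0))) != 0 for _ in range(trials))
    # Logical error rate ~ 3p^2 - 2p^3 = 0.028, well below the physical 0.1.
    print(f"logical error rate: {fails / trials:.4f}")
    ```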

  13. Evaluation of the Analytical Anisotropic Algorithm (AAA) in dose calculation for fields with non-uniform fluences considering heterogeneity correction

    Energy Technology Data Exchange (ETDEWEB)

    Bornatto, P.; Funchal, M.; Bruning, F.; Toledo, H.; Lyra, J.; Fernandes, T.; Toledo, F.; Marciao, C., E-mail: pricila_bornatto@yahoo.com.br [Hospital Erasto Gaertner (LPCC), Curitiba, PR (Brazil). Departamento de Radioterapia

    2014-08-15

    The purpose of this study is to evaluate the AAA dose distribution calculation (Varian Medical Systems) for fields with non-uniform fluences, considering heterogeneity correction. Five phantoms built from materials of different densities were used. These phantoms were scanned in a BrightSpeed CT scanner (GE Healthcare) on top of the MAPCHECK2 detector array (Sun Nuclear Corporation) and irradiated on a 600 CD linear accelerator (Varian Medical Systems) at 6 MV and a dose rate of 400 MU/min with an isocentric setup. The fluences used were exported from IMRT plans calculated by the ECLIPSE planning system (Varian Medical Systems), plus a 10×10 cm² field to assess the heterogeneity correction for a uniform fluence. The measured dose distribution was compared to the calculated one by gamma analysis with approval criteria of 3%/3 mm and a 10% threshold. The evaluation was performed using the SNCPatient software (Sun Nuclear Corporation), considering absolute dose normalized at the maximum. The best-performing phantoms were those with low-density materials, with an average of 99.2% of points approved. Phantoms with plates of higher-density material, in contrast, presented several fluences with fewer than 95% of points approved, the average reaching 94.3%. A dependence of the approval rate on the fluence was observed, given that for one and the same fluence 100% of the points were approved in all phantoms. Since the approval criterion for IMRT plans recommended in most centers is 3%/3 mm with at least 95% of points approved, it can be concluded that, under these conditions, IMRT plans with heterogeneity correction can be carried out; however, quality control must be careful because of the difficulty the system has in accurately predicting the dose distribution in certain situations. (author)
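
    For readers unfamiliar with the metric, here is a minimal sketch of a global gamma-index evaluation for 1-D dose profiles under the 3%/3 mm criterion; the low-dose threshold used in the study is omitted for brevity, and the dose profiles are toy data.

    ```python
    import numpy as np

    def gamma_index(ref, meas, coords, dose_tol=0.03, dist_tol=3.0):
        """Global gamma: for each reference point, the minimum over measured
        points of sqrt((dose diff / tolerance)^2 + (distance / tolerance)^2),
        with dose differences normalized to the reference maximum."""
        norm = dose_tol * ref.max()
        gammas = np.empty_like(ref)
        for i, (d_r, x_r) in enumerate(zip(ref, coords)):
            dd = (meas - d_r) / norm
            dx = (coords - x_r) / dist_tol
            gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
        return gammas

    x = np.arange(0, 100, 1.0)                       # positions in mm
    reference = np.exp(-((x - 50) / 20) ** 2)        # toy dose profile
    measured = 1.02 * np.exp(-((x - 51) / 20) ** 2)  # 2% / 1 mm perturbation
    passing = (gamma_index(reference, measured, x) <= 1).mean()
    print(f"{passing:.1%} of points pass 3%/3 mm")
    ```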

  14. Automated solid-phase subcloning based on beads brought into proximity by magnetic force.

    Science.gov (United States)

    Hudson, Elton P; Nikoshkov, Andrej; Uhlen, Mathias; Rockberg, Johan

    2012-01-01

    In the fields of proteomics, metabolic engineering and synthetic biology there is a need for high-throughput and reliable cloning methods to facilitate construction of expression vectors and genetic pathways. Here, we describe a new approach for solid-phase cloning in which both the vector and the gene are immobilized on separate paramagnetic beads and brought into proximity by magnetic force. Ligation events were directly evaluated using fluorescence-based microscopy and flow cytometry. The highest ligation efficiencies were obtained when gene- and vector-coated beads were brought into close contact by application of a magnet during the ligation step. An automated procedure was developed using a laboratory workstation to transfer genes into various expression vectors, and more than 95% correct clones were obtained in a variety of applications. The method presented here is suitable for efficient subcloning in an automated manner to rapidly generate a large number of gene constructs in various vectors intended for high-throughput applications.

  15. Low-field magnetic resonance imaging or combined ultrasonography and anti-cyclic citrullinated peptide antibody improve correct classification of individuals as established rheumatoid arthritis

    DEFF Research Database (Denmark)

    Pedersen, Jens K; Lorenzen, Tove; Ejbjerg, Bo

    2014-01-01

    (RA). METHODS: In 53 individuals from a population-based, cross-sectional study, historic fulfilment of the American College of Rheumatology (ACR) 1987 criteria ("classification") or RA diagnosed by a rheumatologist ("diagnosis") were used as standard references. The sensitivity, specificity and Area....../specificity) was 78% (62%/94%) (classification) and 85% (69%/100%) (diagnosis), while for the total synovitis score of MCP joints plus wrist (cut-off ≥10) it was 78% (62%/94%) (both classification and diagnosis). CONCLUSIONS: Compared with the ACR 1987 criteria, low-field MRI alone or adapted criteria incorporating...

  16. Brain refractive index measured in vivo with high-NA defocus-corrected full-field OCT and consequences for two-photon microscopy.

    Science.gov (United States)

    Binding, Jonas; Ben Arous, Juliette; Léger, Jean-François; Gigan, Sylvain; Boccara, Claude; Bourdieu, Laurent

    2011-03-14

    Two-photon laser scanning microscopy (2PLSM) is an important tool for in vivo tissue imaging with sub-cellular resolution, but the penetration depth of current systems is potentially limited by sample-induced optical aberrations. To quantify these, we measured the refractive index n' in the somatosensory cortex of 7 rats in vivo using defocus optimization in full-field optical coherence tomography (ff-OCT). We found n' to be independent of imaging depth or rat age. From these measurements, we calculated that two-photon imaging beyond 200 µm into the cortex is limited by spherical aberration, indicating that adaptive optics will improve imaging depth.

  17. Fast Automated Decoupling at RHIC

    CERN Document Server

    Beebe-Wang, Joanne

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated decoupling application has been developed at RHIC for coupling correction during routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program provides options for automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (phase-locked loop), the high-frequency Schottky system, and the tune meter. It also performs tune and skew quadrupole scans, finds the minimum tune separation, displays the real-time results, and interfaces with the RHIC control system. We summarize the capabilities of the decoupling application...
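
    Global decoupling amounts to a two-parameter minimization of the measured tune separation over the strengths of the two orthogonal skew-quadrupole families. The sketch below substitutes a toy analytic model for the live tune measurement and does a brute-force scan; the RHIC application instead adjusts the families against real PLL or Schottky tune readings.

    ```python
    import numpy as np

    def tune_split(k1, k2, coupling=(0.004, -0.003), dq_min=0.001):
        """Toy model: the measured tune separation has an irreducible part dq_min
        plus a coupling term that two skew-quad families (k1, k2) can cancel."""
        c = np.hypot(coupling[0] + k1, coupling[1] + k2)
        return np.hypot(dq_min, c)

    # Grid scan over both families, standing in for the automated minimization.
    k = np.linspace(-0.01, 0.01, 81)
    K1, K2 = np.meshgrid(k, k)
    split = tune_split(K1, K2)
    i, j = np.unravel_index(split.argmin(), split.shape)
    print(f"best skew settings: k1={K1[i, j]:+.4f}, k2={K2[i, j]:+.4f}, "
          f"dQmin={split[i, j]:.4f}")
    ```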

  18. Automated 3-D Radiation Mapping

    International Nuclear Information System (INIS)

    Tarpinian, J. E.

    1991-01-01

    This work describes an automated radiation detection and imaging system which combines several state-of-the-art technologies to produce a portable but very powerful visualization tool for planning work in radiation environments. The system combines a radiation detection system, a computerized radiation imaging program, and computerized 3-D modeling to automatically locate and measure radiation fields. Measurements are automatically collected, and imaging techniques are used to produce colored 'isodose' images of the measured radiation fields. The isodose lines from the images are then superimposed over the 3-D model of the area. The final display shows the various components in a room and their associated radiation fields. The use of an automated radiation detection system increases the quality of the radiation survey measurements obtained. The additional use of a three-dimensional display allows easier visualization of the area and associated radiological conditions than two-dimensional sketches

  19. Safeguards Automated Facility Evaluation (SAFE) methodology

    International Nuclear Information System (INIS)

    Chapman, L.D.; Grady, L.M.; Bennett, H.A.; Sasser, D.W.; Engi, D.

    1978-08-01

    An automated approach to facility safeguards effectiveness evaluation has been developed. This automated process, called Safeguards Automated Facility Evaluation (SAFE), consists of a collection of operational modules for facility characterization, the selection of critical paths, and the evaluation of safeguards effectiveness along these paths. The technique has been implemented on an interactive computer time-sharing system and makes use of computer graphics for the processing and presentation of information. Using this technique, a comprehensive evaluation of a safeguards system can be provided by systematically varying the parameters that characterize the physical protection components of a facility to reflect the perceived adversary attributes and strategy, environmental conditions, and site operational conditions. The SAFE procedure has broad applications in the nuclear facility safeguards field as well as in the security field in general. Any fixed facility containing valuable materials or components to be protected from theft or sabotage could be analyzed using this same automated evaluation technique

  20. Quantum corrections to Schwarzschild black hole

    Energy Technology Data Exchange (ETDEWEB)

    Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)

    2017-04-15

    Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)

  1. Performance evaluation of contrast-detail in full field digital mammography systems using ideal (Hotelling) observer vs. conventional automated analysis of CDMAM images for quality control of contrast-detail characteristics.

    Science.gov (United States)

    Delakis, Ioannis; Wise, Robert; Morris, Lauren; Kulama, Eugenia

    2015-11-01

    The purpose of this work was to evaluate the contrast-detail performance of full field digital mammography (FFDM) systems using ideal (Hotelling) observer Signal-to-Noise Ratio (SNR) methodology and ascertain whether it can be considered an alternative to the conventional, automated analysis of CDMAM phantom images. Five FFDM units currently used in the national breast screening programme were evaluated, which differed with respect to age, detector, Automatic Exposure Control (AEC) and target/filter combination. Contrast-detail performance was analysed using CDMAM and ideal observer SNR methodology. The ideal observer SNR was calculated for input signal originating from gold discs of varying thicknesses and diameters, and then used to estimate the threshold gold thickness for each diameter as per CDMAM analysis. The variability of both methods and the dependence of CDMAM analysis on phantom manufacturing discrepancies were also investigated. Results from both CDMAM and ideal observer methodologies were informative differentiators of FFDM systems' contrast-detail performance, displaying comparable patterns with respect to the FFDM systems' type and age. CDMAM results suggested higher threshold gold thickness values compared with the ideal observer methodology, especially for small-diameter details, which can be attributed to the behaviour of the CDMAM phantom used in this study. In addition, ideal observer methodology results showed lower variability than CDMAM results. The ideal observer SNR methodology can provide a useful metric of the FFDM systems' contrast-detail characteristics and could be considered a surrogate for conventional, automated analysis of CDMAM images. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
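
    For a known signal in Gaussian noise, the ideal linear (Hotelling) observer's detectability is SNR² = sᵀK⁻¹s, where s is the expected signal and K the noise covariance. The sketch below computes this for a toy disc on a white-noise background; a real CDMAM-style analysis would use the measured noise covariance of the FFDM system and map SNR versus disc diameter onto threshold gold thickness.

    ```python
    import numpy as np

    def hotelling_snr(signal, noise_cov):
        """Ideal (Hotelling) observer detectability for a known signal in
        Gaussian noise: SNR = sqrt(s^T K^-1 s)."""
        return float(np.sqrt(signal @ np.linalg.solve(noise_cov, signal)))

    # Toy detection task: a small Gaussian disc on a 16x16 image, white noise.
    n = 16
    y, x = np.mgrid[:n, :n]
    disc = 0.5 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 8).ravel()
    cov = np.eye(n * n) * 0.25                 # white noise, variance 0.25
    print(f"Hotelling SNR: {hotelling_snr(disc, cov):.2f}")
    ```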

  2. Automated ISS Flight Utilities

    Science.gov (United States)

    Offermann, Jan Tuzlic

    2016-01-01

    EVADES output. As mentioned above, GEnEVADOSE makes extensive use of ROOT version 6, the data analysis framework developed at the European Organization for Nuclear Research (CERN), and the code is written to the C++11 standard (as are the other projects). My second project is the Automated Mission Reference Exposure Utility (AMREU). Unlike GEnEVADOSE, AMREU is a combination of three frameworks written in both Python and C++, also making use of ROOT (and PyROOT). Run as a combination of daily and weekly cron jobs, these macros query the SRAG database system to determine the active ISS missions, and query minute-by-minute radiation dose information from ISS-TEPC (Tissue Equivalent Proportional Counter), one of the radiation detectors onboard the ISS. Using this information, AMREU creates a corrected data set of daily radiation doses, addressing situations where TEPC may be offline or locked up by correcting doses for days with less than 95% live time (the total amount of time the instrument acquires data) by averaging the past 7 days. As not all errors may be automatically detectable, AMREU also allows for manual corrections, checking an updated plaintext file each time it runs. With the corrected data, AMREU generates cumulative dose plots for each mission, and uses a Python script to generate a flight note file (.docx format) containing these plots, as well as information sections to be filled in and modified by the space weather environment officers with information specific to the week. AMREU is set up to run without requiring any user input, and it automatically archives old flight notes and information files for missions that are no longer active. My other projects involve cleaning up a large data set from the Charged Particle Directional Spectrometer (CPDS), joining together many different data sets in order to clean up information in SRAG SQL databases, and developing other automated utilities for displaying information on active solar regions, that may be used by the
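
    The live-time correction described above is simple enough to sketch. This is not AMREU's actual code; the function name, structure, and dose values are illustrative, showing only the rule of replacing a low-live-time day with the average of the preceding accepted days.

    ```python
    from statistics import mean

    def corrected_daily_doses(doses, live_times, threshold=0.95, window=7):
        """Replace the dose for any day whose live-time fraction falls below
        the threshold with the mean of up to `window` preceding accepted
        daily doses (illustrative reconstruction of the rule above)."""
        corrected = []
        for dose, live in zip(doses, live_times):
            if live >= threshold or not corrected:
                corrected.append(dose)
            else:
                corrected.append(mean(corrected[-window:]))
        return corrected

    doses = [0.21, 0.22, 0.05, 0.20, 0.23]   # mGy/day; day 3: instrument locked up
    live = [0.99, 0.98, 0.40, 0.97, 0.99]    # live-time fraction per day
    print(corrected_daily_doses(doses, live))
    ```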

  3. Correction magnet power supplies for APS machine

    International Nuclear Information System (INIS)

    Kang, Y.G.

    1991-04-01

    A number of correction magnets are required for the advanced photon source (APS) machine to correct the beam. There are five kinds of correction magnets for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring (PAR). Table I shows a summary of the correction magnet power supplies for the APS machine. For the storage ring, the displacement of the quadrupole magnets due to low frequency vibration below 25 Hz has the most significant effect on the stability of the positron closed orbit. The primary external source of the low frequency vibration is the ground motion of approximately 20 μm amplitude, with frequency components concentrated below 10 Hz. These low frequency vibrations can be corrected by using the correction magnets, whose field strengths are controlled individually through the feedback loop comprising the beam position monitoring system. The required correction field could be either positive or negative. Thus for all the correction magnets, bipolar power supplies (BPSs) are required to produce both polarities of correction fields. Three different types of BPS are used for all the correction magnets. Type I BPSs cover all the correction magnets for the storage ring, except for the trim dipoles. The maximum output current of the Type I BPS is 140 Adc. A Type II BPS powers a trim dipole, and its maximum output current is 60 Adc. The injector synchrotron and PAR correction magnets are powered from Type III BPSs, whose maximum output current is 25 Adc

  4. Automated one-loop calculations with GoSam

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata; Mastrolia, Pierpaolo; Ossola, Giovanni; Tramontano, Francesco

    2012-01-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  6. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  7. Towards quantitative analysis of core-shell catalyst nano-particles by aberration corrected high angle annular dark field STEM and EDX

    International Nuclear Information System (INIS)

    Haibo, E; Nellist, P D; Lozano-Perez, S; Ozkaya, D

    2010-01-01

    Core-shell structured heterogeneous catalyst nano-particles offer the promise of more efficient precious metal usage as well as novel functionalities, but are as yet poorly characterised due to large compositional variations over short ranges. The high-angle annular dark-field detector in a scanning transmission electron microscope is frequently used to image at high resolution because of its Z-contrast and incoherent imaging process, but generally little attention is paid to quantification. Energy-dispersive X-ray analysis provides information on thickness and chemical composition and, used in conjunction with HAADF-STEM, aids interpretation of imaged nano-particles. We present important calibrations and initial data for truly quantitative high-resolution analysis.

  8. Development of a Small, Inexpensive, and Field-deployable Gas Chromatograph for the Automated Collection, Separation, and Analysis of Gas-phase Organic Compounds

    Science.gov (United States)

    Skog, K.; Xiong, F.; Gentner, D. R.

    2017-12-01

    The identification and quantification of gas-phase organic compounds, like volatile organic compounds (VOCs), in the atmosphere relies on separation of complex mixtures and sensitive detection. Gas chromatography (GC) is widely applied, but requires high-purity compressed gases for separation and, often, for detection. We have developed a low-cost, compact GC-based system for the collection and quantitative chemical speciation of complex mixtures of common atmospheric VOCs without the need for compressed high-purity gases or expensive detectors. We present results of lab and field testing against a commercially available GC system. At optimized linear velocities, challenging VOC pairs of similar volatility were resolved within 30 minutes, including n- and i-pentane; n-pentane and isoprene; and ethylbenzene and m/p-xylene. For 5-30 minute samples, we observe ppt-level detection limits for common VOCs such as benzene, toluene, ethylbenzene, xylenes, alpha-pinene, and limonene. We also present results of in-field use for VOC measurements. In all, this instrument is accurate, precise, small, and inexpensive (<$2500). Its lack of compressed gas cylinders makes it ideal for field deployment, and it has been demonstrated to produce data of similar quality to available GC technology.

  9. Field assessment of semi-aerobic condition and the methane correction factor for the semi-aerobic landfills provided by IPCC guidelines

    International Nuclear Information System (INIS)

    Jeong, Sangjae; Nam, Anwoo; Yi, Seung-Muk; Kim, Jae Young

    2015-01-01

    Highlights: • CH4/CO2 and CH4 + CO2% are proposed as indices to evaluate semi-aerobic landfills. • A landfill with CH4/CO2 > 1.0 is difficult to categorize as a semi-aerobic landfill. • Field conditions should be carefully investigated to determine landfill types. • The MCF default value for semi-aerobic landfills underestimates methane emissions. - Abstract: According to IPCC guidelines, a semi-aerobic landfill site produces one-half of the amount of CH4 produced by an equally-sized anaerobic landfill site. Therefore categorizing the landfill type is important for greenhouse gas inventories. In order to assess the semi-aerobic condition of the sites and the MCF value for semi-aerobic landfills, landfill gas was measured from vent pipes in five semi-aerobically designed landfills in South Korea. All five sites satisfied the requirements for semi-aerobic landfills in the 2006 IPCC guidelines. However, the ends of the leachate collection pipes, which are the main entrance of air into a semi-aerobic landfill, were closed at all five sites. The CH4/CO2 ratio in landfill gas, an indicator of aerobic versus anaerobic decomposition, ranged from 1.08 to 1.46, which is higher than the values (0.3–1.0) reported for semi-aerobic landfill sites and rather close to those (1.0–2.0) for anaerobic landfill sites. The low CH4 + CO2% in the landfill gas implied air intrusion into the landfills. However, there was no evidence that this air intrusion was caused by semi-aerobic design and operation. Therefore, the landfills investigated in this study are difficult to classify as semi-aerobic landfills. Also, an MCF of 0.5 may significantly underestimate methane emissions compared with other studies. According to the carbon mass balance analyses, a higher MCF should be proposed for semi-aerobic landfills. Consequently, methane emission estimates should be based on field evaluation for semi-aerobically designed landfills
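
    The role of the MCF in inventory arithmetic is visible in the IPCC default (mass-balance) method, where the methane estimate scales linearly with the MCF, so assigning 0.5 to a landfill that actually behaves anaerobically halves the reported emissions. A simplified sketch (the recovered-methane and fraction-landfilled terms are omitted; parameter values are illustrative defaults, not the study's data):

    ```python
    def ipcc_default_methane(msw_tonnes, mcf, doc=0.15, doc_f=0.5, f=0.5, ox=0.1):
        """Simplified IPCC default-method CH4 estimate (tonnes): waste mass times
        degradable organic carbon terms, the methane correction factor (MCF),
        the CH4 fraction of landfill gas, a 16/12 C-to-CH4 mass conversion,
        and an oxidation correction."""
        return msw_tonnes * mcf * doc * doc_f * f * (16 / 12) * (1 - ox)

    waste = 100_000  # tonnes landfilled
    for mcf in (0.5, 0.8, 1.0):   # semi-aerobic default vs higher values
        print(f"MCF={mcf}: {ipcc_default_methane(waste, mcf):,.0f} t CH4")
    ```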

  11. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment ...

  12. An automated wide-field time-gated optically sectioning fluorescence lifetime imaging multiwell plate reader for high-content analysis of protein-protein interactions

    Science.gov (United States)

    Alibhai, Dominic; Kumar, Sunil; Kelly, Douglas; Warren, Sean; Alexandrov, Yuriy; Munro, Ian; McGinty, James; Talbot, Clifford; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Dunsby, Chris; French, Paul M. W.

    2011-03-01

    We describe an optically-sectioned FLIM multiwell plate reader that combines Nipkow microscopy with wide-field time-gated FLIM, and its application to high-content analysis of FRET. The system acquires optically sectioned FLIM images of cells expressing fluorescent proteins. It has been applied to study the formation of immature HIV virus-like particles (VLPs) in live cells by monitoring Gag-Gag protein interactions, using FLIM FRET in cells transfected with HIV-1 Gag labelled with CFP or YFP. VLP formation results in FRET between closely packed Gag proteins, as confirmed by our FLIM analysis, which includes automatic image segmentation.
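
    The quantity extracted from such lifetime measurements is the FRET efficiency, computed from the donor lifetime with and without the acceptor via the standard relation E = 1 - τ_DA / τ_D. The lifetimes below are illustrative values, not the study's measurements.

    ```python
    def fret_efficiency(tau_da, tau_d):
        """FRET efficiency from donor fluorescence lifetimes:
        E = 1 - tau_DA / tau_D, where tau_DA is the donor lifetime in the
        presence of the acceptor and tau_D the unquenched donor lifetime."""
        return 1.0 - tau_da / tau_d

    # Illustrative lifetimes (ns): quenched vs free donor fluorophore.
    print(f"E = {fret_efficiency(2.0, 2.6):.2f}")
    ```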

  13. Comparison between the full cylindrical correction and the spherical equivalent in the execution of automated perimetry

    Directory of Open Access Journals (Sweden)

    Paulo Leonardi Filho

    2004-08-01

    Full Text Available PURPOSE: To verify whether there is a statistically significant difference between automated perimetry examinations performed with the full cylindrical correction and with the spherical equivalent, in patients with cylindrical ametropia of 1.50 dioptres or greater. METHODS: Twenty patients (35 eyes) underwent visual field testing with Humphrey perimetry (SITA 24-2 strategy), using the full cylindrical correction in one examination and the spherical equivalent in the other, in random sequence. Mean Deviation, Pattern Standard Deviation, fixation losses, false positive and false negative results and test duration were compared. RESULTS: No statistically significant difference was found regarding Mean Deviation, Pattern Standard Deviation, false positive and false negative results and test duration. Fixation loss was significantly higher with the full cylindrical correction. CONCLUSION: Visual fields performed with the spherical equivalent show no difference in retinal sensitivity when compared with those using the full cylindrical correction, within the parameters adopted in this study.
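
    The spherical equivalent tested above is the standard single-number reduction of a sphero-cylindrical prescription, SE = sphere + cylinder / 2:

    ```python
    def spherical_equivalent(sphere, cylinder):
        """Spherical equivalent of a sphero-cylindrical prescription,
        in dioptres: SE = sphere + cylinder / 2."""
        return sphere + cylinder / 2.0

    # e.g. a -1.00 DS / -2.00 DC prescription (cylinder >= 1.50 D as in the study)
    print(spherical_equivalent(-1.00, -2.00))   # -2.00 D
    ```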

  14. Novel insights in agent-based complex automated negotiation

    CERN Document Server

    Lopez-Carmona, Miguel; Ito, Takayuki; Zhang, Minjie; Bai, Quan; Fujita, Katsuhide

    2014-01-01

    This book focuses on all aspects of complex automated negotiations, which are studied in the field of autonomous agents and multi-agent systems. This book consists of two parts: I, Agent-Based Complex Automated Negotiations, and II, Automated Negotiation Agents Competition. The chapters in Part I are extended versions of papers presented at the 2012 international workshop on Agent-Based Complex Automated Negotiation (ACAN), after peer review by three Program Committee members. Part II examines in detail ANAC 2012 (The Third Automated Negotiating Agents Competition), in which automated agents that have different negotiation strategies and are implemented by different developers negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of universities and research institutes across the world, are evaluated in tournament style. The purpose of the competition is to steer the research in the area of bilate...

  15. Automated Testing of Event-Driven Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning

    may be tested by selecting an interesting input (i.e. a sequence of events), and deciding if a failure occurs when the selected input is applied to the event-driven application under test. Automated testing promises to reduce the workload for developers by automatically selecting interesting inputs...... and detect failures. However, it is non-trivial to conduct automated testing of event-driven applications because of, for example, infinite input spaces and the absence of specifications of correct application behavior. In this PhD dissertation, we identify a number of specific challenges when conducting...... automated testing of event-driven applications, and we present novel techniques for solving these challenges. First, we present an algorithm for stateless model-checking of event-driven applications with partial-order reduction, and we show how this algorithm may be used to systematically test web...

  16. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  17. Automation bias: empirical results assessing influencing factors.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal - trust and confidence; non-attitudinal - decision support experience and clinical experience; and environmental - task difficulty. The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers - advice was incorrect in 6 scenarios. They were asked to prescribe for each case, and were then shown simulated advice. Participants were then asked whether they wished to change their prescription, and the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations - correct to incorrect prescription switching. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience and general trust in CDSS were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Advances in Automation and Robotics

    CERN Document Server

    International conference on Automation and Robotics ICAR2011

    2012-01-01

    The international conference on Automation and Robotics, ICAR2011, was held during December 12-13, 2011 in Dubai, UAE. The proceedings of ICAR2011 have been published in the Springer Lecture Notes in Electrical Engineering, and include 163 excellent papers selected from more than 400 submissions. The conference was intended to bring together researchers and engineers/technologists working on different aspects of intelligent control systems and optimization, robotics and automation, signal processing, sensors, systems modeling and control, industrial engineering, production and management. This part of the proceedings includes 81 papers contributed by researchers in relevant topic areas covered at ICAR2011 from various countries such as France, Japan, USA, Korea and China. Many papers present recent advances in research; some offer new solutions to problems in the field, with strong evidence and detailed demonstration, while others describe the application of their designed and...

  19. Automating occupational protection records systems

    International Nuclear Information System (INIS)

    Lyon, M.; Martin, J.B.

    1991-10-01

    Occupational protection records have traditionally been generated by field and laboratory personnel, assembled into files in the safety office, and eventually stored in a warehouse or other facility. Until recently, these records have been primarily paper copies, often handwritten. Sometimes, the paper is microfilmed for storage. However, electronic records are beginning to replace these traditional methods. The purpose of this paper is to provide guidance for making the transition to automated record keeping and retrieval using modern computer equipment. This paper describes the types of records most readily converted to electronic record keeping and a methodology for implementing an automated record system. The process of conversion is based on a requirements analysis to assess program needs and a high level of user involvement during the development. The importance of indexing the hard copy records for easy retrieval is also discussed. The concept of linkage between related records and its importance relative to reporting, research, and litigation will be addressed. 2 figs

  20. Sensors and Automated Analyzers for Radionuclides

    International Nuclear Information System (INIS)

    Grate, Jay W.; Egorov, Oleg B.

    2003-01-01

    The production of nuclear weapons materials has generated large quantities of nuclear waste and significant environmental contamination. We have developed new, rapid, automated methods for determination of radionuclides using sequential injection methodologies to automate extraction chromatographic separations, with on-line flow-through scintillation counting for real time detection. This work has progressed in two main areas: radionuclide sensors for water monitoring and automated radiochemical analyzers for monitoring nuclear waste processing operations. Radionuclide sensors have been developed that collect and concentrate radionuclides in preconcentrating minicolumns with dual functionality: chemical selectivity for radionuclide capture and scintillation for signal output. These sensors can detect pertechnetate to below regulatory levels and have been engineered into a prototype for field testing. A fully automated process monitor has been developed for total technetium in nuclear waste streams. This instrument performs sample acidification, speciation adjustment, separation and detection in fifteen minutes or less

  1. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems for radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second of the four basic processes in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette, and a fully automated pipetting station, both providing fast and accurate dispensing of the reagent or dilution of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  2. SU-G-IeP1-01: A Novel MRI Post-Processing Algorithm for Visualization of the Prostate LDR Brachytherapy Seeds and Calcifications Based On B0 Field Inhomogeneity Correction and Hough Transform

    Energy Technology Data Exchange (ETDEWEB)

    Nosrati, R [Ryerson University, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Soliman, A; Owrangi, A [Sunnybrook Research Institute, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Ghugre, N [Sunnybrook Research Institute, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada); Morton, G [Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada); Pejovic-Milic, A [Ryerson University, Toronto, Ontario (Canada); Song, W [Ryerson University, Toronto, Ontario (Canada); Sunnybrook Research Institute, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: This study aims at developing an MRI-only workflow for post-implant dosimetry of prostate LDR brachytherapy seeds. The specific goal here is to develop a post-processing algorithm that produces positive contrast for the seeds and prostatic calcifications and differentiates between them on MR images. Methods: An agar-based phantom incorporating four dummy seeds (I-125) and five calcifications of different sizes (from sheep cortical bone) was constructed. Seeds were placed arbitrarily in the coronal plane. The phantom was scanned with a 3T Philips Achieva MR scanner using an 8-channel head coil array. Multi-echo turbo spin echo (ME-TSE) and multi-echo gradient recalled echo (ME-GRE) sequences were acquired. Due to minimal susceptibility artifacts around the seeds, the ME-GRE sequence (flip angle=15; TR/TE=20/2.3/2.3; resolution=0.7×0.7×2 mm³) was further processed. The field inhomogeneity induced by the presence of the titanium-encapsulated seeds was corrected using a B0 field map. The B0 map was calculated from the ME-GRE sequence as the phase difference at two different echo times. Initially, the product of the first echo and the B0 map was calculated. The features corresponding to the seeds were then extracted in three steps: 1) the edge pixels were isolated using the Prewitt operator; 2) the Hough transform was employed to detect ellipses approximately matching the dimensions of the seeds; and 3) at the position and orientation of each detected ellipse, an ellipse was drawn on the B0-corrected image. Results: The proposed B0-correction process produced positive contrast for the seeds and calcifications. The Hough transform based on the Prewitt edge operator successfully identified all the seeds according to their ellipsoidal shape and dimensions in the edge image. Conclusion: The proposed post-processing algorithm successfully visualized the seeds and calcifications with positive contrast and differentiated between them according to their shapes. Further
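
    A compressed sketch of the two computational steps described above: the two-echo B0 field map and the edge-plus-Hough seed search. scikit-image is assumed for the Prewitt and Hough operations; array sizes, thresholds, and ellipse axis limits are illustrative, not the study's values.

    ```python
    import numpy as np
    from skimage.filters import prewitt
    from skimage.transform import hough_ellipse

    def b0_map(echo1, echo2, delta_te):
        """Field map (Hz) from the phase difference between two complex
        gradient-echo images whose echo times differ by delta_te (s)."""
        dphi = np.angle(echo2 * np.conj(echo1))        # wrapped phase difference
        return dphi / (2 * np.pi * delta_te)

    def seed_candidates(b0_weighted, edge_thresh=0.1):
        """Prewitt edge map followed by an ellipse Hough transform, keeping
        ellipses whose axes roughly match a seed's in-plane footprint
        (axis limits in pixels are illustrative, resolution dependent)."""
        edges = prewitt(b0_weighted) > edge_thresh
        return hough_ellipse(edges, min_size=3, max_size=10, accuracy=5)

    # Tiny synthetic check of the field-map step (uniform 50 Hz off-resonance):
    echo1 = np.ones((8, 8), dtype=complex)
    echo2 = echo1 * np.exp(2j * np.pi * 50.0 * 0.0023)   # 2.3 ms echo spacing
    print(b0_map(echo1, echo2, 0.0023).mean())           # ~50.0 Hz
    ```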

  3. Classical Electron Model with QED Corrections

    OpenAIRE

    Lenk, Ron

    2010-01-01

    In this article we build a metric for a classical general relativistic electron model with QED corrections. We calculate the stress-energy tensor for the radiative corrections to the Coulomb potential in both the near-field and far-field approximations. We solve the three field equations in both cases by using a perturbative expansion to first order in alpha (the fine-structure constant) while insisting that the usual (+, +, -, -) structure of the stress-energy tensor is maintained. The resul...

  4. RCrane: semi-automated RNA model building.

    Science.gov (United States)

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  5. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Automated cloning methods.

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed for procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports samples between the stations of the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  7. Modulation of glacier ablation by tephra coverage from Eyjafjallajökull and Grímsvötn volcanoes, Iceland: an automated field experiment

    Science.gov (United States)

    Möller, Rebecca; Möller, Marco; Kukla, Peter A.; Schneider, Christoph

    2018-01-01

    We report results from a field experiment investigating the influence of volcanic tephra coverage on glacier ablation. These influences are known to be significantly different from those of moraine debris on glaciers due to the contrasting grain size distribution and thermal conductivity. Thus far, the influences of tephra deposits on glacier ablation have rarely been studied. For the experiment, artificial plots of two different tephra types from Eyjafjallajökull and Grímsvötn volcanoes were installed on a snow-covered glacier surface of Vatnajökull ice cap, Iceland. Snow-surface lowering and atmospheric conditions were monitored in summer 2015 and compared to a tephra-free reference site. For each of the two volcanic tephra types, three plots of variable thickness (~1.5, ~8.5 and ~80 mm) were monitored. After limiting the records to a period of reliable measurements, a 50-day data set of hourly records was obtained, which can be downloaded from the PANGAEA data repository (https://www.pangaea.de; doi:10.1594/PANGAEA.876656). The experiment shows a substantial increase in snow-surface lowering rates under the ~1.5 and ~8.5 mm tephra plots when compared to uncovered conditions. Under the thick tephra cover some insulating effects could be observed. These results are in contrast to other studies which depicted insulating effects for much thinner tephra coverage on bare-ice glacier surfaces. Differences between the influences of the two different petrological types of tephra exist but are negligible compared to the effect of tephra coverage overall.

  8. Use of a force-sensing automated open field apparatus in a longitudinal study of multiple behavioral deficits in CAG140 Huntington's disease model mice.

    Science.gov (United States)

    Fowler, Stephen C; Muma, Nancy A

    2015-11-01

    Behavioral testing of mouse models of Huntington's disease (HD) is a key component of preclinical assessment for potential pharmacological intervention. An open field with a force plate floor was used to quantify numerous spontaneous behaviors in a slowly progressing model of HD. CAG140 (+/+, +/-, -/-) male and female mice were compared in a longitudinal study from 6 to 65 weeks of age. Distance traveled, wall rears, wall rear duration, number of low mobility bouts, in-place movements, number of high velocity runs, and gait parameters (stride rate, stride length, and velocity) were extracted from the ground reaction forces recorded in 20-min actometer sessions. Beginning at 11 weeks, HD mice (both +/- and +/+) were consistently hypoactive throughout testing. Robust hypoactivity at 39 weeks of age was not accompanied by gait disturbances. By 52 and 65 weeks of age the duration of wall rears increased and in-place tremor-like movements emerged at 65 weeks of age in the +/+, but not in the +/- HD mice. Taken together, these results suggest that hypoactivity preceding frank motor dysfunction is a characteristic of CAG140 mice that may correspond to low motivation to move seen clinically in the premanifest/prediagnostic stage in human HD. The results also show that the force plate method provides a means for tracking the progression of behavioral dysfunction in HD mice beyond the stage when locomotion is lost while enabling quantification of tremor-like and similar in-place behaviors without a change in instrumentation. Use of force plate actometry also minimizes testing-induced enrichment effects when batteries of different tests are carried out longitudinally. Copyright © 2015 Elsevier B.V. All rights reserved.
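
    The measures above are all derived from ground reaction forces, though the abstract gives no formulas. A plausible core step is the force-weighted centre of pressure (COP), whose path length yields distance traveled. A minimal Python sketch under that assumption; the sensor geometry, names and jitter threshold are illustrative, not the instrument's actual design:

      import numpy as np

      # Hypothetical geometry: four vertical-force sensors at the corners
      # of a 0.4 m x 0.4 m actometer floor (coordinates in metres).
      SENSOR_XY = np.array([[0.0, 0.0], [0.4, 0.0], [0.4, 0.4], [0.0, 0.4]])

      def center_of_pressure(forces):
          # forces: (n_samples, 4) vertical force per sensor, in newtons.
          # The COP is the force-weighted mean of the sensor positions.
          total = forces.sum(axis=1, keepdims=True)
          return forces @ SENSOR_XY / total

      def distance_traveled(forces, min_step=0.01):
          # Sum successive COP displacements, ignoring sub-centimetre jitter.
          cop = center_of_pressure(forces)
          steps = np.linalg.norm(np.diff(cop, axis=0), axis=1)
          return steps[steps >= min_step].sum()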

  9. A New Tool for Automated Data Collection and Complete On-site Flux Data Processing for Eddy Covariance Measurements

    Science.gov (United States)

    Begashaw, I. G.; Kathilankal, J. C.; Li, J.; Beaty, K.; Ediger, K.; Forgione, A.; Fratini, G.; Johnson, D.; Velgersdyk, M.; Hupp, J. R.; Xu, L.; Burba, G. G.

    2014-12-01

    The eddy covariance method is widely used for direct measurements of turbulent exchange of gases and energy between the surface and atmosphere. In the past, raw data were collected first in the field and then processed back in the laboratory to achieve fully corrected publication-ready flux results. This post-processing consumed a significant amount of time and resources, and precluded researchers from accessing near real-time final flux results. A new automated measurement system with novel hardware and software designs was developed, tested, and deployed starting late 2013. The major advancements with this automated flux system include: 1) logging high-frequency three-dimensional wind speeds, multiple gas densities (CO2, H2O and CH4), low-frequency meteorological data, and site metadata simultaneously through a specially designed file format; 2) conducting fully corrected, real-time on-site flux computations using conventional as well as user-specified methods, by implementing the EddyPro software on a small low-power microprocessor; and 3) providing precision clock control and coordinate information for data synchronization and inter-site data comparison by incorporating a GPS and the Precision Time Protocol. Along with these innovations, a data management server application was also developed to chart fully corrected real-time fluxes to assist remote system monitoring, to send e-mail alerts, and to automate data QA/QC, transfer and archiving at individual stations or on a network level. The combination of all of these functions is designed to save a substantial amount of time and cost associated with managing a research site by eliminating post-field data processing, reducing user errors and facilitating real-time access to fully corrected flux results. The design, functionality, and test results from this new eddy covariance measurement tool will be presented.
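
    The flux computation at the heart of such a system reduces, before corrections such as coordinate rotation or WPL density adjustment, to the covariance of vertical wind and scalar fluctuations over an averaging block. A minimal sketch of that core step with synthetic stand-in data; names and numbers are illustrative, not the instrument's code:

      import numpy as np

      def eddy_flux(w, c):
          # Reynolds decomposition over one averaging block:
          #   F = mean((w - <w>) * (c - <c>))
          w_prime = w - w.mean()
          c_prime = c - c.mean()
          return np.mean(w_prime * c_prime)

      # Synthetic 30-minute block sampled at 10 Hz (18000 samples).
      rng = np.random.default_rng(0)
      w = rng.normal(0.0, 0.3, 18000)                     # vertical wind, m/s
      c = 700.0 + 5.0 * w + rng.normal(0.0, 1.0, 18000)   # correlated CO2 density
      print(eddy_flux(w, c))                              # positive (upward) flux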

  10. About development of automation control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Wenger, K. G.; Ivushkin, K. A.; Makarov, V. N.

    2018-05-01

    The shortcomings of current approaches to the development of control automation systems are given, together with ways of overcoming them: correct formation of the objects of study and optimization, joint synthesis of control objects and control systems, and an increase in the structural diversity of the elements of control systems. Diagrams of control systems with a purposefully variable structure of their elements are presented. Structures of control algorithms for an object with a purposefully variable structure are given.

  11. Photogrammetric approach to automated checking of DTMs

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2005-01-01

    Geometrically accurate digital terrain models (DTMs) are essential for orthoimage production and many other applications. Collecting reference data or visual inspection are reliable but time-consuming and therefore expensive methods for finding errors in DTMs. In this paper, a photogrammetric approach to automated checking and improving of DTMs is evaluated. Corresponding points in two overlapping orthoimages are found by means of area-based matching. Provided the image orientation is correct, discovered displacements correspond to DTM errors. Improvements of the method regarding its...
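
    The abstract names area-based matching without detailing it; a common choice is normalised cross-correlation (NCC) of a small template searched over a window in the second orthoimage, the residual shift at a matched point indicating a local DTM error. A minimal sketch under that assumption (function names and window sizes are illustrative):

      import numpy as np

      def ncc(a, b):
          # Normalised cross-correlation of two equal-sized patches.
          a = a - a.mean()
          b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def match_displacement(img1, img2, row, col, half=10, search=5):
          # Shift of a (2*half+1)^2 patch of img1 within img2, found by
          # maximising NCC over +/- `search` pixels.
          tpl = img1[row - half:row + half + 1, col - half:col + half + 1]
          best_score, best_shift = -2.0, (0, 0)
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  r, c = row + dr, col + dc
                  win = img2[r - half:r + half + 1, c - half:c + half + 1]
                  score = ncc(tpl, win)
                  if score > best_score:
                      best_score, best_shift = score, (dr, dc)
          return best_shift, best_score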

  12. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation must be considered together when determining the appropriate level of automation, yet existing estimation concepts are limited in that they do not consider the effects of automation on human operators. Thus, in this paper, a new estimation method for the automation rate is suggested to overcome this problem.

  13. Magnetospheric ULF wave studies in the frame of Swarm mission: new advanced tools for automated detection of pulsations in magnetic and electric field observations

    Science.gov (United States)

    Balasis, Georgios; Daglis, Ioannis A.; Papadimitriou, Constantinos; Georgiou, Marina; Giamini, Sigiava A.; Sandberg, Ingmar; Haagmans, Roger

    2014-05-01

    The rekindling of the interest in space science in the last 15 years has led to many successful satellite missions in the Earth's magnetosphere and topside ionosphere, which were able to provide the scientific community with high-quality data on the magnetic and electric fields surrounding our planet. This data pool will be further enriched by the measurements of ESA's Swarm mission, a constellation of three satellites in different polar orbits, flying at altitudes from 400 to 550 km, which was launched on the 22nd of November 2013. Aiming at the best scientific exploitation of this corpus of accumulated data, we have developed a set of analysis tools that can cope with measurements of various spacecraft, at various regions of the magnetosphere and in the topside ionosphere. Our algorithms are based on a combination of wavelet spectral methods and artificial neural network techniques and are suited for the detection of waves and wave-like disturbances as well as the extraction of several physical parameters. Our recent work demonstrates the applicability of our developed analysis tools, both for individual case studies and statistical analysis of ultra low frequency (ULF) waves. We provide evidence for a rare simultaneous observation of a ULF wave event in the Earth's magnetosphere, topside ionosphere and surface: we have found a specific time interval during the Halloween 2003 magnetic storm, when the Cluster and CHAMP spacecraft were in good local time (LT) conjunction, and have examined the ULF wave activity in the Pc3 (22-100 mHz), Pc4 (7-22 mHz) and Pc5 (1-7 mHz) bands using data from the Geotail, Cluster and CHAMP missions, as well as the CARISMA, GIMA and IMAGE magnetometer networks. Our study shows that the same wave event, characterized by increased activity in the high end of the Pc3 band, was simultaneously observed by all three satellite missions and by certain stations of ground networks. This observation provides a strong argument in favour of the
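
    The detection tools described combine wavelet spectra with neural-network classification; as a much-simplified stand-in for the detection step, the sketch below flags windows of a 1 Hz magnetometer trace whose Fourier band power in the Pc3 range (22-100 mHz) rises well above the background. Window length and threshold factor are illustrative assumptions, not the authors' settings:

      import numpy as np

      def band_power(x, fs, f_lo, f_hi):
          # Mean FFT power of x between f_lo and f_hi (Hz).
          freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
          psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
          sel = (freqs >= f_lo) & (freqs <= f_hi)
          return psd[sel].mean()

      def detect_pc3(b, fs=1.0, win=600, factor=5.0):
          # Flag 10-minute windows whose Pc3-band (22-100 mHz) power
          # exceeds `factor` times the median window power.
          n = b.size // win
          p = np.array([band_power(b[i * win:(i + 1) * win], fs, 0.022, 0.100)
                        for i in range(n)])
          return p > factor * np.median(p)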

  14. Bright-field in situ hybridization for HER2 gene amplification in breast cancer using tissue microarrays: correlation between chromogenic (CISH) and automated silver-enhanced (SISH) methods with patient outcome.

    Science.gov (United States)

    Francis, Glenn D; Jones, Mark A; Beadle, Geoffrey F; Stein, Sandra R

    2009-06-01

    with immunohistochemistry results and with breast cancer-specific survival. HER2 SISH testing combines the advantages of automation and bright-field microscopy to facilitate workflow within the laboratory, improves turnaround time, and correlates with patient outcome.

  15. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Havlůj, F.; Hejzlar, J.; Vočka, R.

    2013-01-01

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify, that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces his task to defining fuel pin types, enrichments, assembly maps and operational parameters all through a very nice and user-friendly GUI. The second part in reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable html format. Using this set of tools for reload safety analysis simplifies

  16. FAST AUTOMATED DECOUPLING AT RHIC

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.J.

    2005-01-01

    Coupling correction is essential for the operational performance of RHIC. The independence of the transverse degrees of freedom makes diagnostics and tune control easier, and it is advantageous to operate an accelerator close to the coupling resonance to minimize nearby nonlinear sidebands. An automated coupling correction application, iDQmini, has been developed for RHIC routine operations. The application decouples RHIC globally by minimizing the tune separation through finding the optimal settings of two orthogonal skew quadrupole families. The program iDQmini provides options for automatic, semi-automatic and manual decoupling operations. It accesses tune information from all RHIC tune measurement systems: the PLL (phase-locked loop), the high-frequency Schottky system and the tune meter. It also performs tune and skew quadrupole scans, finds the minimum tune separation, displays the real-time results and interfaces with the RHIC control system. We summarize the capabilities of the coupling correction application iDQmini, and discuss the operational protections incorporated in the program.
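
    The abstract states the strategy (minimise the tune split over two orthogonal skew-quadrupole families) but not the algorithm. As an illustration of the idea only, the sketch below does a brute-force grid scan with a stand-in for the machine's tune measurement; the names, scan ranges and strategy are assumptions, not iDQmini's implementation:

      import numpy as np

      def decouple(measure_tunes, k_range=np.linspace(-0.01, 0.01, 21)):
          # measure_tunes(k1, k2) stands in for setting the two skew-quad
          # family strengths and reading back the tunes (q1, q2) from the
          # PLL, Schottky system or tune meter.
          best_k1, best_k2, best_dq = None, None, np.inf
          for k1 in k_range:
              for k2 in k_range:
                  q1, q2 = measure_tunes(k1, k2)
                  dq = abs(q1 - q2)
                  if dq < best_dq:
                      best_k1, best_k2, best_dq = k1, k2, dq
          return best_k1, best_k2, best_dq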

  17. Corrected body surface potential mapping.

    Science.gov (United States)

    Krenzke, Gerhard; Kindt, Carsten; Hetzer, Roland

    2007-02-01

    In the method for body surface potential mapping described here, the influence of thorax shape on measured ECG values is corrected. The distances of the ECG electrodes from the electrical heart midpoint are determined using a special device for ECG recording. These distances are used to correct the ECG values as if they had been measured on the surface of a sphere with a radius of 10 cm whose midpoint is located at the electrical heart midpoint. The equipotential lines of the electrical heart field are represented on the virtual surface of such a sphere. It is demonstrated that the character of a dipole field is better represented if the influence of the thorax shape is reduced. The site of the virtual reference electrode is also important for the dipole character of the representation of the electrical heart field.
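
    The abstract does not give the correction formula. A minimal sketch, assuming a pure current-dipole source in a uniform medium (so that the surface potential falls off as 1/r^2), rescales each electrode's reading onto the 10 cm reference sphere; the exponent would change for a different source model:

      R_REF = 0.10  # radius of the virtual reference sphere, metres

      def correct_potential(v_measured, r_electrode):
          # Rescale a potential measured at distance r_electrode (m) from
          # the electrical heart midpoint onto the 10 cm sphere, assuming
          # a dipole potential proportional to 1/r^2 (an assumption, not
          # the paper's stated formula).
          return v_measured * (r_electrode / R_REF) ** 2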

  18. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator-dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors in the correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
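
    Water reference deconvolution exploits the fact that the measured water signal carries the same instrumental lineshape distortion as the metabolite signal: dividing the metabolite FID by the water FID in the time domain and re-applying an ideal target decay removes the common distortion. A minimal sketch of this idea; the Gaussian target lineshape and regularisation are illustrative, not the paper's exact processing:

      import numpy as np

      def water_reference_deconvolve(metab_fid, water_fid, t, sigma=3.0):
          # Divide out the measured water envelope and impose an ideal
          # Gaussian decay as the corrected lineshape.
          ideal = np.exp(-(sigma * t) ** 2)
          eps = 1e-6 * np.abs(water_fid).max()  # guard against division by ~0
          return metab_fid * ideal / (water_fid + eps)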

  19. Quantum corrections to inflaton and curvaton dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@nbi.dk [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2012-11-01

    We compute the fully renormalized one-loop effective action for two interacting and self-interacting scalar fields in FRW space-time. We then derive and solve the quantum-corrected equations of motion both for fields that dominate the energy density (such as an inflaton) and fields that do not (such as a subdominant curvaton). In particular, we introduce quantum-corrected Friedmann equations that determine the evolution of the scale factor. We find that in general, gravitational corrections are negligible for the field dynamics. For the curvaton-type fields this leaves only the effect of the flat-space Coleman-Weinberg-type effective potential, and we find that these can be significant. For the inflaton case, both the corrections to the potential and the Friedmann equations can lead to behaviour very different from the classical evolution, even to the point that inflation, although present at tree level, can be absent at one-loop order.
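
    The flat-space Coleman-Weinberg-type potential referred to here has, for a tree-level potential V(phi), the standard one-loop form below (MS-bar scheme); it is quoted for orientation only, and the paper's precise renormalised expressions may differ:

      V_{\mathrm{eff}}(\phi) = V(\phi)
          + \sum_i \frac{n_i\, m_i^4(\phi)}{64\pi^2}
            \left[ \ln\frac{m_i^2(\phi)}{\mu^2} - \frac{3}{2} \right]

    where m_i(phi) are the field-dependent masses, n_i count the degrees of freedom, and mu is the renormalisation scale.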

  20. Brane cosmology with curvature corrections

    International Nuclear Information System (INIS)

    Kofinas, Georgios; Maartens, Roy; Papantonopoulos, Eleftherios

    2003-01-01

    We study the cosmology of the Randall-Sundrum brane-world where the Einstein-Hilbert action is modified by curvature correction terms: a four-dimensional scalar curvature from induced gravity on the brane, and a five-dimensional Gauss-Bonnet curvature term. The combined effect of these curvature corrections to the action removes the infinite-density big bang singularity, although the curvature can still diverge for some parameter values. A radiation brane undergoes accelerated expansion near the minimal scale factor, for a range of parameters. This acceleration is driven by the geometric effects, without an inflaton field or negative pressures. At late times, conventional cosmology is recovered. (author)
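
    Schematically, the action under study combines a bulk Einstein-Hilbert term, a bulk Gauss-Bonnet term and an induced-gravity term on the brane. The form below is the standard combined action, reproduced for orientation only; signs, couplings and normalisations vary between conventions and may differ from the paper's:

      S = \frac{1}{2\kappa_5^2} \int d^5x \sqrt{-g}
              \left[ R + \alpha \left( R^2 - 4 R_{AB} R^{AB}
                     + R_{ABCD} R^{ABCD} \right) \right]
          + \frac{r_c}{2\kappa_5^2} \int d^4x \sqrt{-h}\, \tilde{R}
          - \int d^4x \sqrt{-h} \left( \lambda + L_{\mathrm{m}} \right)

    with alpha the Gauss-Bonnet coupling, r_c the induced-gravity crossover scale, lambda the brane tension and L_m the brane matter Lagrangian.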

  1. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  2. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  3. Order Division Automated System.

    Science.gov (United States)

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  4. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they do not always succeed in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  5. Automation and robotics

    Science.gov (United States)

    Montemerlo, Melvin

    1988-01-01

    The Autonomous Systems focus on the automation of control systems for the Space Station and mission operations. Telerobotics focuses on automation for in-space servicing, assembly, and repair. The Autonomous Systems and Telerobotics each have a planned sequence of integrated demonstrations showing the evolutionary advance of the state-of-the-art. Progress is briefly described for each area of concern.

  6. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  7. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    Science.gov (United States)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was less for auto-segmentation without versus auto-segmentation with post-processing, indicating convergence on a manually corrected model. A marginal relative difference between electric field maps from models with and without manual correction was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in
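
    The segmentation comparison rests on the Dice coefficient, the overlap of two binary masks defined as 2|A ∩ B| / (|A| + |B|). A minimal sketch of the generic formula (not the authors' code):

      import numpy as np

      def dice(mask_a, mask_b):
          # Dice coefficient of two boolean segmentation masks.
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0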

  8. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view (183 light microscope images containing 255 diatom particles were examined; 216 diatom valves and fragments of valves were processed, of which 170 were properly analyzed and focused upon by the software). Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting an approximately five-fold efficiency advantage of the software in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has the potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
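
    VisualSpreadsheet's internal algorithm is proprietary and not described here; the generic core of such particle analysis is a threshold-and-label pass over the image. A minimal sketch with scipy, in which the threshold and minimum-area values are illustrative assumptions:

      import numpy as np
      from scipy import ndimage

      def count_particles(image, threshold, min_area=50):
          # Segment dark particles from a bright-field image by global
          # thresholding, then count connected components >= min_area px.
          mask = image < threshold          # diatoms darker than background
          labels, n = ndimage.label(mask)
          if n == 0:
              return 0
          areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          return int((areas >= min_area).sum())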

  9. MARC and the Library Service Center: Automation at Bargain Rates.

    Science.gov (United States)

    Pearson, Karl M.

    Despite recent research and development in the field of library automation, libraries have been unable to reap the benefits promised by technology due to the high cost of building and maintaining their own computer-based systems. Time-sharing and disc mass storage devices will bring automation costs, if spread over a number of users, within the…

  10. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  11. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy.

  12. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.
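
    As a toy illustration of the principle behind such codes (far simpler than the Polar and non-binary LDPC codes the book covers), the classic Hamming(7,4) code corrects any single-bit error: the syndrome of a received word is either zero or equal to the column of the parity-check matrix at the flipped position. A self-contained sketch:

      import numpy as np

      # Generator and parity-check matrices of the (7,4) Hamming code, mod 2.
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])

      def encode(bits4):
          return (bits4 @ G) % 2

      def correct(word7):
          # Flip the single bit (if any) indicated by the syndrome.
          syndrome = (H @ word7) % 2
          if syndrome.any():
              pos = next(i for i in range(7)
                         if np.array_equal(H[:, i], syndrome))
              word7 = word7.copy()
              word7[pos] ^= 1
          return word7

      sent = encode(np.array([1, 0, 1, 1]))
      received = sent.copy()
      received[2] ^= 1                       # one bit flipped in transit
      assert np.array_equal(correct(received), sent)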

  13. Automated borehole gravity meter system

    International Nuclear Information System (INIS)

    Lautzenhiser, Th.V.; Wirtz, J.D.

    1984-01-01

    An automated borehole gravity meter system for measuring gravity within a wellbore. The gravity meter includes leveling devices for leveling the borehole gravity meter, displacement devices for applying forces to a gravity sensing device within the gravity meter to bring the gravity sensing device to a predetermined or null position. Electronic sensing and control devices are provided for (i) activating the displacement devices, (ii) sensing the forces applied to the gravity sensing device, (iii) electronically converting the values of the forces into a representation of the gravity at the location in the wellbore, and (iv) outputting such representation. The system further includes electronic control devices with the capability of correcting the representation of gravity for tidal effects, as well as, calculating and outputting the formation bulk density and/or porosity
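
    The abstract mentions computing formation bulk density from the corrected gravity readings. The standard borehole-gravity relation, stated here as background rather than as the patent's exact method, derives the interval density from the difference between the free-air gradient and the observed vertical gravity gradient:

      # Standard borehole-gravity interval density:
      #   rho = (F - dg/dz) / (4*pi*G)
      FREE_AIR = 0.3086      # free-air vertical gradient, mGal/m
      FOUR_PI_G = 0.08385    # 4*pi*G in mGal/m per (g/cm^3)

      def bulk_density(dg_mgal, dz_m):
          # Apparent bulk density (g/cm^3) between two stations dz_m metres
          # apart whose tide- and drift-corrected readings differ by dg_mgal.
          return (FREE_AIR - dg_mgal / dz_m) / FOUR_PI_G

      print(bulk_density(2.0, 10.0))   # ~1.3 g/cm^3 for 2 mGal over 10 m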

  14. Automated system of monitoring and positioning of functional units of mining technological machines for coal-mining enterprises

    Directory of Open Access Journals (Sweden)

    Meshcheryakov Yaroslav

    2018-01-01

    Full Text Available This article is devoted to the development of an automated monitoring and positioning system for functional nodes of mining technological machines. It describes the structure, the element base, and algorithms for identifying the operating states of a walking excavator; the various types of errors in the functioning of microelectromechanical gyroscopes and accelerometers; and methods for their correction based on the Madgwick fusion filter. The results of industrial tests of the automated monitoring and positioning system for functional units at one of the opencast coal mines of Kuzbass are presented. This work is addressed to specialists working in the fields of embedded systems and control systems, radio electronics, mechatronics, and robotics.
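
    The Madgwick filter itself is a quaternion gradient-descent scheme; as a compact stand-in showing the same drift-correction idea in one axis, a complementary filter blends the integrated gyroscope rate (accurate short-term, drifting long-term) with the accelerometer tilt (noisy short-term, unbiased long-term). Parameter values are illustrative:

      import math

      def complementary_pitch(pitch, gyro_rate, acc_x, acc_z, dt, alpha=0.98):
          # One filter update: integrate the gyro rate, then pull the
          # estimate toward the tilt angle implied by the gravity vector.
          acc_pitch = math.atan2(-acc_x, acc_z)
          return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * acc_pitch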

  15. NWS Corrections to Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  16. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... more surgeries depending on the extent of the repair needed. ...

  17. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... Corrective jaw, or orthognathic surgery is performed by ... your treatment. Correction of Common Dentofacial Deformities. The information provided here is not intended as a substitute ...

  18. Automated Comparative Auditing of NCIT Genomic Roles Using NCBI

    Science.gov (United States)

    Cohen, Barry; Oren, Marc; Min, Hua; Perl, Yehoshua; Halper, Michael

    2008-01-01

    Biomedical research has identified many human genes and various knowledge about them. The National Cancer Institute Thesaurus (NCIT) represents such knowledge as concepts and roles (relationships). Due to the rapid advances in this field, it is to be expected that the NCIT’s Gene hierarchy will contain role errors. A comparative methodology to audit the Gene hierarchy with the use of the National Center for Biotechnology Information’s (NCBI’s) Entrez Gene database is presented. The two knowledge sources are accessed via a pair of Web crawlers to ensure up-to-date data. Our algorithms then compare the knowledge gathered from each, identify discrepancies that represent probable errors, and suggest corrective actions. The primary focus is on two kinds of gene-roles: (1) the chromosomal locations of genes, and (2) the biological processes in which genes play a role. Regarding chromosomal locations, the discrepancies revealed are striking and systematic, suggesting a structurally common origin. In regard to the biological processes, difficulties arise because genes frequently play roles in multiple processes, and processes may have many designations (such as synonymous terms). Our algorithms make use of the roles defined in the NCIT Biological Process hierarchy to uncover many probable gene-role errors in the NCIT. These results show that automated comparative auditing is a promising technique that can identify a large number of probable errors and corrections for them in a terminological genomic knowledge repository, thus facilitating its overall maintenance. PMID:18486558
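
    The paper's algorithms are more elaborate, but the heart of the chromosomal-location audit is a record-by-record comparison of the two harvested knowledge sources. A minimal sketch with hypothetical data (the dictionary format and gene entries are illustrative):

      def audit_chromosome_locations(ncit_loc, entrez_loc):
          # Report genes whose NCIT location disagrees with Entrez Gene,
          # suggesting the Entrez value as the probable correction.
          discrepancies = {}
          for gene, loc in ncit_loc.items():
              ref = entrez_loc.get(gene)
              if ref is not None and ref != loc:
                  discrepancies[gene] = {"ncit": loc, "suggested": ref}
          return discrepancies

      print(audit_chromosome_locations(
          {"BRCA1": "17q21", "TP53": "17p13.1"},
          {"BRCA1": "17q21.31", "TP53": "17p13.1"}))
      # -> {'BRCA1': {'ncit': '17q21', 'suggested': '17q21.31'}}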

  19. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that point to the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or individual tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128