WorldWideScience

Sample records for source imaging methods

  1. Combination of acoustical radiosity and the image source method

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho

    2013-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...
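The specular half of such a combined model can be illustrated with a minimal sketch of the classic image source method, here reduced to the 1-D case of two parallel walls. The wall spacing, source/receiver positions and speed of sound below are illustrative assumptions, not values from the paper.

```python
# Image source method, 1-D case: mirror the source repeatedly across two
# parallel walls at x = 0 and x = Lx, then read off specular arrival times.

def image_sources_1d(Lx, xs, order):
    """Positions of mirror-image sources for walls at x=0 and x=Lx."""
    imgs = set()
    for n in range(-order, order + 1):
        imgs.add(2 * n * Lx + xs)  # even number of reflections
        imgs.add(2 * n * Lx - xs)  # odd number of reflections
    return sorted(imgs)

def arrival_times(Lx, xs, xr, order, c=343.0):
    """Arrival time of each specular reflection at receiver xr."""
    return sorted(abs(i - xr) / c for i in image_sources_1d(Lx, xs, order))

imgs = image_sources_1d(5.0, 1.0, 1)      # walls 5 m apart, source at 1 m
times = arrival_times(5.0, 1.0, 4.0, 1)   # receiver at 4 m
```

In the combined model, these discrete specular arrivals would be complemented by the diffuse energy exchange computed with acoustical radiosity.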

  2. Localization of sources of the hyperinsulinism through the image methods

    Abath, C.G.A.

    1990-01-01

    Pancreatic insulinomas are small tumours that manifest early through high hormonal production. Microscopic changes, such as islet cell hyperplasia or nesidioblastosis, are also sources of hyperinsulinism. Pre-operative localization of the lesions is important to avoid unnecessary or insufficient blind pancreatectomies. We present our experience with 26 patients with hyperinsulinism, of whom six were examined by ultrasound, nine by computed tomography, 25 by angiography and 16 by pancreatic venous sampling for hormone assay, in order to localize the lesions. Percutaneous transhepatic portal and pancreatic vein catheterization with measurement of insulin concentrations was the most reliable and sensitive method for detecting the lesions, including those not palpable during surgical exploration. (author)

  3. Feasibility study on X-ray source with pinhole imaging method

    Qiu Rui; Li Junli

    2007-01-01

    To verify the feasibility of studying an X-ray source with the pinhole imaging method, and to optimize the design of an X-ray pinhole imaging system, an X-ray pinhole imaging set-up was built. The change of the image due to changes in the position and intensity of the X-ray source was estimated mathematically and validated by experiment. The results show that the position and gray level of the image spot are linearly related to the position and intensity of the X-ray source, so it is feasible to study an X-ray source with the pinhole imaging method in this application. The results provide references for the design of X-ray pinhole imaging systems. (authors)
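The linearity verified in this study follows from ideal pinhole geometry, which can be sketched as below; the source-side and image-side distances are illustrative assumptions.

```python
# Ideal pinhole projection: a source offset x_src maps to an image spot
# at -x_src * (d_img / d_src), i.e. the spot position is linear in the
# source position with slope equal to the (negative) magnification.

def spot_position(x_src, d_src, d_img):
    """Image-plane spot position for a point source offset x_src."""
    return -x_src * d_img / d_src

x1 = spot_position(0.5, 1.0, 2.0)  # source offset 0.5, |M| = 2
x2 = spot_position(1.0, 1.0, 2.0)  # doubling the offset doubles the spot shift
```

Spot brightness scales linearly with source intensity for the same reason: each source photon has a fixed probability of passing the aperture.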

  4. Microseismic imaging using a source-independent full-waveform inversion method

    Wang, Hanchen

    2016-09-06

    Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces severe nonlinearity due to the unknown source location (space) and source function (time). We develop a source-independent FWI of microseismic events to invert for the source image, the source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the z axis is extracted to check the accuracy of the inverted source image and velocity model, and angle gathers are computed to assess whether the velocity model is correct. By inverting for the source image, source wavelet and velocity model together, the proposed method produces good estimates of the source location, ignition time and background velocity for part of the SEG overthrust model.
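The convolution trick that removes the unknown source function can be sketched with toy traces: writing each trace as a Green's function convolved with a wavelet, the observed data convolved with a modeled reference trace equals the modeled data convolved with the observed reference trace whenever the modeled Green's functions are correct, for any wavelets. All signals below are synthetic assumptions.

```python
import numpy as np

# d = g * s: each trace is the Green's function g convolved with the
# wavelet s. Then d_obs * d_syn_ref = (g*s_true)*(g_ref*s_guess) equals
# d_syn * d_obs_ref = (g*s_guess)*(g_ref*s_true) by commutativity of
# convolution, regardless of either wavelet.
rng = np.random.default_rng(0)
g = rng.standard_normal(32)          # Green's function of the trace
g_ref = rng.standard_normal(32)      # Green's function of the reference trace
s_true = rng.standard_normal(8)      # unknown true source wavelet
s_guess = rng.standard_normal(8)     # wrong wavelet used in modeling

d_obs = np.convolve(g, s_true)           # observed trace
d_obs_ref = np.convolve(g_ref, s_true)   # observed reference trace
d_syn = np.convolve(g, s_guess)          # modeled trace (wrong wavelet)
d_syn_ref = np.convolve(g_ref, s_guess)  # modeled reference trace

lhs = np.convolve(d_obs, d_syn_ref)
rhs = np.convolve(d_syn, d_obs_ref)
residual = np.max(np.abs(lhs - rhs))  # ~0: misfit is wavelet-independent
```

A misfit built from `lhs - rhs` therefore penalizes only errors in the Green's functions (i.e. in the velocity model and source image), not the unknown ignition time or wavelet.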

  6. Micro-seismic imaging using a source function independent full waveform inversion method

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the location of micro-seismic sources, as well as their ignition times. The accuracy of locating the sources depends strongly on the velocity model. Conventional micro-seismic source locating methods, on the other hand, often require manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, the source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model, and angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those based on the Marmousi model and the SEG/EAGE overthrust model.

  8. A novel method for detecting light source for digital images forensic

    Roy, A. K.; Mitra, S. K.; Agrawal, R.

    2011-06-01

    Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its own limitations, and very few methods try to capitalize on the way the image was captured by the camera. We propose a new method based on light and its shade, since light and shade are the fundamental input resources that carry the information of the image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation of a digital image. The method was tested on known manipulated images and correctly identifies the light sources. The light source direction of an image is measured as an angle. The experimental results show the robustness of the methodology.
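A minimal sketch of the underlying idea (not necessarily the authors' exact algorithm): under a Lambertian shading model, pixel intensity is proportional to the dot product of the surface normal and the light direction, so a 2-D light angle can be recovered by least squares from normals and intensities. The normals and the true light angle below are synthetic assumptions.

```python
import numpy as np

# Lambertian model: I = rho * max(n . L, 0). Along an object's occluding
# contour the normals n are known, so the light direction L is the
# least-squares solution of N @ L = I over the lit normals. Inconsistent
# estimated angles across objects in one image suggest compositing.
true_angle = np.deg2rad(30.0)
L_true = np.array([np.cos(true_angle), np.sin(true_angle)])

angles = np.linspace(0, np.pi, 50)           # contour normal directions
N = np.stack([np.cos(angles), np.sin(angles)], axis=1)
I = np.clip(N @ L_true, 0, None)             # synthetic shaded intensities

mask = I > 0                                 # only lit normals constrain L
L_est, *_ = np.linalg.lstsq(N[mask], I[mask], rcond=None)
est_angle = np.degrees(np.arctan2(L_est[1], L_est[0]))
```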

  9. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  10. Micro-seismic Imaging Using a Source Independent Waveform Inversion Method

    Wang, Hanchen

    2016-04-18

    Micro-seismology is attracting more and more attention in the exploration seismology community. The main goal in micro-seismic imaging is to find the source location and ignition time in order to track fracture expansion, which helps engineers monitor reservoirs. Conventional imaging methods work in this field, but they have many limitations, such as manual picking, incorrect migration velocity and low signal-to-noise ratio (S/N). In traditional surface-survey imaging, full waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield and overcomes the difficulties of manual picking and an incorrect migration velocity model. However, waveform inversion of micro-seismic events faces its own problems: there is significant nonlinearity due to the unknown source location (space) and function (time). We have developed a source-independent FWI of micro-seismic events to simultaneously invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. To examine the accuracy of the inverted source image and velocity model, the extended image for the source wavelet along the z-axis is extracted. Also, the angle gather is calculated to check the applicability of the migration velocity. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity in synthetic experiments with both parts of the Marmousi and the SEG

  11. A combination of the acoustic radiosity and the image source method

    Koutsouris, Georgios I.; Brunskog, Jonas; Jeong, Cheol-Ho

    2012-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  12. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using an L1 minimum norm (Fast-VESTAL) and then used the method to obtain source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images are obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution are obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions, including SNRs with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with human MEG responses, the results obtained from the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leakage and distorted source time-courses. © 2013.
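The L1-minimum-norm step at the core of such methods can be sketched with a toy underdetermined problem solved by ISTA (a stand-in solver, not Fast-VESTAL itself); the lead field and source configuration below are synthetic assumptions.

```python
import numpy as np

# Sparse source recovery: min_x 0.5*||A x - b||^2 + lam*||x||_1, where A
# is a (sensors x candidate-sources) lead field. Solved by ISTA: a
# gradient step on the data fit followed by soft-thresholding.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))      # 20 sensors, 50 candidate sources
x_true = np.zeros(50)
x_true[[5, 30]] = [2.0, -1.5]          # two active sources
b = A @ x_true                         # noiseless sensor measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad
x = np.zeros(50)
for _ in range(500):                    # ISTA iterations
    z = x - step * A.T @ (A @ x - b)    # gradient step on data fit
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink

support = np.flatnonzero(np.abs(x) > 0.5)  # recovered active sources
```

Unlike an L2 minimum norm, the L1 penalty drives inactive candidate sources to exactly zero, which is why it suits focal source imaging.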

  13. Comparing a phased combination of acoustical radiosity and the image source method with other simulation tools

    Marbjerg, Gerd Høy; Brunskog, Jonas; Jeong, Cheol-Ho

    2015-01-01

    A phased combination of acoustical radiosity and the image source method (PARISM) has been developed in order to be able to model both specular and diffuse reflections with angle-dependent and complex-valued acoustical descriptions of the surfaces. It is of great interest to model both specular...

  14. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and
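Plain delay-and-stack back-projection, the skeleton of such a technique (without the correlation and polarization filtering described above), can be sketched in one dimension; the geometry and velocity below are synthetic assumptions.

```python
import numpy as np

# Delay-and-stack back-projection: for each candidate source point,
# shift every receiver trace by its predicted travel time and sum; the
# stack is brightest where the shifts align the arrivals, i.e. at the
# true source.
c = 1500.0                                # assumed constant velocity, m/s
dt = 0.001                                # sample interval, s
rx = np.array([0.0, 20.0, 40.0, 60.0])    # receiver positions, m
src = 25.0                                # true source position, m

nt = 400
traces = np.zeros((len(rx), nt))
for i, r in enumerate(rx):                # impulsive arrival per receiver
    traces[i, int(round(abs(r - src) / c / dt))] = 1.0

grid = np.arange(0.0, 61.0, 1.0)          # candidate source positions
stack = np.zeros(len(grid))
for j, g in enumerate(grid):
    for i, r in enumerate(rx):
        k = int(round(abs(r - g) / c / dt))  # predicted arrival sample
        stack[j] += traces[i, k]

best = grid[np.argmax(stack)]             # brightest point = source estimate
```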

  15. Exploring three faint source detections methods for aperture synthesis radio images

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.
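The classical thresholding baseline these methods are compared against can be sketched as follows: estimate the noise level robustly and flag pixels above a k-sigma threshold. The image, noise level and source positions below are synthetic assumptions.

```python
import numpy as np

# Threshold detection on a synthetic radio map: Gaussian background
# noise plus two bright point sources; noise sigma is estimated robustly
# via the median absolute deviation (MAD), then 5-sigma pixels are
# flagged. Faint sources near the threshold are what such a scheme misses.
rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (64, 64))   # background noise, sigma = 1
img[10, 12] += 20.0                    # bright point source 1
img[40, 50] += 15.0                    # bright point source 2

# robust noise estimate: 1.4826 * MAD approximates sigma for Gaussians
sigma = 1.4826 * np.median(np.abs(img - np.median(img)))
detections = np.argwhere(img > 5.0 * sigma)   # 5-sigma detections
```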

  16. Advanced neutron imaging methods with a potential to benefit from pulsed sources

    Strobl, M.; Kardjilov, N.; Hilger, A.; Penumadu, D.; Manke, I.

    2011-01-01

    During the last decade neutron imaging has seen significant improvements in instrumentation, detection and spatial resolution. Additionally, a variety of new applications and methods have been explored. As a consequence of an outstanding development nowadays various techniques of neutron imaging go far beyond a two- and three-dimensional mapping of the attenuation coefficients for a broad range of samples. Neutron imaging has become sensitive to neutron scattering in the small angle scattering range as well as with respect to Bragg scattering. Corresponding methods potentially provide spatially resolved and volumetric data revealing microstructural inhomogeneities, texture variations, crystalline phase distributions and even strains in bulk samples. Other techniques allow for the detection of refractive index distribution through phase sensitive measurements and the utilization of polarized neutrons enables radiographic and tomographic investigations of magnetic fields and properties as well as electrical currents within massive samples. All these advanced methods utilize or depend on wavelength dependent signals, and are hence suited to profit significantly from pulsed neutron sources as will be discussed.

  17. Description and validation of a combination of acoustical radiosity and the image source method

    Marbjerg, Gerd Høy; Jeong, Cheol-Ho; Brunskog, Jonas

    2014-01-01

    A model that combines image source modelling and acoustical radiosity with complex boundary conditions, thus including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Model (PARISM). It has been developed in order to be able... to model both specular and diffuse reflections with complex-valued acoustical descriptions of the surfaces. This paper mainly describes the combination of the two models and the implementation of the angle dependent surface descriptions both in the image source model and in acoustical radiosity...

  18. An advanced boundary element method (BEM) implementation for the forward problem of electromagnetic source imaging

    Akalin-Acar, Zeynep; Gencer, Nevzat G

    2004-01-01

    The forward problem of electromagnetic source imaging has two components: a numerical model to solve the related integral equations and a model of the head geometry. This study is on the boundary element method (BEM) implementation for numerical solutions and realistic head modelling. The use of second-order (quadratic) isoparametric elements and the recursive integration technique increases the accuracy of the solutions. Two new formulations are developed for the calculation of the transfer matrices to obtain the potential and magnetic field patterns using realistic head models. The formulations incorporate the isolated problem approach for increased accuracy in solutions. If a personal computer is used for computations, each transfer matrix is calculated in 2.2 h. After this pre-computation period, solutions for arbitrary source configurations can be obtained in milliseconds for a realistic head model. A hybrid algorithm that uses snakes, morphological operations, region growing and thresholding is used for segmentation. The scalp, skull, grey matter, white matter and eyes are segmented from the multimodal magnetic resonance images, and meshes for the corresponding surfaces are created. A mesh generation algorithm is developed for modelling intersecting tissue compartments, such as the eyes. To obtain more accurate results, quadratic elements are used in the realistic meshes. The resultant BEM implementation provides more accurate forward problem solutions and more efficient calculations. Thus it can serve as a firm basis for future inverse problem solutions

  19. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
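The figure of merit quoted above, the root mean square of the systematic error over all inserts, is computed as sketched below; the HU values are illustrative assumptions, not the study's data.

```python
import math

# RMS systematic error: for each tissue-equivalent insert, take the
# difference between the measured and true virtual monochromatic CT
# number, then take the root mean square over all inserts.
true_hu =     [-1000.0, -100.0, 0.0, 50.0, 200.0, 800.0]   # assumed truths
measured_hu = [ -995.0, -108.0, 4.0, 47.0, 210.0, 812.0]   # assumed readings

errors = [m - t for m, t in zip(measured_hu, true_hu)]     # systematic error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors)) # RMS over inserts
```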

  20. Micro-seismic imaging using a source function independent full waveform inversion method

    Wang, Hanchen; Alkhalifah, Tariq Ali

    2018-01-01

    hand, the conventional micro-seismic source locating methods require, in many cases manual picking of traveltime arrivals, which do not only lead to manual effort and human interaction, but also prone to errors. Using full waveform inversion (FWI

  1. Auralisations with loudspeaker arrays from a phased combination of the image source method and acoustical radiosity

    Marbjerg, Gerd Høy

    2017-01-01

    In order to create a simulation tool that is well-suited for small rooms with low diffusion and highly absorbing ceilings, a new room acoustic simulation tool has been developed that combines a phased version of the image source method with acoustical radiosity and that considers the angle dependence... impulse response, because more directional information is available with acoustical radiosity. Small rooms with absorbing surfaces are tested, because this is the room type for which PARISM is particularly useful...

  2. Auralizations with loudspeaker arrays from a phased combination of the image source method and acoustical radiosity

    Marbjerg, Gerd Høy; Brunskog, Jonas; Jeong, Cheol-Ho

    2017-01-01

    In order to create a simulation tool that is well-suited for small rooms with low diffusion and highly absorbing ceilings, a new room acoustic simulation tool has been developed that combines a phased version of the image source method with acoustical radiosity and that considers the angle dependence... of the PARISM impulse response, because more directional information is available with acoustical radiosity. Small rooms with absorbing surfaces are tested, because this is the room type for which PARISM is particularly useful...

  3. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Kim HyungTae

    2015-01-01

    Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single-color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce it. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality with a mixed light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N driving-voltage inputs and one image-quality output. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
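The steepest-descent variant can be sketched as gradient ascent on an image-quality score over the N channel driving voltages; the quadratic quality model below is a stand-in assumption for a real camera-in-the-loop measurement.

```python
import numpy as np

# Steepest ascent on image quality Q(v) over N = 3 driving voltages.
# In a real system Q is only available as a measurement, so the gradient
# is taken by finite differences of measured values.
v_best = np.array([3.0, 1.5, 4.0])   # unknown optimum (assumed R, G, B)

def quality(v):
    """Toy image-quality score, maximal at v_best (assumed model)."""
    return -np.sum((v - v_best) ** 2)

def grad(v, h=1e-4):
    """Central finite-difference gradient of the measured quality."""
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = h
        g[i] = (quality(v + e) - quality(v - e)) / (2 * h)
    return g

v = np.zeros(3)
for _ in range(100):                  # steepest ascent iterations
    v = v + 0.1 * grad(v)
```

Because each step follows the local gradient rather than scanning every channel on an equal-step grid, the iteration count stays modest even as the number of channels N grows.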

  4. An intelligent despeckling method for swept source optical coherence tomography images of skin

    Adabi, Saba; Mohebbikarkhoran, Hamed; Mehregan, Darius; Conforto, Silvia; Nasiriavanaki, Mohammadreza

    2017-03-01

    Optical coherence tomography (OCT) is a powerful high-resolution imaging method with broad biomedical application. Nonetheless, OCT images suffer from a multiplicative artefact called speckle, a result of the coherent imaging process. Digital filters have become a ubiquitous means of speckle reduction. Since there is still room for improvement in OCT despeckling, we propose an intelligent speckle reduction framework based on morphological, textural and optical features of the OCT tissue image, in which a trained network selects the filter that adaptively suppresses the speckle noise while preserving the structural information of the OCT signal. These features are calculated at different steps of the procedure and used by the designed artificial neural network decider, which selects the best denoising technique for each segment of the image. The training results show that the dominant filter is BM3D, from the last category.

  5. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method, and to make automatic, robust delineation of image regions of interest (ROIs) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet, slab surfaces and the like. In this work we take into account the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for the ROI definition, by which the boundary region can be delineated through an RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.

  6. Micro-seismic Imaging Using a Source Independent Waveform Inversion Method

    Wang, Hanchen

    2016-01-01

    waveform inversion (FWI) is widely used. The FWI method updates the velocity model by minimizing the misfit between the observed data and the predicted data. Using FWI to locate and image microseismic events allows for an automatic process (free of picking

  7. A method for increasing the accuracy of image annotating in crowd-sourcing

    Nurmukhametov, O.R.; Baklanov, A.

    2016-01-01

    Crowdsourcing is a new approach to solving tasks in which a group of volunteers replaces experts. Recent results show that crowdsourcing is an efficient tool for annotating large datasets. Geo-Wiki is an example of a successful citizen science project. The goal of the Geo-Wiki project is to improve a global land cover map by applying crowdsourcing to image recognition. In our research, we investigate methods for increasing the reliability of data collected during The Cropland Capture Game (Geo-Wiki). In th...

  8. Dual source CT imaging

    Seidensticker, Peter R.; Hofmann, Lars K.

    2008-01-01

    The introduction of Dual Source Computed Tomography (DSCT) in 2005 was an evolutionary leap in the field of CT imaging. Two x-ray sources operated simultaneously enable heart-rate independent temporal resolution and routine spiral dual energy imaging. The precise delivery of contrast media is a critical part of the contrast-enhanced CT procedure. This book provides an introduction to DSCT technology and to the basics of contrast media administration followed by 25 in-depth clinical scan and contrast media injection protocols. All were developed in consensus by selected physicians on the Dual Source CT Expert Panel. Each protocol is complemented by individual considerations, tricks and pitfalls, and by clinical examples from several of the world's best radiologists and cardiologists. This extensive CME-accredited manual is intended to help readers to achieve consistently high image quality, optimal patient care, and a solid starting point for the development of their own unique protocols. (orig.)

  9. Cardiomagnetic source imaging

    Pesola, Katja

    2000-01-01

    Magnetocardiographic (MCG) source imaging has received increasing interest in recent years. With a high enough localization accuracy of the current sources in the heart, valuable information can be provided, e.g., for the pre-ablative evaluation of arrhythmia patients. Furthermore, preliminary studies indicate that ischemic areas, i.e. areas which are suffering from lack of oxygen, and infarcted regions could be localized from multichannel MCG recordings. In this thesis, the accuracy of cardi...

  10. An Adjoint Sensitivity Method Applied to Time Reverse Imaging of Tsunami Source for the 2009 Samoa Earthquake

    Hossen, M. Jakir; Gusman, Aditya; Satake, Kenji; Cummins, Phil R.

    2018-01-01

    We have previously developed a tsunami source inversion method based on "Time Reverse Imaging" and demonstrated that it is computationally very efficient and able to reproduce the tsunami source model with good accuracy using tsunami data of the 2011 Tohoku earthquake. In this paper, we applied this approach to the 2009 Samoa earthquake tsunami, which was triggered by a doublet earthquake consisting of both normal and thrust faulting. Our result showed that the method is quite capable of recovering a source model associated with normal and thrust faulting. We found that the inversion result is highly sensitive to some stations, which must be removed from the inversion. We applied an adjoint sensitivity method to find the optimal set of stations in order to estimate a realistic source model, and found that the inversion result improves significantly once the optimal set of stations is used. In addition, from the reconstructed source model we estimated the slip distribution of the fault, from which we successfully determined the dipping orientation of the fault plane for the normal-fault earthquake. Our result suggests that the fault plane dips toward the northeast.

  11. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from four different tissues was used and a vignetting effect was applied on each of these images. The resulting vignetted image was replicated four times and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting and if it is not applicable, then the morphological filtering may be suggested as the retrospective alternative method. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
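
    A minimal sketch of retrospective vignetting correction and the PSNR comparison used in the study, assuming a synthetic radial vignette and a least-squares falloff fit in place of the paper's morphological filtering:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio, the comparison metric used in the study."""
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic flat "tissue" image under a radial vignette.
h = w = 64
y, x = np.mgrid[0:h, 0:w]
r2 = ((y - h / 2) ** 2 + (x - w / 2) ** 2) / (h / 2) ** 2
clean = np.full((h, w), 0.8)
observed = clean * (1.0 - 0.4 * r2)            # vignetted image

# Retrospective correction: fit a smooth falloff model and divide it out.
A = np.stack([np.ones(h * w), r2.ravel()], axis=1)
coef, *_ = np.linalg.lstsq(A, observed.ravel(), rcond=None)
flat = (A @ coef).reshape(h, w)
flat /= flat.max()                              # normalized flat-field estimate
corrected = observed / flat
```

The corrected image should score a higher PSNR against the clean reference than the observed vignetted image does, which is exactly the evaluation performed in the paper.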

  12. Optimal wave focusing for seismic source imaging

    Bazargani, Farhad

    In both global and exploration seismology, studying seismic sources provides geophysicists with invaluable insight into the physics of earthquakes and faulting processes. One way to characterize the seismic source is to directly image it. Time-reversal (TR) focusing provides a simple and robust solution to the source imaging problem. However, for recovering a well-resolved image, TR requires a full-aperture receiver array that surrounds the source and adequately samples the wavefield. This requirement often cannot be realized in practice. In most source imaging experiments, the receiver geometry, due to the limited aperture and sparsity of the stations, does not allow adequate sampling of the source wavefield. Incomplete acquisition and imbalanced illumination of the imaging target limit the resolving power of the TR process. The main focus of this thesis is to offer an alternative approach to source imaging with the goal of mitigating the adverse effects of incomplete acquisition on the TR modeling. To this end, I propose a new method, named Backus-Gilbert (BG) source imaging, to optimally focus the wavefield onto the source position using a given receiver geometry. I first introduce BG as a method for focusing waves in acoustic media at a desired location and time. Then, by exploiting the source-receiver reciprocity of the Green function and the linearity of the problem, I show that BG focusing can be adapted and used as a source-imaging tool. Following this, I generalize the BG theory for elastic waves. Applying BG formalism for source imaging requires a model for the wave propagation properties of the earth and an estimate of the source location. Using numerical tests, I next examine the robustness and sensitivity of the proposed method with respect to errors in the earth model, uncertainty in the source location, and noise in data. The BG method can image extended sources as well as point sources. It can also retrieve the source mechanism. These features of ...

  13. Calcium source (image)

    Getting enough calcium to keep bones from thinning throughout a person's life may be made more difficult if that person has ... as a tendency toward kidney stones, for avoiding calcium-rich food sources. Calcium deficiency also affects the ...

  14. Image change detection systems, methods, and articles of manufacture

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, switching displaying of the source image and the target image on the display device, to enable identification of differences between the source image and the target image.
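
    The align-and-compare workflow described in the claim can be sketched as follows; the gradient edge images and brute-force integer alignment are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def edge_image(img):
    """Gradient-magnitude edge image; the patent constructs edge images so
    multiband sources can be compared in a common representation."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def align(src, tgt, max_shift=3):
    """Brute-force integer alignment: find the (dy, dx) roll of the target
    whose edge image best matches the source edge image."""
    e_src = edge_image(src)
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((e_src - edge_image(np.roll(tgt, (dy, dx), (0, 1)))) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

rng = np.random.default_rng(1)
source = rng.random((16, 16))
target = np.roll(source, (2, -1), (0, 1))     # same scene, shifted by (2, -1)
dy, dx = align(source, target)
diff = np.abs(source - np.roll(target, (dy, dx), (0, 1)))   # change map
```

After alignment the residual difference map is what a human reviewer would inspect by blinking between the displayed images, as the claim describes.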

  15. Open source tools for fluorescent imaging.

    Hamilton, Nicholas A

    2012-01-01

    As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical to remove both bottlenecks in throughput as well as fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Neutron source multiplication method

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source detector position locations. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system
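
    The inverse-multiplication relationship underlying these measurements, M = 1/(1 - k_eff) so that 1/M tends to zero as criticality is approached, can be illustrated with a point-kinetics idealization (which ignores the modal and spectral effects the abstract cautions about):

```python
def multiplication(k_eff):
    """Neutron multiplication M = 1 / (1 - k_eff) for a subcritical system
    driven by a steady source (point-kinetics idealization)."""
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("formula applies only to subcritical systems")
    return 1.0 / (1.0 - k_eff)

# Inverse multiplication 1/M heads toward zero as k_eff -> 1, which is why
# 1/M curves are plotted and extrapolated in approach-to-critical work.
inverse_m = [1.0 / multiplication(k) for k in (0.5, 0.9, 0.99)]
```

The abstract's warning is that real far-subcritical systems depart from this idealized straight-line behaviour, so such curves must be interpreted with care.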

  17. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric
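
    The core ICD update can be sketched on a toy least-squares problem; the dense matrix, sizes and sweep count below are illustrative stand-ins, not FreeCT_ICD's actual system model:

```python
import numpy as np

# Iterative coordinate descent (ICD) on the least-squares problem
# min_x ||A x - b||^2, updating one unknown (one column of A) at a time.
# FreeCT_ICD stores its system matrix column-wise for exactly this access
# pattern; the tiny dense A below stands in for the sparse CT system matrix.
rng = np.random.default_rng(0)
A = rng.random((20, 5))
x_true = rng.random(5)
b = A @ x_true                     # consistent "measurements"

x = np.zeros(5)
r = b - A @ x                      # running residual, kept consistent with x
for sweep in range(200):
    for j in range(A.shape[1]):    # column-wise sweep over voxels
        col = A[:, j]
        step = (col @ r) / (col @ col)   # exact 1-D minimizer along x_j
        x[j] += step
        r -= step * col
```

Because each coordinate update touches only one column, the per-voxel memory access is cheap once the matrix is stored column-wise, which is the design point the abstract emphasizes.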

  18. Imaging Apparatus And Method

    Manohar, Srirang; van Leeuwen, A.G.J.M.

    2010-01-01

    A thermoacoustic imaging apparatus comprises an electromagnetic radiation source configured to irradiate a sample area and an acoustic signal detection probe arrangement for detecting acoustic signals. A radiation responsive acoustic signal generator is added outside the sample area. The detection

  19. IMAGING APPARATUS AND METHOD

    Manohar, Srirang; van Leeuwen, A.G.J.M.

    2008-01-01

    A thermoacoustic imaging apparatus comprises an electromagnetic radiation source configured to irradiate a sample area and an acoustic signal detection probe arrangement for detecting acoustic signals. A radiation responsive acoustic signal generator is added outside the sample area. The detection

  20. Image Makers: Reporters or Sources.

    Petruzzello, Marion C.

    To explore how news sources are used by media to create a social image of women during key suffrage events of 1858, 1920, and 1970, the front page stories of the "New York Times" were reviewed for 1 week prior to and 1 week following each of these events: May 14, 1858, the Eighth National Women's Rights Convention in New York City;…

  1. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310

  2. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
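
    The minimum-entropy criterion used to pick the best trial occurrence time can be demonstrated directly; the two synthetic "images" below stand in for migrated images at wrong and correct trial times:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy of an image treated as a probability distribution.
    The AE method selects the trial occurrence time whose migrated image
    has minimum entropy, i.e. the most concentrated focal spot."""
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

focused = np.zeros((8, 8)); focused[4, 4] = 1.0   # energy collapsed to a point
defocused = np.ones((8, 8)) / 64.0                # energy smeared everywhere
```

A correctly back-propagated image collapses the AE energy to the source location (low entropy), while a wrong trial time smears it (high entropy), which is what the ABC search exploits.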

  3. Terahertz composite imaging method

    QIAO Xiaoli; REN Jiaojiao; ZHANG Dandan; CAO Guohua; LI Lijuan; ZHANG Xinming

    2017-01-01

    In order to improve the imaging quality of terahertz (THz) spectroscopy, a Terahertz Composite Imaging Method (TCIM) is proposed. Traditional methods of improving THz spectroscopy image quality approach the problem mainly through de-noising and image enhancement. TCIM breaks through this limitation. A set of images, reconstructed in a single data collection, can be utilized to construct two kinds of composite images. One algorithm, called the Function Superposition Imaging Algorithm (FSIA), constructs a new gray image from multiple gray images through a certain function. The features of the Region Of Interest (ROI) are more obvious after the operation, and the algorithm is capable of merging ROIs from multiple images. The other, called the Multi-characteristics Pseudo-color Imaging Algorithm (McPcIA), constructs a pseudo-color image by combining multiple reconstructed gray images from a single data collection. The features of the ROI are enhanced by color differences. The two algorithms not only improve the contrast of ROIs but also increase the amount of information available, making analysis more convenient. The experimental results show that TCIM is a simple and effective tool for THz spectroscopy image analysis.
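
    The pseudo-color combination can be sketched as follows (a simplified stand-in for McPcIA; the gray images are synthetic):

```python
import numpy as np

def pseudo_color(grays):
    """Map three reconstructed gray images to the R, G and B channels so a
    region differing in any one image stands out by color (a simplified
    stand-in for the McPcIA composite described in the abstract)."""
    assert len(grays) == 3
    channels = []
    for g in grays:
        span = g.max() - g.min()
        channels.append((g - g.min()) / (span if span else 1.0))
    return np.stack(channels, axis=-1)

g1 = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # three hypothetical gray
g2 = g1.T                                       # reconstructions from one
g3 = 1.0 - g1                                   # data collection
rgb = pseudo_color([g1, g2, g3])
```

Each channel is normalized independently so that contrast differences between the reconstructions are expressed as color differences in the composite.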

  4. Source splitting via the point source method

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields u_j, j = 1, ..., n, of n ∈ ℕ sound sources supported in different bounded domains G_1, ..., G_n in ℝ³ from measurements of the field on some microphone array—mathematically speaking, from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n, n ∈ ℕ, to construct u_l for l = 1, ..., n from u|_Λ in the form u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y), l = 1, ..., n. (1) We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online

  5. Contrast source inversion (CSI) method to cross-hole radio-imaging (RIM) data - Part 2: A complex synthetic example and a case study

    Li, Yongxing; Smith, Richard S.

    2018-03-01

    We present two examples of using the contrast source inversion (CSI) method to invert synthetic radio-imaging (RIM) data and field data. The synthetic model has two isolated conductors (one perfect conductor and one moderate conductor) embedded in a layered background. After inversion, we can identify the two conductors in the inverted image. The shape of the perfect conductor is better resolved than the shape of the moderate conductor. The inverted conductivity values of the two conductors are approximately the same, which demonstrates that conductivity values cannot be correctly interpreted from the CSI results. The boundaries and tilts of the upper and lower conductive layers of the background can also be inferred from the results, but the centre parts of the conductive layers in the inversion results are more conductive than the parts close to the boreholes. We used the straight-ray tomographic imaging method and the CSI method to invert RIM field data collected with the FARA system between two boreholes in a mining area in Sudbury, Canada. The RIM data include amplitude and phase data collected at three frequencies: 312.5 kHz, 625 kHz and 1250 kHz. The data close to the ground surface have high amplitude values and complicated phase fluctuations, which are inferred to be contaminated by electromagnetic (EM) fields reflected or refracted from the ground surface, and are removed for all frequencies. Higher-frequency EM waves attenuate more quickly in the subsurface environment, so locations where the measurements are dominated by noise are also removed. When the data are interpreted with the straight-ray method, the images differ substantially between frequencies, and there are some unexpected features in the images that are difficult to interpret. Compared with the straight-ray imaging results, the inversion results with the CSI method are more consistent across frequencies. On the basis of what we learnt ...

  6. Rapid flow imaging method

    Pelc, N.J.; Spritzer, C.E.; Lee, J.N.

    1988-01-01

    A rapid, phase-contrast, MR imaging method of imaging flow has been implemented. The method, called VIGRE (velocity imaging with gradient recalled echoes), consists of two interleaved, narrow flip angle, gradient-recalled acquisitions. One is flow compensated while the second has a specified flow encoding (both peak velocity and direction) that causes signals to contain additional phase in proportion to velocity in the specified direction. Complex image data from the first acquisition are used as a phase reference for the second, yielding immunity from phase accumulation due to causes other than motion. Images with pixel values equal to MΔΘ where M is the magnitude of the flow compensated image and ΔΘ is the phase difference at the pixel, are produced. The magnitude weighting provides additional vessel contrast, suppresses background noise, maintains the flow direction information, and still allows quantitative data to be retrieved. The method has been validated with phantoms and is undergoing initial clinical evaluation. Early results are extremely encouraging
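
    The pixel formula MΔΘ can be verified on a single synthetic pixel; the phase values below are made up for illustration:

```python
import numpy as np

# VIGRE-style pixel value M * dTheta: magnitude M of the flow-compensated
# acquisition times the phase difference dTheta between the flow-encoded and
# flow-compensated complex signals. Using the compensated image as a phase
# reference cancels phase accumulated from causes other than motion.
def vigre_pixel(compensated, encoded):
    m = np.abs(compensated)
    dtheta = np.angle(encoded * np.conj(compensated))
    return m * dtheta

background_phase = 0.3                     # rad, e.g. from off-resonance
flow_phase = 0.5                           # rad, proportional to velocity
compensated = 2.0 * np.exp(1j * background_phase)
encoded = 2.0 * np.exp(1j * (background_phase + flow_phase))
pixel = vigre_pixel(compensated, encoded)
```

Note that the background phase cancels in the conjugate product, so the pixel value carries only the velocity-induced phase weighted by the magnitude, exactly the behaviour the abstract describes.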

  7. Stereoscopic radiographic images with gamma source encoding

    Strocovsky, S.G.; Otero, D

    2012-01-01

    Conventional radiography with X-ray tube has several drawbacks, as the compromise between the size of the focal spot and the fluence. The finite dimensions of the focal spot impose a limit to the spatial resolution. Gamma radiography uses gamma-ray sources which surpass in size, portability and simplicity to X-ray tubes. However, its low intrinsic fluence forces to use extended sources that also degrade the spatial resolution. In this work, we show the principles of a new radiographic technique that overcomes the limitations associated with the finite dimensions of X-ray sources, and that offers additional benefits to conventional techniques. The new technique called coding source imaging (CSI), is based on the use of extended sources, edge-encoding of radiation and differential detection. The mathematical principles and the method of images reconstruction with the new proposed technique are explained in the present work. Analytical calculations were made to determine the maximum spatial resolution and the variables on which it depends. The CSI technique was tested by means of Monte Carlo simulations with sets of spherical objects. We show that CSI has stereoscopic capabilities and it can resolve objects smaller than the source size. The CSI decoding algorithm reconstructs simultaneously four different projections from the same object, while conventional radiography produces only one projection per acquisition. Projections are located in separate image fields on the detector plane. Our results show it is possible to apply an extremely simple radiographic technique with extended sources, and get 3D information of the attenuation coefficient distribution for simple geometry objects in a single acquisition. The results are promising enough to evaluate the possibility of future research with more complex objects typical of medical diagnostic radiography and industrial gamma radiography (author)

  8. Magnetic imager and method

    Powell, James; Reich, Morris; Danby, Gordon

    1997-07-22

    A magnetic imager 10 includes a generator 18 for practicing a method of applying a background magnetic field over a concealed object, with the object being effective to locally perturb the background field. The imager 10 also includes a sensor 20 for measuring perturbations of the background field to detect the object. In one embodiment, the background field is applied quasi-statically. And, the magnitude or rate of change of the perturbations may be measured for determining location, size, and/or condition of the object.

  9. Methods in Astronomical Image Processing

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
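
    The CCD reduction steps listed in the outline chain together as follows (synthetic, uniform frames chosen purely for clarity):

```python
import numpy as np

def reduce_ccd(raw, bias, dark, flat):
    """Standard CCD reduction chain from the outline: bias subtraction,
    dark subtraction, then flat fielding by the normalized flat."""
    return (raw - bias - dark) / (flat / flat.mean())

raw  = np.full((4, 4), 1100.0)   # made-up calibration frames
bias = np.full((4, 4), 100.0)
dark = np.full((4, 4), 50.0)
flat = np.full((4, 4), 2.0)
science = reduce_ccd(raw, bias, dark, flat)
```

Real pipelines add the outline's remaining steps (clipping, sky subtraction, extinction correction) on top of this core arithmetic.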

  10. Universal Image Steganalytic Method

    V. Banoci

    2014-12-01

    In the paper we introduce a new universal steganalytic method for the JPEG file format that detects both well-known and newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature-Based Steganalysis (FBS) was employed in order to identify statistical changes caused by embedding secret data into the original image. The steganalyzer concept utilizes Support Vector Machine (SVM) classification to train a model that is later used by the same steganalyzer to distinguish between a clean (cover) and a steganographic image. The aim of the paper was to analyze the variation in detection accuracy (ACR) when detecting the test steganographic algorithms F5, Outguess, Model-Based Steganography without deblocking, and JP Hide and Seek, which represent generally used steganographic tools. The comparison of four feature vectors of different lengths, FBS(22), FBS(66), FBS(274) and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.

  11. Transmission imaging with a coded source

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination

  12. Comparison of source moment tensor recovered by diffraction stacking migration and source time reversal imaging

    Zhang, Q.; Zhang, W.

    2017-12-01

    Diffraction stacking migration is an automatic location method widely used in microseismic monitoring of hydraulic fracturing. It stacks thousands of waveforms to enhance the signal-to-noise ratio of weak events. For surface monitoring, the diffraction stacking method suffers from polarity reversals among receivers due to the radiation pattern of the moment source. Joint determination of location and source mechanism has been proposed to overcome the polarity problem but requires significantly increased computation. As an effective method to recover the source moment tensor, time reversal imaging based on the wave equation can locate a microseismic event by applying interferometry to the image to extract the source position. However, time reversal imaging is very time consuming compared to diffraction stacking location because of the wave-equation simulation. In this study, we compare the images from diffraction stacking and time reversal imaging to check whether diffraction stacking can obtain a moment tensor similar to that of time reversal imaging. We found that the image produced by taking the largest imaging value at each point along the time axis does not exhibit the radiation pattern, while, at the same level of computational efficiency, the image produced for each trial origin time can generate a radiation pattern similar to the time reversal imaging procedure. Thus it is potentially possible to locate the source position by the diffraction stacking method for general moment tensor sources.
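
    A one-dimensional toy version of diffraction stacking, with impulsive arrivals, a constant-velocity moveout and made-up geometry (no polarity or moment-tensor effects, which are the complications the abstract addresses):

```python
import numpy as np

n_rec, n_t = 8, 256
v, dt = 2000.0, 1e-3                      # velocity (m/s) and sample interval (s)
rec_x = np.linspace(0.0, 700.0, n_rec)    # surface receiver positions (m)
src_x = 300.0                             # true (hidden) source position

# Impulsive "waveforms": a unit spike at each receiver's predicted arrival.
traces = np.zeros((n_rec, n_t))
for i, xr in enumerate(rec_x):
    traces[i, int(round(abs(xr - src_x) / v / dt))] = 1.0

def stack_value(xc):
    """Shift each trace by the traveltime predicted for candidate location xc
    and sum; the true location stacks coherently."""
    s = 0.0
    for i, xr in enumerate(rec_x):
        idx = int(round(abs(xr - xc) / v / dt))
        if idx < n_t:
            s += traces[i, idx]
    return s

candidates = np.arange(0.0, 701.0, 10.0)
x_best = candidates[int(np.argmax([stack_value(xc) for xc in candidates]))]
```

With a moment-tensor source, some of these spikes would flip sign and cancel in the stack, which is precisely the polarity problem the paper investigates.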

  13. Multi-Source Image Analysis.

    1979-12-01

    These collections were taken to show the advantages made available to the interpreter. In a military operation, however, often little or no in-situ ...The large body of water labeled "W" on each image represents the Agua Hedionda lagoon. East of the lagoon the area is primarily agricultural, with a...power plant located in the southeast corner of the image. West of the Agua Hedionda lagoon is Carlsbad, California. Damp ground is labeled "Dg" on the

  14. The Source Equivalence Acceleration Method

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

    Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power

  15. Image authentication using distributed source coding.

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  16. Hyperspectral image processing methods

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  17. Source-space ICA for MEG source imaging.

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.

  18. Image registration method for medical image sequences

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.
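    A minimal sketch of the segment-then-register idea described above, with a synthetic image pair: a plain intensity threshold stands in for the active-contour segmentation, and the translation estimate is integer-pixel only, omitting the sub-pixel interpolation step the patent mentions.

```python
import numpy as np

# Synthetic frames: one bright square region on a dark background,
# displaced by (+5, -3) pixels between the reference and moving frame.
def make_frame(shape, top_left):
    img = np.zeros(shape)
    r, c = top_left
    img[r:r + 8, c:c + 8] = 1.0
    return img

ref = make_frame((64, 64), (20, 24))
mov = make_frame((64, 64), (25, 21))

# Segment the region of interest by simple thresholding, then estimate the
# translation by locating the peak of the FFT-based cross-correlation.
ref_m = (ref > 0.5).astype(float)
mov_m = (mov > 0.5).astype(float)
xcorr = np.fft.ifft2(np.conj(np.fft.fft2(ref_m)) * np.fft.fft2(mov_m)).real
dr, dc = np.unravel_index(np.argmax(xcorr), xcorr.shape)
dr = dr - 64 if dr > 32 else dr   # wrap circular shifts into signed range
dc = dc - 64 if dc > 32 else dc
```

Registering only the segmented region, as the abstract describes, keeps low-contrast background from dominating the correlation.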

  19. Optical Imaging of Ionizing Radiation from Clinical Sources.

    Shaffer, Travis M; Drain, Charles Michael; Grimm, Jan

    2016-11-01

    Nuclear medicine uses ionizing radiation for both in vivo diagnosis and therapy. Ionizing radiation comes from a variety of sources, including x-rays, beam therapy, brachytherapy, and various injected radionuclides. Although PET and SPECT remain clinical mainstays, optical readouts of ionizing radiation offer numerous benefits and complement these standard techniques. Furthermore, for ionizing radiation sources that cannot be imaged using these standard techniques, optical imaging offers a unique imaging alternative. This article reviews optical imaging of both radionuclide- and beam-based ionizing radiation from high-energy photons and charged particles through mechanisms including radioluminescence, Cerenkov luminescence, and scintillation. Therapeutically, these visible photons have been combined with photodynamic therapeutic agents preclinically for increasing therapeutic response at depths difficult to reach with external light sources. Last, new microscopy methods that allow single-cell optical imaging of radionuclides are reviewed. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  20. DEIMOS – an Open Source Image Database

    M. Blazek

    2011-12-01

    The DEIMOS (DatabasE of Images: Open Source) is an open-source database of images and videos for testing, verification and comparison of various image and/or video processing techniques such as enhancement, compression and reconstruction. The main advantage of DEIMOS is its orientation to various application fields – multimedia, television, security, assistive technology, biomedicine, astronomy etc. DEIMOS is being created gradually, step by step, based upon the contributions of team members. The paper describes the basic parameters of the DEIMOS database and includes application examples.

  1. Soft tissue tumors - imaging methods

    Arlart, I.P.

    1985-01-01

    Imaging methods play an important diagnostic role in soft tissue tumors, in the preoperative evaluation of localization, size, topographic relationships, benign or malignant nature, and metastatic disease. The present paper gives an overview of the diagnostic methods available today, such as ultrasound, thermography, roentgenographic plain films and xeroradiography, radionuclide methods, computed tomography, lymphography, angiography, and magnetic resonance imaging. Besides sonography, computed tomography in particular has the most important diagnostic value in soft tissue tumors. The significance of the recently developed magnetic resonance imaging cannot yet be assessed. (orig.) [de

  2. New neutron imaging using pulsed sources. Characteristics of a pulsed neutron source and principle of pulsed neutron imaging

    Kiyanagi, Yoshiaki

    2012-01-01

    Neutron beams are an important tool for obtaining transmission images of an object. Until now, steady-state neutron sources such as reactors have mainly been used for this imaging purpose. Recently, it has been demonstrated that pulsed neutron imaging based on accelerator neutron sources can provide a real-space distribution of physical information about materials, such as crystallographic structure, element, temperature, hydrogen bonding state, magnetic field and so on, by analyzing the wavelength-dependent transmission spectrum; this information cannot be observed, or is difficult to obtain, with traditional imaging methods using steady-state neutrons. Here, the characteristics of the pulsed neutron source and the principle of pulsed neutron imaging are explained as the basic concepts of the new method. (author)

  3. Computational methods for molecular imaging

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  4. Development of Quantification Method for Bioluminescence Imaging

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon luminescent luciferase activation. The photons measured in this method indicate the degree of molecular alteration or the number of cells, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal by using a lab-made animal light imaging system (ALIS) and two different kinds of light sources. The first was a set of three bacterial light-emitting sources containing different numbers of bacteria; the second was a set of three different non-bacterial light sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presents a constant value even though different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging systems, presenting a linear response of constant light-emitting sources to measurement time
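    The linearity check at the heart of the method can be sketched with simulated Poisson counts from a constant-rate source (the emission rate and measurement times below are invented): the counts/time ratio stays constant and a linear fit recovers the rate.

```python
import numpy as np

# For a constant light source, cumulative photon counts grow linearly with
# measurement time, so counts/time is (up to counting noise) a constant
# equal to the emission rate. Rate and times are illustrative values only.
rate = 1500.0                                  # photons per second (assumed)
times = np.array([10.0, 20.0, 40.0, 80.0])     # measurement times, s

rng = np.random.default_rng(1)
counts = rng.poisson(rate * times).astype(float)   # Poisson counting noise

ratios = counts / times                    # approximately constant = rate
slope = np.polyfit(times, counts, 1)[0]    # linear fit also recovers the rate
```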

  5. Methods for evaluating information sources

    Hjørland, Birger

    2012-01-01

    The article briefly presents and discusses 12 different approaches to the evaluation of information sources (for example a Wikipedia entry or a journal article): (1) the checklist approach; (2) classical peer review; (3) modified peer review; (4) evaluation based on examining the coverage of controversial views; (5) evidence-based evaluation; (6) comparative studies; (7) author credentials; (8) publisher reputation; (9) journal impact factor; (10) sponsoring: tracing the influence of economic, political, and ideological interests; (11) book reviews and book reviewing; and (12) broader criteria. Reading a text is often not a simple process. All the methods discussed here are steps on the way to learning how to read, understand, and criticize texts. According to hermeneutics it involves the subjectivity of the reader, and that subjectivity is influenced, more or less, by different theoretical

  6. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    Michail, C M; Fountos, G P; Kalyvas, N I; Valais, I G; Kandarakis, I S; Karpetas, G E; Martini, Niki; Koukou, Vaia

    2015-01-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations. (paper)
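    In one dimension, the MTF estimation step reduces to the normalised magnitude of the Fourier transform of a line spread function (LSF). The sketch below uses a synthetic Gaussian LSF with an invented pixel size and blur width, standing in for a profile extracted across the reconstructed plane-source image:

```python
import numpy as np

# MTF as the normalised |FFT| of a line spread function (LSF).
dx = 0.5                                  # pixel size, mm (assumed)
x = np.arange(-32, 32) * dx
sigma = 1.2                               # blur width, mm (assumed)
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()                          # unit area

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                             # normalise to 1 at zero frequency
freqs = np.fft.rfftfreq(x.size, d=dx)     # spatial frequency, cycles/mm

# For a Gaussian LSF the MTF has the closed form exp(-2*(pi*sigma*f)^2),
# which provides a sanity check on the discrete computation.
expected = np.exp(-2 * (np.pi * sigma * freqs) ** 2)
```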

  7. Methods of digital image processing

    Doeler, W.

    1985-01-01

    Increasing use of computerized methods for diagnostical imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage and documentation and communication as the main points of interest for application of digital image processing. As to the purely radiological problems, the value of digital image processing is to be sought in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation is very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  8. OSIRIX: open source multimodality image navigation software

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for high-end expensive hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  9. 3D Seismic Imaging using Marchenko Methods

    Lomas, A.; Curtis, A.

    2017-12-01

    Marchenko methods are novel, data-driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000 m/s). Along the surface of this model (z = 0), in both the x and y directions, are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source at (1200 m, 500 m, 400 m) to the surface. For comparison, the true solution is given in figure (c), which shows a good match when compared to figure (b).
While these 2D redatuming and imaging methods are still in their infancy having first been developed in

  10. An Image Registration Method for Colposcopic Images

    Efrén Mezura-Montes

    2013-01-01

    sequence and a division of such image into small windows. A search process is then carried out to find the window with the highest affinity in each image of the sequence and replace it with the window in the reference image. The affinity value is based on a polynomial approximation of the computed time series, and the search is bounded by a search radius which defines the neighborhood of each window. The proposed approach is tested on ten 310-frame real cases in two experiments: the first to determine the best values for the window size and the search radius, and the second to compare the best obtained results with four registration methods found in the specialized literature. The obtained results show a robust and competitive performance of the proposed approach, with a significantly lower time with respect to the compared methods.

  11. Convolutive Blind Source Separation Methods

    Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik

    2008-01-01

    During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from the recorded mixtures, or at least to segregate a particular source. Furthermore, it may be useful to identify the mixing process itself to reveal information about the physical mixing system. In some simple mixing models each recording consists of a sum of differently weighted source signals. However, in many real-world applications, such as in acoustics, the mixing process is more complex. In such systems, the mixtures are weighted and delayed, and each source contributes to the sum with multiple delays corresponding to the multiple paths by which an acoustic signal propagates to a microphone
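    The convolutive mixing model described in the final sentences can be written as each microphone signal being a sum of source signals convolved with source-to-microphone impulse responses. A two-source, two-microphone sketch, with invented impulse responses:

```python
import numpy as np

# Convolutive mixing: each microphone records a sum of sources convolved
# with (here, invented) impulse responses, so every source contributes
# to each mixture with multiple weighted delays.
rng = np.random.default_rng(0)
n = 1000
s1, s2 = rng.standard_normal(n), rng.standard_normal(n)   # two sources

# Impulse responses h[(mic, src)]: a direct path plus a couple of echoes.
h = {(0, 0): [1.0, 0.0, 0.5, 0.0, 0.25],
     (0, 1): [0.8, 0.3, 0.0, 0.1, 0.0],
     (1, 0): [0.6, 0.0, 0.2, 0.0, 0.1],
     (1, 1): [1.0, 0.4, 0.0, 0.2, 0.0]}

x0 = np.convolve(s1, h[(0, 0)]) + np.convolve(s2, h[(0, 1)])  # microphone 0
x1 = np.convolve(s1, h[(1, 0)]) + np.convolve(s2, h[(1, 1)])  # microphone 1
```

Setting every impulse response to a single tap would recover the simple instantaneous (weighted-sum) model the abstract contrasts this with.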

  12. An evolution of image source camera attribution approaches.

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics for, the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large sets of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a presentation of the classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics

  13. Imaging methods in otorhinolaryngology

    Frey, K.W.; Mees, K.; Vogl, T.

    1989-01-01

    This book is the work of an otorhinolaryngologist and two radiologists, who combined their experience and efforts in order to solve a great variety and number of problems encountered in practical work, taking into account the latest technical potentials and the practical feasibility, which is determined by the equipment available. Every chapter presents the full range of diagnostic methods applicable, starting with the suitable plain radiography methods and proceeding to the various tomographic scanning methods, including conventional tomography. Every technique is assessed in terms of diagnostic value and drawbacks. (orig./MG) With 778 figs [de

  14. Image restoration and processing methods

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  15. On an image reconstruction method for ECT

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image produced by Eddy Current Testing (ECT) is a blurred image of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source is described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the Point Spread Function (PSF) and the Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which multiple holes interfered. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original has been reconstructed.
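    The convolution assumption and its deconvolution inverse can be sketched in one dimension. The flaw profile, spread function, and regularisation constant below are invented for illustration (the paper's actual PSF/LSF come from measured hole and notch data); the regularised frequency-domain division is a Wiener-style stand-in for whatever deconvolution the authors used.

```python
import numpy as np

# 1-D sketch of reconstruction by deconvolution: measured = flaw * psf,
# so dividing in the frequency domain (with regularisation to limit noise
# amplification) restores a sharper flaw profile. All values are invented.
n = 128
flaw = np.zeros(n)
flaw[40:44] = 1.0          # first notch
flaw[70:72] = 1.0          # second, narrower notch

x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 3.0**2))
psf /= psf.sum()
psf = np.roll(psf, -n // 2)          # centre the PSF at index 0 (circular conv.)

measured = np.fft.irfft(np.fft.rfft(flaw) * np.fft.rfft(psf), n)   # blurred data

H = np.fft.rfft(psf)
eps = 1e-3                           # regularisation constant (assumed)
restored = np.fft.irfft(np.fft.rfft(measured) * np.conj(H) /
                        (np.abs(H)**2 + eps), n)
```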

  16. Methods in quantitative image analysis.

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top hat filter and the range operator (all ranking filters). 
Segmentation of objects is traditionally based on threshold grey values.
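    Two of the operations described above, a pixel-based LUT operation (histogram equalisation) and a neighbourhood-based ranking filter (the 3x3 median), can be sketched as follows; the 8-bit test image is synthetic:

```python
import numpy as np

# Pixel-based operation: histogram equalisation applied through a look-up
# table (LUT). Neighbourhood-based ranking operation: a 3x3 median filter.
rng = np.random.default_rng(0)
img = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)  # low-contrast image

# Build the equalisation LUT from the cumulative histogram, then apply it
# with a single indexing operation - exactly how a LUT speeds up point ops.
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum()
lut = np.round(255 * (cdf - cdf.min()) / (cdf.max() - cdf.min())).astype(np.uint8)
equalised = lut[img]

def median3x3(a):
    """3x3 median filter; edges handled by reflection padding."""
    padded = np.pad(a, 1, mode="reflect")
    stack = [padded[r:r + a.shape[0], c:c + a.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0).astype(a.dtype)

smoothed = median3x3(img)
```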

  17. Numerical methods for image registration

    Modersitzki, Jan

    2003-01-01

    Based on the author's lecture notes and research, this well-illustrated and comprehensive text is one of the first to provide an introduction to image registration with particular emphasis on numerical methods in medical imaging. Ideal for researchers in industry and academia, it is also a suitable study guide for graduate mathematicians, computer scientists, engineers, medical physicists, and radiologists.Image registration is utilised whenever information obtained from different viewpoints needs to be combined or compared and unwanted distortion needs to be eliminated. For example, CCTV imag

  18. Coherent diffractive imaging methods for semiconductor manufacturing

    Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin

    2017-12-01

    The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.

  19. Twin-Foucault imaging method

    Harada, Ken

    2012-02-01

    A method of Lorentz electron microscopy that enables the observation of two Foucault images simultaneously, by using an electron biprism instead of an objective aperture, was developed. The electron biprism is installed between the two electron beams deflected by 180° magnetic domains. A potential applied to the biprism deflects the two electron beams further, and two Foucault images with reversed contrast are then obtained in one visual field. The twin Foucault images make it possible to extract the magnetic domain structures and to reconstruct an ordinary electron micrograph. The developed Foucault method was demonstrated on a 180° domain structure of the manganite La0.825Sr0.175MnO3.

  20. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  1. Method for Surface Scanning in Medical Imaging and Related Apparatus

    2015-01-01

    A method and apparatus for surface scanning in medical imaging is provided. The surface scanning apparatus comprises an image source, a first optical fiber bundle comprising first optical fibers having proximal ends and distal ends, and a first optical coupler for coupling an image from the image...

  2. Dose performance and image quality: Dual source CT versus single source CT in cardiac CT angiography

    Wang Min; Qi Hengtao; Wang Ximing; Wang Tao; Chen, Jiu-Hong; Liu Cheng

    2009-01-01

    Objective: To evaluate dose performance and image quality of 64-slice dual source CT (DSCT) in comparison with 64-slice single source CT (SSCT) in cardiac CT angiography (CTA). Methods: 100 patients examined by DSCT and 60 patients scanned by SSCT were included in this study. Objective indices such as image noise, contrast-to-noise ratio and signal-to-noise ratio were analyzed. Subjective image quality was assessed by two cardiovascular radiologists in consensus using a four-point scale (1 = excellent to 4 = not acceptable). Estimation of effective dose was performed on the basis of dose length product (DLP). Results: At low heart rates (<70 bpm) the two systems yielded comparable image quality (P > 0.05), but at high heart rates (>70 bpm) DSCT provided more robust image quality (P < 0.05). Conclusion: At high heart rates (>70 bpm), DSCT is able to provide robust diagnostic image quality at doses far below those of SSCT.
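The DLP-based dose estimate mentioned above is a simple linear conversion; a minimal sketch, assuming a conversion coefficient of 0.014 mSv/(mGy·cm) (a commonly tabulated chest value; the abstract does not state which coefficient the study used):

```python
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Estimate effective dose (mSv) from the dose-length product (mGy*cm).

    k is a region-specific conversion coefficient; 0.014 mSv/(mGy*cm) is
    used here purely as an illustration.
    """
    return dlp_mgy_cm * k
```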

  3. Quality assessment in radiological imaging methods

    Herstel, W.

    1985-01-01

    The equipment used in diagnostic radiology is becoming more and more complicated. In the imaging process four components are distinguished, each of which can introduce loss of essential information: the X-ray source, the human body, the imaging system and the observer. In nearly all imaging methods the X-ray quantum fluctuations are a limitation to observation, but there are also technical factors. As an illustration it is shown how, in a television scanning process, the resolution is restricted by the system parameters. A short review of test devices is given, together with the results of an image comparison based on regular bar patterns. Although this method has the disadvantage of measuring mainly the limiting resolution, the results of the test correlate reasonably well with the subjective appreciations of radiographs of bony structures made by a group of trained radiologists. Fluoroscopic systems should preferably be tested using moving structures under dynamic conditions. (author)

  4. Intensity correlation imaging with sunlight-like source

    Wang, Wentao; Tang, Zhiguo; Zheng, Huaibin; Chen, Hui; Yuan, Yuan; Liu, Jinbin; Liu, Yanyan; Xu, Zhuo

    2018-05-01

    We demonstrate a method of intensity correlation imaging of targets illuminated by a sunlight-like source, both theoretically and experimentally. With a Faraday anomalous dispersion optical filter (FADOF), we modulated the coherence time of a thermal source up to 0.167 ns. We then carried out measurements of temporal and spatial correlations with an intensity interferometer setup. By applying an even Fourier fit to the very sparse sampling data, the images of targets are successfully reconstructed from the low signal-to-noise-ratio (SNR) interference pattern with an iterative phase retrieval algorithm. The resulting image quality is as good as that obtained by the theoretical fitting. The realization of such a case brings this technique closer to imaging geostationary satellites illuminated by sunlight.

  5. Mathematical methods in elasticity imaging

    Ammari, Habib; Garnier, Josselin; Kang, Hyeonbae; Lee, Hyundae; Wahab, Abdul

    2015-01-01

    This book is the first to comprehensively explore elasticity imaging and examines recent, important developments in asymptotic imaging, modeling, and analysis of deterministic and stochastic elastic wave propagation phenomena. It derives the best possible functional images for small inclusions and cracks within the context of stability and resolution, and introduces a topological derivative-based imaging framework for detecting elastic inclusions in the time-harmonic regime. For imaging extended elastic inclusions, accurate optimal control methodologies are designed and the effects of uncertainties of the geometric or physical parameters on stability and resolution properties are evaluated. In particular, the book shows how localized damage to a mechanical structure affects its dynamic characteristics, and how measured eigenparameters are linked to elastic inclusion or crack location, orientation, and size. Demonstrating a novel method for identifying, locating, and estimating inclusions and cracks in elastic...

  6. Advanced Source Deconvolution Methods for Compton Telescopes

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes has not yet been found: one that retrieves all source parameters (location, spectrum, polarization, flux) and at the same time achieves the best possible resolution and sensitivity. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but until now not both together. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. 
Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a

  7. The optimal algorithm for Multi-source RS image fusion.

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    To address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm uses the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then builds the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.•This text proposes the model operator and observation operator as the fusion scheme of RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
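The weighted-sum objective described above can be sketched as follows; the evaluation indices and weights are placeholders, since the abstract does not list the exact indices used:

```python
def fusion_objective(indices, weights):
    """Weighted sum of image-fusion evaluation indices (e.g. entropy,
    average gradient, spatial frequency), to be maximized by GSDA.

    Indices and weights here are illustrative stand-ins.
    """
    if len(indices) != len(weights):
        raise ValueError("each evaluation index needs a weight")
    total = sum(weights)
    # Normalize so the objective is a weighted average of the indices
    return sum(w * x for w, x in zip(weights, indices)) / total
```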

  8. Three-dimensional tomosynthetic image restoration for brachytherapy source localization

    Persons, Timothy M.

    2001-01-01

    Tomosynthetic image reconstruction allows for the production of a virtually infinite number of slices from a finite number of projection views of a subject. If the reconstructed image volume is viewed in toto, and the three-dimensional (3D) impulse response is accurately known, then it is possible to solve the inverse problem (deconvolution) using canonical image restoration methods (such as Wiener filtering or solution by conjugate gradient least squares iteration) by extension to three dimensions in either the spatial or the frequency domains. This dissertation presents modified direct and iterative restoration methods for solving the inverse tomosynthetic imaging problem in 3D. The significant blur artifact that is common to tomosynthetic reconstructions is deconvolved by solving for the entire 3D image at once. The 3D impulse response is computed analytically using a fiducial reference schema as realized in a robust, self-calibrating solution to generalized tomosynthesis. 3D modulation transfer function analysis is used to characterize the tomosynthetic resolution of the 3D reconstructions. The relevant clinical application of these methods is 3D imaging for brachytherapy source localization. Conventional localization schemes for brachytherapy implants using orthogonal or stereoscopic projection radiographs suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking (reported errors: 2-4 mm) and dosimetric inaccuracy. 3D image reconstruction (using a well-chosen projection sampling scheme) and restoration of a prostate brachytherapy phantom is used for testing. The approaches presented in this work localize source centroids with submillimeter error in two Cartesian dimensions and just over one millimeter error in the third
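The Wiener-filter deconvolution mentioned above, extended to three dimensions, takes the standard frequency-domain form (notation assumed for illustration, not taken from the dissertation):

```latex
\hat{F}(u,v,w) \;=\; \frac{H^{*}(u,v,w)}{\lvert H(u,v,w)\rvert^{2} + K}\, G(u,v,w)
```

where \(G\) is the spectrum of the blurred tomosynthetic reconstruction, \(H\) is the spectrum of the 3D impulse response, and \(K\) is a noise-regularization constant.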

  9. Quantitative imaging methods in osteoporosis.

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.
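DXA-derived BMD is usually reported as a T-score, the number of standard deviations below a young-adult reference mean (a T-score of −2.5 or below defines osteoporosis). A minimal sketch, with hypothetical reference values:

```python
def t_score(bmd_g_cm2, young_mean=1.0, young_sd=0.12):
    """T-score: BMD expressed in SD units from a young-adult reference mean.

    The reference mean/SD here are illustrative; real values are
    skeletal-site- and device-specific.
    """
    return (bmd_g_cm2 - young_mean) / young_sd
```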

  10. An Image Matching Algorithm Integrating Global SRTM and Image Segmentation for Multi-Source Satellite Imagery

    Xiao Ling

    2016-08-01

    This paper presents a novel image matching method for multi-source satellite images, which integrates global Shuttle Radar Topography Mission (SRTM) data and image segmentation to achieve robust and numerous correspondences. The method first generates epipolar lines as a geometric constraint assisted by global SRTM data, after which seed points are selected and matched. To produce more reliable matching results, a region segmentation-based matching propagation is proposed, whereby region segments are extracted by image segmentation and treated as a spatial constraint. Moreover, a similarity measure integrating Distance, Angle and Normalized Cross-Correlation (DANCC), which considers both geometric and radiometric similarity, is introduced to find optimal correspondences. Experiments using typical satellite images acquired from Resources Satellite-3 (ZY-3), Mapping Satellite-1, SPOT-5 and Google Earth demonstrated that the proposed method produces reliable and accurate matching results.
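The NCC component of the DANCC measure is standard normalized cross-correlation; a pure-Python sketch on flattened patch intensities (the Distance and Angle terms, and the weights combining them, are not given in the abstract and are omitted here):

```python
def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized intensity patches.

    Returns a value in [-1, 1]; 1 indicates a perfect match up to an
    affine change of brightness/contrast.
    """
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    var_a = sum((a - mean_a) ** 2 for a in patch_a)
    var_b = sum((b - mean_b) ** 2 for b in patch_b)
    return num / ((var_a * var_b) ** 0.5)
```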

  11. Radiation sources and methods for producing them

    Malson, H.A.; Moyer, S.E.; Honious, H.B.; Janzow, E.F.

    1979-01-01

    The radiation sources consist of a substrate with an electrically conducting, non-radioactive metal surface, onto which a layer of a metal isotope of the scandium group, together with a proportion of non-radioactive binding metal, is coated by electroplating. Besides examples of β sources (147Pm), γ sources (241Am), and neutron sources (252Cf), an α-radiation source (241Am, 244Cm, 238Pu) for smoke detectors is described. Extensive tables and a bibliography are given. (DG)

  12. Cardiac magnetic source imaging based on current multipole model

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the cardiac current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first-order terms. The imaging is realized on a reconstruction plane at the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as a testing source, generates magnetic fields in the measuring plane, which serve as inputs to the cardiac magnetic inverse problem. In a heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. The minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are then selected as the algorithms for inverse computation based on the current multipole model, and the imaging performance of these three inverse methods is compared. In addition, two reconstruction parameters, residual and mean residual, are discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared. (general)
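Of the three inverse algorithms, minimum-norm least-squares has the simplest closed form: for an underdetermined, full-row-rank lead-field matrix A, the source estimate is x = Aᵀ(AAᵀ)⁻¹b. A toy sketch for a 2-row system (a real cardiac lead field is far larger; the matrix here is purely illustrative):

```python
def mnls_2row(A, b):
    """Minimum-norm least-squares solution of A x = b for a 2xN
    full-row-rank matrix A:  x = A^T (A A^T)^{-1} b."""
    n = len(A[0])
    # Gram matrix G = A A^T (2x2)
    G = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    y = [sum(Ginv[i][j] * b[j] for j in range(2)) for i in range(2)]
    # x = A^T y has minimum Euclidean norm among all exact solutions
    return [sum(A[i][k] * y[i] for i in range(2)) for k in range(n)]
```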

  13. Image transmission system using adaptive joint source and channel decoding

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes, as well as to improve the reconstructed image quality. Error-resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Owing to the error-resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder, and the log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
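The feedback step can be sketched as follows: bits the JPEG2000 error-resilience checks mark as known-correct keep their sign with a boosted magnitude, known-erroneous bits have their sign flipped, and the boost is larger at lower SNR. The factor schedule below is purely illustrative; the paper derives its own from decoding statistics:

```python
def reweight_llrs(llrs, feedback, snr_db):
    """Rescale LLRs of bits flagged by the source decoder before the next
    LDPC iteration.  feedback maps bit index -> True (known correct)
    or False (known in error)."""
    weight = 2.0 if snr_db < 2.0 else 1.2   # illustrative schedule
    out = list(llrs)
    for i, correct in feedback.items():
        # Keep the current sign if the bit is known correct, flip it if not
        sign = 1.0 if (llrs[i] >= 0) == correct else -1.0
        out[i] = sign * weight * abs(llrs[i])
    return out
```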

  14. Methods of producing luminescent images

    Broadhead, P.; Newman, G.A.

    1977-01-01

    A method is described for producing a luminescent image in a layer of a binding material in which a thermoluminescent material is dispersed. The layer is heated uniformly to a temperature of 80 to 300 °C and is exposed to luminescence-inducing radiation whilst so heated. The preferred exposing radiation is X-rays, and preferably the thermoluminescent material is insensitive to electromagnetic radiation of wavelength longer than 300 nm. Information concerning preparation of the luminescent material is given in BP 1,347,672; this material has the advantage that at elevated temperatures it shows increased sensitivity compared with room temperature. At temperatures in the range 80 to 150 °C the thermoluminescent material exhibits 'afterglow', allowing the image to persist for several seconds after the X-radiation has ceased, thus allowing the image to be retained for visual inspection in this temperature range. At higher temperatures, however, there is negligible 'afterglow'. The thermoluminescent layers so produced are particularly useful as fluoroscopic screens. The preferred method of heating the thermoluminescent material is described in BP 1,354,149. An example of the application of the method is given. (U.K.)

  15. EEG source imaging during two Qigong meditations.

    Faber, Pascal L; Lehmann, Dietrich; Tei, Shisei; Tsujiuchi, Takuya; Kumano, Hiroaki; Pascual-Marqui, Roberto D; Kochi, Kieko

    2012-08-01

    Experienced Qigong meditators who regularly perform the exercises "Thinking of Nothing" and "Qigong" were studied with multichannel EEG source imaging during their meditations. The intracerebral localization of brain electric activity during the two meditation conditions was compared using sLORETA functional EEG tomography. Differences between conditions were assessed using t statistics (corrected for multiple testing) on the normalized and log-transformed current density values of the sLORETA images. In the EEG alpha-2 frequency, 125 voxels differed significantly; all were more active during "Qigong" than "Thinking of Nothing," forming a single cluster in parietal Brodmann areas 5, 7, 31, and 40, all in the right hemisphere. In the EEG beta-1 frequency, 37 voxels differed significantly; all were more active during "Thinking of Nothing" than "Qigong," forming a single cluster in prefrontal Brodmann areas 6, 8, and 9, all in the left hemisphere. Compared to combined initial-final no-task resting, "Qigong" showed activation in posterior areas whereas "Thinking of Nothing" showed activation in anterior areas. The stronger activity of posterior (right) parietal areas during "Qigong" and anterior (left) prefrontal areas during "Thinking of Nothing" may reflect a predominance of self-reference, attention and input-centered processing in the "Qigong" meditation, and of control-centered processing in the "Thinking of Nothing" meditation.

  16. Digital image processing mathematical and computational methods

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  17. Analysis of the image of pion-emitting sources in the source center-of-mass frame

    Ren, Yanyu; Feng, Qichun; Huo, Lei; Zhang, Jingbo; Liu, Jianli; Tang, Guixin [Harbin Institute of Technology, Department of Physics, Harbin, Heilongjiang (China); Zhang, Weining [Harbin Institute of Technology, Department of Physics, Harbin, Heilongjiang (China); Dalian University of Technology, School of Physics and Optoelectronic Technology, Dalian, Liaoning (China)

    2017-08-15

    In this paper, we try a method to extract the image of the pion-emitting source function in the center-of-mass frame of the source (CMFS). We choose identical pion pairs according to their energy difference and use these pairs to build the correlation function. The purpose is to reduce the effect of ΔEΔt, so that the imaging result tends toward the real source function. We examine the effectiveness of this method by comparing its results with real source functions extracted directly from models. (orig.)

  18. Open source tools for standardized privacy protection of medical images

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHIs should be completely removed from the images according to the respective privacy regulations, but some basic and alleviated data is usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
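The de-identification step, replacing PHI with values that remain usable for image interpretation and patient indexing, can be sketched independently of DICOM I/O. The tag policy below is a hypothetical illustration (real profiles follow DICOM PS3.15); with DCMTK or a similar toolkit the same logic would be applied to actual data elements:

```python
# Hypothetical replacement policy: remove identity, but keep data that
# imaging diagnosis and indexing still need (e.g. the birth year).
PHI_POLICY = {
    "PatientName": lambda v: "ANONYMOUS",
    "PatientID": lambda v: "SUBJ-%08X" % (abs(hash(v)) % 16**8),
    "PatientBirthDate": lambda v: v[:4] + "0101",  # keep year only
}

def deidentify(dataset):
    """Return a copy of a tag->value mapping with PHI replaced."""
    out = dict(dataset)
    for tag, rule in PHI_POLICY.items():
        if tag in out:
            out[tag] = rule(out[tag])
    return out
```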

  19. A method of image improvement in three-dimensional imaging

    Suto, Yasuzo; Huang, Tewen; Furuhata, Kentaro; Uchino, Masafumi.

    1988-01-01

    In general, image interpolation is required when the surface configurations of structures such as bones and organs are three-dimensionally reconstructed from multi-slice CT images. Image interpolation is a processing method whereby an artificial image is inserted between two adjacent slices to make the spatial resolution equal, in appearance, to the slice resolution. Such interpolation makes it possible to increase the quality of the constructed three-dimensional image. In our newly developed algorithm, we convert the current and subsequent slice images into distance images and generate the interpolated image from these two distance images. As a result, three-dimensional images with better quality have been constructed compared with the previous method. (author)
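The distance-image interpolation described above is essentially shape-based interpolation: each binary slice is converted to a signed distance map, the two maps are averaged, and the average is thresholded at zero. A one-dimensional sketch of the idea (the paper works on full 2D slices):

```python
def signed_distance(row):
    """Signed distance of each cell to the object boundary:
    positive inside the object (1s), negative outside (0s)."""
    n = len(row)
    dist = []
    for i, v in enumerate(row):
        d = min((abs(i - j) for j in range(n) if row[j] != v), default=n)
        dist.append(d if v else -d)
    return dist

def interpolate_slice(row_a, row_b):
    """Intermediate slice: threshold the averaged signed distances."""
    da, db = signed_distance(row_a), signed_distance(row_b)
    return [1 if (x + y) / 2 >= 0 else 0 for x, y in zip(da, db)]
```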

  20. Improved Mirror Source Method in Roomacoustics

    Mechel, F. P.

    2002-10-01

    Most authors in room acoustics regard the mirror source method (MS-method) as the only exact method to evaluate sound fields in auditoria, yet evidently nobody applies it. The reason for this discrepancy is the abundantly high number of mirror sources reported as necessary in the literature, although such estimates are mostly used to justify more or less heuristic modifications of the MS-method. The present, intentionally tutorial article accentuates the analytical foundations of the MS-method, whereby the number of needed mirror sources is already reduced. Further, the task of field evaluation in three-dimensional spaces is reduced to a sequence of tasks in two-dimensional room edges. This not only allows easier geometrical computations in two dimensions, but the sound field in a corner area can also be represented by a single (directional) source sitting on the corner line, so that only this "corner source" must be mirror-reflected in the further process. This procedure drastically reduces the number of needed equivalent sources. Finally, the traditional MS-method is not applicable in rooms with convex corners (where the angle between the corner flanks, measured on the room side, exceeds 180°). In such cases, the MS-method is combined below with the second principle of superposition (PSP), which reduces the scattering task at convex corners to two sub-tasks between one flank and the median plane of the room wedge, i.e., always in concave corner areas where the MS-method can be applied.
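The basic MS-method operation, reflecting the source across each wall plane, is easy to state for a rectangular room; a minimal sketch of the first-order image sources (higher orders are obtained by reflecting these again):

```python
def first_order_images(src, room):
    """First-order mirror sources of a point source in a rectangular room
    with walls at 0 and L along each axis.  src=(x,y,z), room=(Lx,Ly,Lz)."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # reflect across the wall plane
            images.append(tuple(img))
    return images
```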

  1. Radiation Source Mapping with Bayesian Inverse Methods

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods for analyzing these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma-peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP), since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source-map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model that maps the state space to the response space. The method is tested by simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true values. The chi-square value of the predicted source was within a factor of five of the value expected from the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution.
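The adjoint-flux linear model maps a candidate source vector to detector responses, and the quoted chi-square compares those responses to the measurements; a minimal sketch (the matrix A here stands in for the adjoint-derived response matrix):

```python
def chi_square(A, source, measured, sigma):
    """Chi-square misfit of the linear detector-response model d = A s."""
    predicted = [sum(a * s for a, s in zip(row, source)) for row in A]
    return sum(((d - m) / e) ** 2
               for d, m, e in zip(predicted, measured, sigma))
```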

  2. A simple algorithm for estimation of source-to-detector distance in Compton imaging

    Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.

    2008-01-01

    Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is on estimating the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. The algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data for the same imager. Results show that this method can be applied successfully to estimate Z, and that it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum likelihood estimation because it is based on simple back projections of Compton scatter data.
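For a reconstructed image that is approximately an on-axis disk of radius R, the subtended solid angle fixes the distance Z in closed form; a sketch of that inversion (the actual algorithm compares solid angles over a range of candidate distances rather than inverting analytically):

```python
import math

def disk_solid_angle(R, Z):
    """Solid angle (sr) subtended by a disk of radius R at on-axis distance Z."""
    return 2 * math.pi * (1 - Z / math.hypot(Z, R))

def distance_from_solid_angle(R, omega):
    """Invert the disk solid-angle formula to recover the distance Z."""
    c = 1 - omega / (2 * math.pi)   # c = cos(half-angle) = Z / sqrt(Z^2 + R^2)
    return R * c / math.sqrt(1 - c * c)
```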

  3. Seismic reflection imaging with conventional and unconventional sources

    Quiros Ugalde, Diego Alonso

    This manuscript reports the results of research using both conventional and unconventional energy sources as well as conventional and unconventional analysis to image crustal structure using reflected seismic waves. The work presented here includes the use of explosions to investigate the Taiwanese lithosphere, the use of 'noise' from railroads to investigate the shallow subsurface of the Rio Grande rift, and the use of microearthquakes to image subsurface structure near an active fault zone within the Appalachian mountains. Chapter 1 uses recordings from the land refraction and wide-angle reflection component of the Taiwan Integrated Geodynamic Research (TAIGER) project. The most prominent reflection feature imaged by these surveys is an anomalously strong reflector found in northeastern Taiwan. The goal of this chapter is to analyze the TAIGER recordings and to place the reflector into a geologic framework that fits with the modern tectonic kinematics of the region. Chapter 2 uses railroad traffic as a source for reflection profiling within the Rio Grande rift. Here the railroad recordings are treated in an analogous way to Vibroseis recordings. These results suggest that railroad noise in general can be a valuable new tool in imaging and characterizing the shallow subsurface in environmental and geotechnical studies. In chapters 3 and 4, earthquakes serve as the seismic imaging source. In these studies the methodology of Vertical Seismic Profiling (VSP) is borrowed from the oil and gas industry to develop reflection images. In chapter 3, a single earthquake is used to probe a small area beneath Waterboro, Maine. In chapter 4, the same method is applied to multiple earthquakes to take advantage of the increased redundancy that results from multiple events illuminating the same structure. 
The latter study demonstrates how dense arrays can be a powerful new tool for delineating, and monitoring temporal changes of deep structure in areas characterized by significant

  4. Method of assessing heterogeneity in images

    Jacob, Richard E.; Carson, James P.

    2016-08-23

    A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram analysis or correlogram analysis, is performed of the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
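The variogram analysis mentioned above measures how dissimilarity grows with spatial separation; a one-dimensional empirical (semi)variogram sketch:

```python
def empirical_variogram(values, max_lag):
    """Semivariance gamma(h) = mean squared difference / 2 at each lag h.

    A flat, low variogram indicates a spatially homogeneous region.
    """
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gamma[h] = sum(diffs) / (2 * len(diffs))
    return gamma
```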

  5. Accommodating multiple illumination sources in an imaging colorimetry environment

    Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.

    2000-03-01

    Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.

  6. CMP reflection imaging via interferometry of distributed subsurface sources

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers, which yield CMP gathers that can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 were processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses, and can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.
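
The core interferometric step — turning passive recordings into a virtual shot gather — can be sketched as follows (NumPy assumed; the impulse traces and the function name `virtual_trace` are ours): cross-correlating two receivers' recordings of the same buried source recovers the inter-receiver delay without knowing the source location or origin time.

```python
import numpy as np

def virtual_trace(rec_a, rec_b):
    """Cross-correlate two receiver recordings of the same (unknown)
    subsurface source. The lag of the correlation peak approximates
    the travel time between the receivers, as if receiver A were a
    'virtual shot' recorded at receiver B."""
    n = len(rec_a)
    corr = np.correlate(rec_b, rec_a, mode="full")
    lags = np.arange(-n + 1, n)
    return lags, corr

# The same wavelet (here an impulse) arrives at sample 20 on receiver A
# and at sample 32 on receiver B; neither the source position nor its
# origin time enters the computation.
n = 128
a = np.zeros(n); a[20] = 1.0
b = np.zeros(n); b[32] = 1.0
lags, corr = virtual_trace(a, b)
print(lags[np.argmax(corr)])  # 12: the inter-receiver delay in samples
```

In practice such virtual traces, gathered over many aftershocks, are sorted into CMP gathers and stacked exactly as the abstract describes.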

  7. Backscatter absorption gas imaging systems and light sources therefore

    Kulp, Thomas Jan [Livermore, CA; Kliner, Dahv A. V. [San Ramon, CA; Sommers, Ricky [Oakley, CA; Goers, Uta-Barbara [Campbell, NY; Armstrong, Karla M [Livermore, CA

    2006-12-19

    The location of gases that are not visible to the unaided human eye can be determined using tuned light sources that spectroscopically probe the gases and cameras that can provide images corresponding to the absorption of the gases. The present invention is a light source for a backscatter absorption gas imaging (BAGI) system, and a BAGI system incorporating that light source, which can be used to remotely detect and produce images of "invisible" gases. The inventive light source has a light-producing element, an optical amplifier, and an optical parametric oscillator to generate wavelength-tunable light in the IR. By using a multi-mode light source and an amplifier that operates with 915 nm pump sources, the power consumption of the light source is reduced to a level that can be supplied by batteries for long periods of time. In addition, the light source is tunable over the absorption bands of many hydrocarbons, making it useful for detecting hazardous gases.

  8. Open-source software platform for medical image segmentation applications

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to different real-case medical image segmentation scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  9. COMPARISON OF IMAGE ENHANCEMENT METHODS FOR CHROMOSOME KARYOTYPE IMAGE ENHANCEMENT

    Dewa Made Sri Arsa

    2017-02-01

    The chromosome is a set of DNA structures that carry information about our life. This information can be obtained through karyotyping, a process that requires a clear image so the chromosomes can be evaluated well. Chromosome images therefore need preprocessing, namely image enhancement. The process starts with background removal, cleaning the image of background color, followed by image enhancement proper. This paper compares several image enhancement methods: Histogram Equalization (HE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), Histogram Equalization with 3D Block Matching (HE+BM3D), and basic unsharp masking. We examine and discuss the best method for enhancing chromosome images. To evaluate the methods, the original image was degraded by adding noise and blur. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are used to measure method performance. The output of each enhancement method is compared with the result of professional karyotyping software, Ikaros by MetaSystems. Based on the experimental results, the HE+BM3D method gives stable results on both the noised and the blurred image scenarios.
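
The PSNR figure of merit used above is easy to reproduce (a minimal sketch assuming NumPy; the toy images are ours): higher values mean the enhanced or degraded image is closer to the reference.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a degraded (or enhanced) version of it."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)
print(psnr(clean, clean))  # inf
print(psnr(clean, noisy) > psnr(clean, np.zeros_like(clean)))  # True
```

SSIM adds a structural term on top of this pixelwise error, which is why the paper reports both.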

  10. Double-compression method for biomedical images

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of image compression factors for JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random noise image is presented.

  11. Optoelectronic imaging of speckle using image processing method

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image processing procedure for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) reduce the effect of noise; thresholding segmentation is likewise based on the heat equation with PDEs; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.

  12. Electrophysiological Source Imaging: A Noninvasive Window to Brain Dynamics.

    He, Bin; Sohrabpour, Abbas; Brown, Emery; Liu, Zhongming

    2018-06-04

    Brain activity and connectivity are distributed in the three-dimensional space and evolve in time. It is important to image brain dynamics with high spatial and temporal resolution. Electroencephalography (EEG) and magnetoencephalography (MEG) are noninvasive measurements associated with complex neural activations and interactions that encode brain functions. Electrophysiological source imaging estimates the underlying brain electrical sources from EEG and MEG measurements. It offers increasingly improved spatial resolution and intrinsically high temporal resolution for imaging large-scale brain activity and connectivity on a wide range of timescales. Integration of electrophysiological source imaging and functional magnetic resonance imaging could further enhance spatiotemporal resolution and specificity to an extent that is not attainable with either technique alone. We review methodological developments in electrophysiological source imaging over the past three decades and envision its future advancement into a powerful functional neuroimaging technology for basic and clinical neuroscience applications.

  13. Color image definition evaluation method based on deep learning method

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and image labels are then used to train the BP neural network, finally achieving color image definition evaluation. The method is tested on images from the CSIQ database, blurred at different levels to yield 4,000 images after processing, divided into three categories, each representing a blur level. Of each 400 samples, 300 are used to train the VGG16 features and the BP neural network, and the remaining 100 are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set: 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.

  14. Analytic sensing for multi-layer spherical models with application to EEG source imaging

    Kandaswamy, Djano; Blu, Thierry; Van De Ville, Dimitri

    2013-01-01

    Source imaging maps boundary measurements back to underlying generators within the domain; e.g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp, as in electroencephalography (EEG). Fitting such a parametric source model is non-linear in the positions of the sources, and renewed interest in mathematical imaging has led to several promising approaches. One important step in these methods is the application of a sensing principle that ...

  15. Nuclear magnetic resonance imaging method

    Johnson, G.; MacDonald, J.; Hutchison, S.; Eastwood, L.M.; Redpath, T.W.T.; Mallard, J.R.

    1984-01-01

    A method of deriving three dimensional image information from an object using nuclear magnetic resonance signals comprises subjecting the object to a continuous, static magnetic field and carrying out the following set of sequential steps: 1) exciting nuclear spins in a selected volume (90° pulse); 2) applying non-aligned first, second and third gradients of the magnetic field; 3) causing the spins to rephase periodically by reversal of the first gradient to produce spin echoes, and applying pulses of the second gradient prior to every read-out of an echo signal from the object, to differently encode the spins in the second gradient direction for each read-out signal. Steps 1-3 are then successively repeated with different values of the third gradient, with a recovery interval between successive sets of steps. Only alternate echoes are read out; the other echoes are time-reversed and ignored for convenience. The resulting signals are appropriately sampled, set out in an array and subjected to three dimensional Fourier transformation. (author)
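
The final reconstruction step — a three dimensional Fourier transform of the sampled echo array — can be sketched as follows (NumPy assumed; the toy spin-density object is ours): if the encoded echoes sample k-space fully, an inverse 3D FFT recovers the object.

```python
import numpy as np

# Toy spin-density distribution; the phase- and frequency-encoded echo
# samples fill a 3D k-space array, one line per excitation/read-out.
obj = np.zeros((16, 16, 16))
obj[4:8, 6:10, 5:9] = 1.0

kspace = np.fft.fftn(obj)          # what the encoded echo signals sample
recon = np.fft.ifftn(kspace).real  # 3D Fourier reconstruction

print(np.allclose(recon, obj))  # True
```

The two gradient-encoding loops in the patented sequence correspond to filling the second and third axes of this array, one k-space line per read-out.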

  16. Computational methods in molecular imaging technologies

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI and CT) images using state-of-the-art computational image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  17. Imaging of fast-neutron sources using solid-state track-recorder pinhole radiography

    Ruddy, F.H.; Gold, R.; Roberts, J.H.; Kaiser, B.J.; Preston, C.C.

    1983-08-01

    Pinhole imaging methods are being developed and tested for potential future use in imaging the intense neutron source of the Fusion Materials Irradiation Test (FMIT) Facility. Previously reported, extensive calibration measurements of the proton, neutron, and alpha particle response characteristics of CR-39 polymer solid state track recorders (SSTRs) are being used to interpret the results of imaging experiments using both charged particle and neutron pinhole collimators. High resolution neutron pinhole images of a 252Cf source have been obtained in the form of neutron-induced proton recoil tracks in CR-39 polymer SSTRs. These imaging experiments are described as well as their potential future applications to FMIT.

  18. Presurgical mapping with magnetic source imaging. Comparisons with intraoperative findings

    Roberts, T.P.L.; Ferrari, P.; Perry, D.; Rowley, H.A.; Berger, M.S.

    2000-01-01

    We compare noninvasive preoperative mapping with magnetic source imaging to intraoperative cortical stimulation mapping. These techniques were directly compared in 17 patients who underwent preoperative and postoperative somatosensory mapping of a total of 22 comparable anatomic sites (digits, face). Our findings are presented in the context of previous studies that used magnetic source imaging and functional magnetic resonance imaging as noninvasive surrogates of intraoperative mapping for the identification of sensorimotor and language-specific brain functional centers in patients with brain tumors. We found that magnetic source imaging results were reasonably concordant with intraoperative mapping findings in over 90% of cases, and that concordance could be defined as 'good' in 77% of cases. Magnetic source imaging therefore provides a viable, if coarse, identification of somatosensory areas and, consequently, can guide and reduce the time taken for intraoperative mapping procedures. (author)

  19. System and method for image registration of multiple video streams

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  20. Random laser illumination: an ideal source for biomedical polarization imaging?

    Carvalho, Mariana T.; Lotay, Amrit S.; Kenny, Fiona M.; Girkin, John M.; Gomes, Anderson S. L.

    2016-03-01

    Imaging applications increasingly require light sources with high spectral density (power over spectral bandwidth). This has led in many cases to the replacement of conventional thermal light sources with bright light-emitting diodes (LEDs), lasers and superluminescent diodes. Lasers and superluminescent diodes appear to be ideal light sources due to their narrow bandwidth and power; however, in the case of full-field imaging, their spatial coherence leads to coherent artefacts, such as speckle, that corrupt the image. LEDs, in contrast, have lower spatial coherence and thus seem the natural choice, but they have low spectral density. Random lasers are an unconventional type of laser that can be engineered to provide low spatial coherence with high spectral density. These characteristics make them potential sources for biological imaging applications where specific absorption and reflection are the characteristics required for state-of-the-art imaging. In this work, a random laser (RL) is used to demonstrate speckle-free full-field polarization-dependent imaging in an epi-illumination configuration. We compare LED and RL illumination by analysing the resulting images, demonstrating that RL illumination produces an imaging system with higher performance (image quality and spectral density) than that provided by LEDs.

  1. Virtual ultrasound sources in high-resolution ultrasound imaging

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2002-01-01

    … beamforming procedure for 3D ultrasound imaging. The position of the virtual source and the created waveform are investigated with simulation and with pulse-echo measurements. There is good agreement between the estimated wavefront and the theoretically fitted one. Several examples of the use of virtual source elements are considered. Using SAF on data acquired for conventional linear array imaging improves the penetration depth for the particular imaging situation from 80 to 110 mm. The independent use of virtual source elements in the elevation plane decreases the respective size of the point spread …

  2. Review methods for image segmentation from computed tomography images

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process when aiming for accuracy, automation and robustness, especially in medical images. Many segmentation methods can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems they incur will be defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation. This paper can serve as a guide for researchers choosing a suitable segmentation method, especially for segmenting images from CT scans.

  3. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
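
The GCV criterion for choosing the regularization parameter can be sketched for a generic Tikhonov problem (NumPy assumed; `gcv_score` and the toy linear system are ours, not the authors' lensing code):

```python
import numpy as np

def gcv_score(A, b, lam):
    """Generalized cross validation score for the Tikhonov problem
    min ||Ax - b||^2 + lam*||x||^2, computed via the SVD of A:
    GCV(lam) = ||residual||^2 / (m - effective dof)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    filt = s**2 / (s**2 + lam)               # Tikhonov filter factors
    residual = b - U @ (filt * (U.T @ b))
    eff_dof = np.sum(filt)                    # effective degrees of freedom
    return np.sum(residual**2) / (len(b) - eff_dof) ** 2

# Choose lam by minimizing GCV on a toy linear inverse problem.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
b = A @ x_true + rng.normal(scale=0.5, size=50)
lams = np.logspace(-4, 4, 30)
scores = [gcv_score(A, b, l) for l in lams]
best = lams[int(np.argmin(scores))]
print(best)
```

The effective-degrees-of-freedom term is the quantity the paper uses to estimate the number of source degrees of freedom; UPRE follows the same pattern with a different denominator.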

  4. Method for producing uranium atomic beam source

    Krikorian, O.H.

    1976-01-01

    A method is described for producing a beam of neutral uranium atoms by vaporizing uranium from a compound UMx heated to produce U vapor from an M boat or from some other suitable refractory container such as a tungsten boat, where M is a metal whose vapor pressure is negligible compared with that of uranium at the vaporization temperature. The compound may be, for example, the uranium-rhenium compound URe2. An evaporation rate in excess of about 10 times that of conventional uranium beam sources is produced.

  5. Future prospects of imaging at spallation neutron sources

    Strobl, M.

    2009-01-01

    The advent of state-of-the-art spallation neutron sources is a major step forward in efficient neutron production for most neutron scattering techniques. Although they provide lower time-averaged neutron flux than high-flux reactor sources, different instrumental techniques can derive advantage from the pulsed time structure of the available flux, which can be translated into energy, or wavelength, resolution. Conventional neutron imaging, on the other hand, relies on an intense continuous beam flux and hence falls short of profiting from the new development. Nevertheless, some recently developed novel imaging techniques require energy resolution, and some can benefit from it. The impact of the emerging spallation sources on different imaging techniques is investigated, ways to benefit are identified where possible, and prospects of future imaging instruments and possible options and layouts at a spallation neutron source are discussed and outlined.

  6. Gamma-ray Imaging Methods

    Vetter, K; Mihailescu, L; Nelson, K; Valentine, J; Wright, D

    2006-10-05

    In this document we discuss specific implementations for gamma-ray imaging instruments including the principle of operation and describe systems which have been built and demonstrated as well as systems currently under development. There are several fundamentally different technologies each with specific operational requirements and performance trade offs. We provide an overview of the different gamma-ray imaging techniques and briefly discuss challenges and limitations associated with each modality (in the appendix we give detailed descriptions of specific implementations for many of these technologies). In Section 3 we summarize the performance and operational aspects in tabular form as an aid for comparing technologies and mapping technologies to potential applications.

  7. Neutron Imaging at Compact Accelerator-Driven Neutron Sources in Japan

    Yoshiaki Kiyanagi

    2018-03-01

    Neutron imaging has been recognized as very useful for investigating the inside of materials and products that cannot be seen by X-rays. New imaging methods using the pulsed structure of accelerator-based neutron sources have been developed, including at compact accelerator-driven neutron sources, and have opened new application fields in neutron imaging. The world's first dedicated imaging instrument at a pulsed neutron source was constructed at J-PARC in Japan owing to the development of such new methods. The usefulness of compact accelerator-driven neutron sources in neutron science was then recognized, and such facilities were newly constructed in Japan. Now, existing and new sources are used for neutron imaging. Traditional imaging and newly developed pulsed neutron imaging methods such as Bragg edge transmission have been applied to various fields using compact and large neutron facilities. Here, the compact accelerator-driven neutron sources used for imaging in Japan are introduced and some of their activities are presented.

  8. A trial fabrication of activity standard surface sources and positional standard surface sources for an imaging plate system

    Sato, Yasushi; Hino, Yoshio; Yamada, Takahiro; Matsumoto, Mikio

    2003-01-01

    An imaging plate system can detect low-level activity, but quantitative analysis is difficult because there are no adequate standard surface sources. A new fabrication method was developed for standard surface sources by printing on a sheet of paper using an ink-jet printer with inks in which a radioactive material was mixed. The fabricated standard surface sources had high uniformity, high positional resolution, arbitrary shapes, and a broad intensity range. As an application, the standard sources were used for measurement of surface activity. (H. Yokoo)

  9. Source position error influence on industry CT image quality

    Cong Peng; Li Zhipeng; Wu Haifeng

    2004-01-01

    Based on a simulation exercise, the influence of source position error on industrial CT (ICT) image quality was studied, and valuable parameters were obtained for the design of ICT. A vivid container CT image was also acquired from the CT testing system. (authors)

  10. IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVOBLOOD VESSELS USING IMAGE FUSION METHODS

    Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2009-01-01

    We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual CCD chip camera system capable of capturing images at visual and near-infrared spectra, we evaluate three fusion methods in terms of their capability of enhancing the blood vessels while preserving the spectral signature of the original color image. Furthermore, we investigate the possibility of removing hair in the images using a fusion rule based on the "à trous" stationary wavelet decomposition. The method with the best overall performance, with both speed and quality in mind, is the intensity injection method. Using the developed system and the methods presented in this article, it is possible to create images of high visual quality with highly emphasized blood vessels.
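
The winning intensity injection rule can be sketched as follows (a minimal version assuming NumPy; the article's actual implementation may differ in detail): the visible image's intensity channel is replaced by the near-infrared image while the per-pixel chromaticity ratios are preserved.

```python
import numpy as np

def intensity_injection(color_rgb, nir):
    """Replace the intensity (mean of R,G,B) of the visible image with
    the near-infrared image while keeping the original chromaticity."""
    rgb = np.asarray(color_rgb, float)
    nir = np.asarray(nir, float)
    intensity = rgb.mean(axis=2, keepdims=True)
    ratio = nir[..., None] / np.maximum(intensity, 1e-9)
    return np.clip(rgb * ratio, 0.0, 255.0)

# A gray visible pixel lit brightly in NIR becomes brighter but stays gray.
color = np.full((1, 1, 3), 100.0)
nir = np.full((1, 1), 200.0)
fused = intensity_injection(color, nir)
print(fused)  # [[[200. 200. 200.]]]
```

Because veins absorb strongly in the near-infrared, injecting the NIR intensity darkens the vessels in the fused image while the surrounding skin keeps its original color.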

  11. Sparse Source EEG Imaging with the Variational Garrote

    Hansen, Sofie Therese; Stahlhut, Carsten; Hansen, Lars Kai

    2013-01-01

    EEG imaging, the estimation of the cortical source distribution from scalp electrode measurements, poses an extremely ill-posed inverse problem. Recent work by Delorme et al. (2012) supports the hypothesis that distributed source solutions are sparse. We show that direct search for sparse solutions...

  12. Some selected quantitative methods of thermal image analysis in Matlab.

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin area of a human foot and face. The full source code of the developed application is provided as an attachment.

  13. Coded aperture imaging of alpha source spatial distribution

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
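
The decoding step can be sketched with a simple balanced-correlation decoder (NumPy assumed; this is a generic scheme on a random mask, not necessarily the Singer-set algorithm used by the authors): each source point casts a shifted copy of the mask onto the detector, and correlating the recorded pattern with a zero-mean version of the mask recovers the source position.

```python
import numpy as np

def decode(recorded, mask):
    """Correlation decoding for coded aperture imaging: correlate the
    recorded shadowgram with a zero-mean (balanced) mask pattern."""
    balanced = mask - mask.mean()     # zero-mean decoding pattern
    n0, n1 = mask.shape
    out = np.zeros_like(recorded, dtype=float)
    for i in range(n0):
        for j in range(n1):
            out += balanced[i, j] * np.roll(np.roll(recorded, -i, 0), -j, 1)
    return out

rng = np.random.default_rng(3)
mask = (rng.random((16, 16)) < 0.14).astype(float)  # ~14% open fraction
# A point source at (5, 9) casts a cyclically shifted copy of the mask:
recorded = np.roll(np.roll(mask, 5, 0), 9, 1)
decoded = decode(recorded, mask)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(decoded), decoded.shape))
print(peak)  # (5, 9): the point source is recovered at its true position
```

Extended sources superpose many such shifted copies, and the same correlation recovers their distribution, which is what makes the CAI SNR advantage over a single pinhole possible.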

  14. Magnetic resonance spectroscopy as an imaging method

    Bomsdorf, H.; Imme, M.; Jensen, D.; Kunz, D.; Menhardt, W.; Ottenberg, K.; Roeschmann, P.; Schmidt, K.H.; Tschendel, O.; Wieland, J.

    1990-01-01

    An experimental Magnetic Resonance (MR) system with 4 tesla flux density was set up. For that purpose, a data acquisition system and RF coils for resonance frequencies up to 170 MHz were developed. Methods for image-guided spectroscopy as well as spectroscopic imaging focusing on the nuclei 1H and 13C were developed and tested on volunteers and selected patients. The advantages of the high field strength with respect to spectroscopic studies were demonstrated. The development of a new fast imaging technique for the acquisition of scout images, as well as a method for mapping and displaying the magnetic field inhomogeneity in vivo, represent contributions to the optimisation of the experimental procedure in spectroscopic studies. Investigations of the interaction of RF radiation with the exposed tissue allowed conclusions regarding the applicability of MR methods at high field strengths. Methods for display and processing of multi-dimensional spectroscopic imaging data sets were developed, and existing methods for real-time image synthesis were extended. Results achieved in the field of computer-aided analysis of MR images comprised new techniques for image background detection, contour detection and automatic image interpretation, as well as knowledge bases for textual representation of medical knowledge for diagnosis. (orig.) With 82 refs., 3 tabs., 75 figs.

  15. Schwarz method for earthquake source dynamics

    Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie

    2008-01-01

    Dynamic faulting under slip-dependent friction in a linear elastic domain (in-plane and 3D configurations) is considered. The use of an implicit time-stepping scheme (Newmark method) allows much larger values of the time step than the critical CFL time step, and higher accuracy to handle the non-smoothness of the interface constitutive law (slip weakening friction). The finite element form of the quasi-variational inequality is solved by a Schwarz domain decomposition method, by separating the inner nodes of the domain from the nodes on the fault. In this way, the quasi-variational inequality splits into two subproblems. The first one is a large linear system of equations, and its unknowns are related to the mesh nodes of the first subdomain (i.e. lying inside the domain). The unknowns of the second subproblem are the degrees of freedom of the mesh nodes of the second subdomain (i.e. lying on the domain boundary where the conditions of contact and friction are imposed). This nonlinear subproblem is solved by the same Schwarz algorithm, leading to some local nonlinear subproblems of a very small size. Numerical experiments are performed to illustrate convergence in time and space, instability capturing, energy dissipation and the influence of normal stress variations. We have used the proposed numerical method to compute source dynamics phenomena on complex and realistic 2D fault models (branched fault systems)

  16. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

    Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1995-12-01

    This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
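The abstract describes a linear minimum-mean-square-error estimator with statistical priors. As a hedged illustration (the exact OCLIM formulation is in the dissertation and is not reproduced here), a generic Wiener-type MMSE estimate for a linear forward model y = Ax + n with zero-mean Gaussian priors can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined inverse problem: 8 sensors, 20 candidate sources.
# All sizes and covariances below are illustrative assumptions.
n_sensors, n_sources = 8, 20
A = rng.standard_normal((n_sensors, n_sources))   # forward (lead-field) matrix
Cx = 0.5 * np.eye(n_sources)                      # prior source covariance
Cn = 0.01 * np.eye(n_sensors)                     # sensor-noise covariance

x_true = rng.multivariate_normal(np.zeros(n_sources), Cx)
y = A @ x_true + rng.multivariate_normal(np.zeros(n_sensors), Cn)

# Linear MMSE estimate for zero-mean priors:
#   x_hat = Cx A^T (A Cx A^T + Cn)^{-1} y
G = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
x_hat = G @ y
```

For non-identity priors the estimator automatically weights the candidate source locations, which is the mechanism by which prior anatomical and physiological information enters the reconstruction.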

  17. An objective method for High Dynamic Range source content selection

    Narwaria, Manish; Mantel, Claire; Da Silva, Matthieu Perreira

    2014-01-01

    With the aim of improving the immersive experience of the end user, High Dynamic Range (HDR) imaging has been gaining popularity. Therefore, proper validation and performance benchmarking of HDR processing algorithms is a key step towards standardization and commercial deployment. A crucial component of such validation studies is the selection of a challenging and balanced set of source (reference) HDR content. In order to facilitate this, we present an objective method based on the premise that a more challenging HDR scene encapsulates higher contrast, and as a result will show up more ...

  18. Image splitting and remapping method for radiological image compression

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  19. Gadgetron: An Open Source Framework for Medical Image Reconstruction

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-01-01

    This work presents a new open source framework for medical image reconstruction called the “Gadgetron.” The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or “Gadgets” from raw data to reconstructed images ... with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its ...

  20. Subsurface imaging by electrical and EM methods

    NONE

    1998-12-01

    This report consists of 3 subjects. 1) Three-dimensional inversion of resistivity data with topography: In this study, we developed a 3-D inversion method based on the finite element calculation of model responses, which can effectively accommodate irregular topography. In solving the inverse problem, an iterative least-squares approach incorporating smoothness constraints was taken, along with the reciprocity approach in the calculation of the Jacobian. Furthermore, Active Constraint Balancing, which we recently developed to enhance the resolving power of the inverse problem, was also employed. Since our new algorithm accounts for the topography in the inversion step, topography correction is not necessary as a preliminary processing step, and we can expect a more accurate image of the earth. 2) Electromagnetic responses due to a source in the borehole: The effects of borehole fluid and casing on the borehole EM responses should be thoroughly analyzed, since they may affect the resultant image of the earth. In this study, we developed an accurate algorithm for calculating the EM responses containing the effects of borehole fluid and casing when a current-carrying ring is located on the borehole axis. An analytic expression for the primary vertical magnetic field along the borehole axis was first formulated, and the fast Fourier transform is applied to get the EM fields at any location in the whole space. 3) High-frequency electromagnetic impedance survey: At high frequencies the EM impedance becomes a function of the angle of incidence or the horizontal wavenumber, so the electrical properties cannot be readily extracted without first eliminating the effect of the horizontal wavenumber on the impedance. For this purpose, this paper considers two independent methods for accurately determining the horizontal wavenumber, which in turn is used to correct the impedance data. The 'apparent' electrical properties derived from the corrected impedance ...

  1. Method for positron emission mammography image reconstruction

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs); counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method, and then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid by Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
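As a toy illustration of the backprojection mode described above (allocation of counts to the nearest pixel along each line of response), the following sketch accumulates coincidence counts into an image grid. All names are hypothetical, and the geometric-effect and attenuation corrections mentioned in the abstract are omitted.

```python
import numpy as np

def backproject_lors(lors, shape):
    """Toy nearest-pixel backprojection of coincidence counts.

    Each line of response (LOR) is sampled between its two detector-head
    endpoints, and each sample's share of the counts is allocated to the
    nearest image pixel. Corrections from the patent abstract are omitted.
    """
    img = np.zeros(shape)
    for p0, p1, counts in lors:
        p0 = np.asarray(p0, dtype=float)
        p1 = np.asarray(p1, dtype=float)
        # Sample densely enough to hit every pixel the ray crosses.
        n_samples = 2 * int(np.ceil(np.linalg.norm(p1 - p0))) + 1
        for t in np.linspace(0.0, 1.0, n_samples):
            y, x = np.rint(p0 + t * (p1 - p0)).astype(int)
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                img[y, x] += counts / n_samples
    return img

# One horizontal LOR carrying 10 counts across row 2 of a 5x5 grid.
image = backproject_lors([((2, 0), (2, 4), 10.0)], (5, 5))
```

Total counts are conserved by construction, which is a convenient sanity check for any backprojection allocator.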

  2. Linear Methods for Image Interpolation

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
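A minimal sketch of one of the separable methods discussed, bilinear interpolation, assuming a NumPy array and simple clamped boundary handling:

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation of a 2-D array at fractional coordinates.

    A minimal sketch; boundary handling is simple clamping, and more
    elaborate schemes (bicubic, splines, sinc) follow the same separable
    pattern with different 1-D kernels.
    """
    h, w = img.shape
    x0 = int(np.clip(np.floor(x), 0, w - 2))
    y0 = int(np.clip(np.floor(y), 0, h - 2))
    dx, dy = x - x0, y - y0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # centre of the four samples -> 1.5
```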

  3. Influential sources affecting Bangkok adolescent body image perceptions.

    Thianthai, Chulanee

    2006-01-01

    The study of body image-related problems in non-Western countries is still very limited. Thus, this study aims to identify the main influential sources and show how they affect the body image perceptions of Bangkok adolescents. The researcher recruited 400 Thai male and female adolescents in Bangkok, attending high school to freshman level and aged 16-19 years, to participate in this study. Survey questionnaires were distributed to every student and follow-up interviews conducted with 40 students. The findings showed that there are eight main influential sources, ranked from most to least influential: magazines, television, peer group, family, fashion trends, the opposite gender, self-realization and health knowledge. As in studies conducted in Western countries, mass media and peer groups accounted for more than half of the total influence. Bangkok adolescents also internalized Western ideals of beauty through these mass media channels. The processes by which these sources affect body image perception were likewise similar to those reported in the West, with the exception of the familial source. In conclusion, identifying the main influential sources and understanding how they affect adolescent body image perceptions can help prevent adolescents from holding unhealthy views and taking risky measures with their bodies. More studies conducted in non-Western countries are needed in order to build culturally sensitive programs catering to the body image problems occurring in adolescents within each particular society.

  4. On rational approximation methods for inverse source problems

    Rundell, William

    2011-02-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  5. On rational approximation methods for inverse source problems

    Rundell, William; Hanke, Martin

    2011-01-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  6. Digital image envelope: method and evaluation

    Huang, H. K.; Cao, Fei; Zhou, Michael Z.; Mogel, Greg T.; Liu, Brent J.; Zhou, Xiaoqiang

    2003-05-01

    Health data security, characterized in terms of data privacy, authenticity, and integrity, is a vital issue when digital images and other patient information are transmitted through public networks in telehealth applications such as teleradiology. Mandates for ensuring health data security have been extensively discussed (for example, the Health Insurance Portability and Accountability Act, HIPAA), health informatics guidelines (such as the DICOM standard) are beginning to focus on issues of data security, and guidance continues to be published by organizing bodies in healthcare; however, there has not been a systematic method developed to ensure data security in medical imaging. Because data privacy and authenticity are often managed primarily with firewall and password protection, we have focused our research and development on data integrity. We have developed a systematic method of ensuring medical image data integrity across public networks using the concept of the digital envelope. When a medical image is generated, regardless of the modality, three processes are performed: the image signature is obtained, the DICOM image header is encrypted, and a digital envelope is formed by combining the signature and the encrypted header. The envelope is encrypted and embedded in the original image. This assures the security of both the image and the patient ID. The embedded image is encrypted again and transmitted across the network. The reverse process is performed at the receiving site. The result is two digital signatures, one from the original image before transmission, and the second from the image after transmission. If the signatures are identical, there has been no alteration of the image. This paper concentrates on the method and evaluation of the digital image envelope.

  7. Imaging multipole gravity anomaly sources by 3D probability tomography

    Alaia, Raffaele; Patella, Domenico; Mauriello, Paolo

    2009-01-01

    We present a generalized theory of probability tomography applied to the gravity method, assuming that any Bouguer anomaly data set can be caused by a discrete number of monopoles, dipoles, quadrupoles and octopoles. These elementary sources are used to characterize, in as detailed a way as possible and without any a priori assumptions, the shape and position of the most probable minimum structure of the gravity sources compatible with the observed data set, by picking out the location of their centres and the peculiar points of their boundaries related to faces, edges and vertices. A few synthetic examples using simple geometries are discussed in order to demonstrate the notably enhanced resolution power of the new approach, compared with a previous formulation that used only monopoles and dipoles. A field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging the geometry of the minimum gravity structure down to a depth of 8 km below sea level.

  8. Method for nuclear magnetic resonance imaging

    Kehayias, J.J.; Joel, D.D.; Adams, W.H.; Stein, H.L.

    1988-05-26

    A method for in vivo NMR imaging of the blood vessels and organs of a patient, characterized by using a dark dye-like imaging substance consisting essentially of a stable, high-purity concentration of D2O in a solution with water.

  9. Linear Methods for Image Interpolation

    Pascal Getreuer

    2011-09-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  10. The VTTVIS line imaging spectrometer - principles, error sources, and calibration

    Jørgensen, R.N.

    2002-01-01

    Hyperspectral imaging with a spatial resolution of a few mm2 has proved to have a great potential within crop and weed classification and also within nutrient diagnostics. A commonly used hyperspectral imaging system is based on the Prism-Grating-Prism (PGP) principles produced by Specim Ltd ... work describing the basic principles, potential error sources, and/or adjustment and calibration procedures. This report fulfils the need for such documentation with special focus on the system at KVL. The PGP based system has several severe error sources, which should be removed prior to any analysis ... in off-axis transmission efficiencies, diffraction efficiencies, and image distortion have a significant impact on the instrument performance. Procedures removing or minimising these systematic error sources are developed and described for the system built at KVL but can be generalised to other PGP ...

  11. PySE: Python Source Extractor for radio astronomical images

    Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark

    2018-05-01

    PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), grid size, and the forced clean beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE provides a list of found sources; information such as the calculated background image, the source list in different formats (e.g. text, region files importable in DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).

  12. Imaging methods in medical diagnosis

    Krestel, E.

    1981-01-01

    Pictures of parts of the human body or of the whole human body (views, superposition pictures, pictures of body layers, or photographs) are a considerable aid in medical diagnostics. Physics, electrical engineering, and mechanical engineering make picture production possible. Modern electronics and optics offer facilities for picture processing, which influences picture quality. Picture interpretation is the physician's task. The picture-delivering methods applied in medicine include conventional X-ray diagnostics, X-ray computed tomography, nuclear diagnostics, sonography with ultrasound, and endoscopy. Their rapid development and improvement was driven by the development of electronics during the past 20 years. A method presently under discussion and development is nuclear magnetic resonance (Kernspin) tomography. (orig./MG)

  13. PySE: Software for extracting sources from radio images

    Carbone, D.; Garsden, H.; Spreeuw, H.; Swinbank, J. D.; van der Horst, A. J.; Rowlinson, A.; Broderick, J. W.; Rol, E.; Law, C.; Molenaar, G.; Wijers, R. A. M. J.

    2018-04-01

    PySE is a Python software package for finding and measuring sources in radio telescope images. The software was designed to detect sources in the LOFAR telescope images, but can be used with images from other radio telescopes as well. We introduce the LOFAR Telescope, the context within which PySE was developed, the design of PySE, and describe how it is used. Detailed experiments on the validation and testing of PySE are then presented, along with results of performance testing. We discuss some of the current issues with the algorithms implemented in PySE and their interaction with LOFAR images, concluding with the current status of PySE and its future development.

  14. Photoacoustic imaging driven by an interstitial irradiation source

    Trevor Mitcham

    2015-06-01

    Photoacoustic (PA) imaging has shown tremendous promise in providing valuable diagnostic and therapy-monitoring information in select clinical procedures. Many of these pursued applications, however, have been relatively superficial due to difficulties with delivering light deep into tissue. To address this limitation, this work investigates generating a PA image using an interstitial irradiation source with a clinical ultrasound (US) system, which was shown to yield improved PA signal quality at distances beyond 13 mm and to provide improved spectral fidelity. Additionally, interstitially driven multi-wavelength PA imaging was able to provide accurate spectra of gold nanoshells and deoxyhemoglobin in excised prostate and liver tissue, respectively, and allowed for clear visualization of a wire at 7 cm in excised liver. This work demonstrates the potential of using a local irradiation source to extend the depth capabilities of future PA imaging techniques for minimally invasive interventional radiology procedures.

  15. Image quality optimization and evaluation of linearly mixed images in dual-source, dual-energy CT

    Yu Lifeng; Primak, Andrew N.; Liu Xin; McCollough, Cynthia H.

    2009-01-01

    In dual-source dual-energy CT, the images reconstructed from the low- and high-energy scans (typically at 80 and 140 kV, respectively) can be mixed together to provide a single set of non-material-specific images for the purpose of routine diagnostic interpretation. In contrast to the material-specific information that may be obtained from the dual-energy scan data, the mixed images are created with the purpose of providing the interpreting physician a single set of images that have an appearance similar to that of single-energy images acquired at the same total radiation dose. In this work, the authors used a phantom study to evaluate the image quality of linearly mixed images in comparison to single-energy CT images, assuming the same total radiation dose and taking into account the effect of patient size and the dose partitioning between the low- and high-energy scans. The authors first developed a method to optimize the quality of the linearly mixed images such that the single-energy image quality was compared to the best-case image quality of the dual-energy mixed images. Compared to 80 kV single-energy images for the same radiation dose, the iodine CNR in dual-energy mixed images was worse for smaller phantom sizes. However, similar noise and similar or improved iodine CNR relative to 120 kV images could be achieved for dual-energy mixed images using the same total radiation dose over a wide range of patient sizes (up to 45 cm lateral thorax dimension). Thus, for adult CT practices, which primarily use 120 kV scanning, the use of dual-energy CT for the purpose of material-specific imaging can also produce a set of non-material-specific images for routine diagnostic interpretation that are of similar or improved quality relative to single-energy 120 kV scans.
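The linear mixing itself is a one-line operation. The sketch below shows the mix and, as an illustrative criterion only, the weight that minimizes mixed-image noise for independent noise levels; the paper's actual optimization (matching single-energy noise and iodine CNR across patient sizes and dose partitions) is not reproduced here.

```python
import numpy as np

def mix_dual_energy(img_low, img_high, w):
    """Linearly mix low- and high-kV images: w*low + (1 - w)*high."""
    return w * np.asarray(img_low) + (1.0 - w) * np.asarray(img_high)

def noise_optimal_weight(sigma_low, sigma_high):
    """Weight minimizing the mixed-image noise variance
    w^2*sigma_low^2 + (1 - w)^2*sigma_high^2 for independent noise.
    An illustrative criterion only, not the paper's CNR-based optimization.
    """
    return sigma_high**2 / (sigma_low**2 + sigma_high**2)
```

For equal noise in the two inputs the optimal weight is 0.5; a noisier low-kV image (typical at fixed dose partition) pushes the weight toward the high-kV image.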

  16. A NEW IMAGE REGISTRATION METHOD FOR GREY IMAGES

    Nie Xuan; Zhao Rongchun; Jiang Zetao

    2004-01-01

    The proposed algorithm relies on a group of new formulas for calculating tangent slope, so as to capture the angle features of image edge curves. It can use tangent angle features to estimate automatically and fully the rotation parameters of the geometric transform, enabling rough matching of images with large rotation differences. After angle compensation, it searches for matching point sets by a correlation criterion, then calculates the parameters of the affine transform, enabling higher-precision correction of rotation and translation. Finally, it performs precise matching with a relaxation iteration method. Compared with the registration approach based on wavelet direction-angle features, the matching algorithm using tangent features of image edges is more robust and achieves precise registration of various images. Furthermore, it is also helpful in graphics matching.

  17. Historic Methods for Capturing Magnetic Field Images

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  18. Research on multi-source image fusion technology in haze environment

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor can express the details of the shape, color and texture of the target very well, but because of the haze its sharpness is low and some of the target subjects are lost. Because it captures thermal radiation and has strong penetration ability, the infrared image collected by a single sensor can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
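As a hedged sketch of the dehazing ingredient named above, the dark channel of an image (the per-pixel minimum over color channels followed by a local minimum filter) can be computed as follows; the authors' improved variant and the subsequent transmission and atmospheric-light estimation steps are not reproduced.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an H x W x 3 image.

    Per-pixel minimum over the color channels, then a local minimum
    filter over a patch x patch window (edge-padded). In the dark channel
    prior, haze-free regions have near-zero dark channel values.
    """
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    h, w = mins.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A saturated red patch: one channel bright, the others zero, so the
# dark channel is zero everywhere (the haze-free heuristic).
red = np.zeros((4, 4, 3))
red[..., 0] = 1.0
```

A hazy (washed-out, all channels elevated) region would instead produce a large dark channel value, which is what drives the transmission estimate in dehazing.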

  19. Methods for evaluating imaging methods of limited reproducibility

    Krummenauer, F.

    2005-01-01

    Just like new drugs, new or modified imaging methods must be subjected to objective clinical tests, including tests on humans. In this, it must be ensured that the principles of Good Clinical Practice (GCP) are followed with regard to medical, administrative and methodological quality. Innovative methods of clinical epidemiology and medical biometry should be applied from the planning stage to the final statistical evaluation. The author presents established and new methods for the planning, evaluation and reporting of clinical tests of diagnostic methods, especially imaging methods, in clinical medicine and illustrates these by means of current research projects in the various medical disciplines. The strategies presented are summarized in a recommendation based on the concept of phases I-IV of clinical drug testing, in order to enable standardisation of the clinical evaluation of imaging methods. (orig.)

  20. Incorporating priors for EEG source imaging and connectivity analysis

    Xu Lei

    2015-08-01

    Electroencephalography source imaging (ESI) is a useful technique to localize the generators from a given scalp electric measurement and to investigate the temporal dynamics of large-scale neural circuits. By introducing reasonable priors from other modalities, ESI reveals the most probable sources and communication structures at every moment in time. Here, we review the available priors from such techniques as magnetic resonance imaging (MRI), functional MRI (fMRI), and positron emission tomography (PET). Each modality's specific contribution is analyzed from the perspective of source reconstruction. For spatial priors, EEG-correlated fMRI, temporally coherent networks and resting-state fMRI are systematically introduced in the ESI. Moreover, fiber tracking (diffusion tensor imaging, DTI) and neuro-stimulation techniques (transcranial magnetic stimulation, TMS) are also introduced as potential priors, which can help to draw inferences about neuroelectric connectivity in the source space. We conclude that combining EEG source imaging with other complementary modalities is a promising approach towards the study of brain networks in cognitive and clinical neurosciences.

  1. Imaging x-ray sources at a finite distance in coded-mask instruments

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction

  2. Prognostic aspects of imaging method development

    Steinhart, L.

    1987-01-01

    A survey is presented of X-ray diagnostic methods and techniques and possibilities of their further development. Promising methods include direct imaging using digital radiography. In connection with computer technology these methods achieve higher resolution. The storage of obtained images in the computer memory will allow automated processing and evaluation and the use of expert systems. Development is expected to take place especially in computerized tomography using magnetic resonance, and positron computed tomography and other non-radioactive diagnostic methods. (J.B.). 5 figs., 1 tab., 1 ref

  3. Matrix Krylov subspace methods for image restoration

    Khalide Jbilou

    2015-09-01

    In the present paper, we consider some matrix Krylov subspace methods for solving ill-posed linear matrix equations, in particular those arising from the restoration of blurred and noisy images. Applying the well-known Tikhonov regularization procedure leads to a Sylvester matrix equation depending on the Tikhonov regularization parameter. We apply matrix versions of the well-known Krylov subspace methods, namely the least squares (LSQR) and the conjugate gradient (CG) methods, to get approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.
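As a small illustration of the regularization step the abstract describes, here is a dense Tikhonov solve via the normal equations; the paper instead works with the Sylvester (matrix) form and Krylov iterations such as LSQR and CG, which this sketch does not attempt.

```python
import numpy as np

def tikhonov_restore(A, b, lam):
    """Tikhonov-regularized least squares:
    minimize ||A x - b||^2 + lam^2 ||x||^2,
    solved via the normal equations (A^T A + lam^2 I) x = A^T b.

    Dense direct solve for illustration only; for image-sized problems
    one would use iterative (Krylov) methods as in the paper.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```

With a tiny regularization parameter the solution approaches the plain least-squares solution; increasing lam shrinks the solution norm, which is the stabilizing effect exploited for ill-posed restoration problems.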

  4. Microfocus x-ray imaging of traceable pointlike ²²Na sources for quality control

    Hasegawa, T.; Oda, K.; Sato, Y.; Ito, H.; Masuda, S.; Yamada, T.; Matsumoto, M.; Murayama, H.; Takei, H. [Allied Health Sciences, Kitasato University Kitasato 1-15-1, Minami-ku, Sagamihara-shi, Kanagawa 252-0373 (Japan); Positron Medical Center, Tokyo Metropolitan Institute of Gerontology Sakaecho 35-2, Itabashi-ku, Tokyo 173-0015 (Japan); Advanced Industrial Science and Technology (AIST) Central 2, Umezono 1-1-1, Tsukuba-shi, Ibaraki 305-8568 (Japan); Kanagawa Industrial Technology Center (KITC) Shimoimazumi 705-1, Ebina-shi, Kanagawa 243-0435 (Japan); Japan Radioisotope Association (JRIA) Komagome 2-28-45, Bunkyo-ku, Tokyo 113-8941 (Japan); Molecular Imaging Center, National Institute of Radiological Sciences Anagawa 4-9-1, Inage, Chiba 263-8555 (Japan); Graduate School of Medical Sciences, Kitasato University Kitasato 1-15-1, Minami-ku, Sagamihara-shi, Kanagawa 252-0373 (Japan)

    2012-07-15

    Purpose: The purpose of this study is to propose a microfocus x-ray imaging technique for observing the internal structure of small radioactive sources and evaluating geometrical errors quantitatively, and to apply this technique to traceable pointlike ²²Na sources, which were designed for positron emission tomography calibration, for the purpose of quality control of the pointlike sources. Methods: A microfocus x-ray imaging system with a focus size of 0.001 mm was used to obtain projection x-ray images and x-ray CT images of five pointlike source samples, which were manufactured during 2009-2012. The obtained projection and tomographic images were used to observe the internal structure and evaluate geometrical errors quantitatively. Monte Carlo simulation was used to evaluate the effect of possible geometrical errors on the intensity and uniformity of 0.511 MeV annihilation photon pairs emitted from the sources. Results: Geometrical errors were evaluated with sufficient precision using projection x-ray images. CT images were used for observing the internal structure intuitively. As a result, four of the five examined samples were within the tolerance to maintain the total uncertainty below ±0.5%, given the source radioactivity; however, one sample was found to be defective. Conclusions: This quality control procedure is crucial and offers an important basis for using the pointlike ²²Na source as a basic calibration tool. The microfocus x-ray imaging approach is a promising technique for visual and quantitative evaluation of the internal geometry of small radioactive sources.

  5. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms

    Marbjerg, Gerd Høy; Brunskog, Jonas; Jeong, Cheol-Ho

    2015-01-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse...... radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber...
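
The specular half of such combined models rests on mirroring the source in the room walls. A minimal sketch of first-order image sources for a rectangular room (illustrative code only, not taken from the record; the function names are our own):

```python
import math

def first_order_image_sources(src, room):
    """First-order image sources of a point source in a rectangular room
    [0,Lx] x [0,Ly] x [0,Lz]: mirror the source across each of the six walls."""
    images = []
    for axis, L in enumerate(room):
        for wall in (0.0, L):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # reflect across the wall plane
            images.append(tuple(img))
    return images

def delays(src, receiver, room, c=343.0):
    """Propagation delay (seconds) of each first-order specular reflection."""
    return [math.dist(img, receiver) / c
            for img in first_order_image_sources(src, room)]
```

Higher orders follow by mirroring the images again; phase shifts on reflection, as in PARISM, would attach a complex reflection factor to each image.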

  6. EEG source imaging assists decoding in a face recognition task

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...

  7. Advanced imaging technology using carbon nanotube x ray source

    Choi, Hae Young; Seol, Seung Kown; Kim, Jaehoon; Yoo, Seung Hoon; Kim, Jong Uk

    2008-01-01

    Recently, x-ray imaging has become a useful and leading medical diagnostic tool for healthcare professionals to diagnose disease in the human body. CNTs (carbon nanotubes) are used in many applications, such as field emission displays, microwave amplifiers and x-ray sources, because of their suitable electrical, chemical and physical properties; in particular, CNTs serve well as electron emitters for x-ray sources. Conventionally, thermionic x-ray tubes with tungsten filaments are widely employed in biomedical and industrial application fields. However, intrinsic problems such as poor emission efficiency and low imaging resolution limit the use of such tubes. To fulfil current market requirements, specifically in the medical diagnostic field, we have developed a portable and compact CNT-based x-ray source that provides high imaging resolution. The electron source in an x-ray tube should be well focused onto the anode target to generate high-quality x rays. In this study, a Pierce-type x-ray generation module was tested on the basis of its simulation results obtained with the OPERA 3D code. The Pierce-type module is composed of cone-type electrical lenses, and the results vary with the number of lenses and their inner angles. Some preliminary images obtained using the CNT x-ray source are presented, showing a finger bone and teeth of the human body; the trabecular structure is clearly observed in the finger bone. To obtain the finger bone image, a tube current of 250A at a tube voltage of 42 kV was applied. The human tooth image, however, is somewhat unclear because the supplied tube voltage was limited to a maximum of 50 kV in the developed system; normally a tube voltage of 60-70 kV is applied in dental imaging. Considering this, it should be emphasized that a clearer image would be possible with a tube voltage over 60 kV. In this paper, we discuss and compare these experimental results and

  8. Bayesian image processing of data from fuzzy pattern sources

    Liang, Z.; Hart, H.

    1986-01-01

    In some radioisotopic organ imaging applications, a priori or supplementary source information may exist and can be characterized in terms of probability density functions P(φ) of the source elements {φ_j} = φ, where φ_j (j = 1, 2, ..., α) is the estimated average photon emission in voxel j per unit time at t = 0. For example, in cardiac imaging studies it is possible to evaluate the radioisotope concentration of the blood filling the cardiac chambers independently as a function of time by peripheral measurement. The blood concentration information in effect serves to limit amplitude uncertainty to the chamber boundary voxels and thus reduces the extent of amplitude ambiguities in the overall cardiac image reconstruction. The a priori or supplementary information may more generally take the form of spatial, amplitude-dependent probability distributions P(φ): fuzzy patterns superimposed upon a background

  9. Analysis of Non Local Image Denoising Methods

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications of it. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements for automatically estimating the parameters used in these methods.
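
The Non-Local Means idea referred to in the record replaces each pixel by a weighted average of pixels whose surrounding patches look similar. A minimal, unoptimized sketch (our own illustrative code; the parameter names are assumptions, not the authors'):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal Non-Local Means: each output pixel is a weighted average of
    pixels in a search window, weighted by patch similarity exp(-d2/h^2)."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            p = padded[i:i + patch, j:j + patch]      # patch around target pixel
            weights, acc = 0.0, 0.0
            # restrict patch comparisons to a local search window
            for m in range(max(0, i - search), min(rows, i + search + 1)):
                for n in range(max(0, j - search), min(cols, j + search + 1)):
                    q = padded[m:m + patch, n:n + patch]
                    d2 = np.mean((p - q) ** 2)        # patch dissimilarity
                    w = np.exp(-d2 / (h * h))
                    weights += w
                    acc += w * img[m, n]
            out[i, j] = acc / weights
    return out
```

The nested loops are O(pixels × window × patch) and exist only to make the averaging explicit; practical implementations vectorize or use integral images.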

  10. Handbook of mathematical methods in imaging

    2015-01-01

    The Handbook of Mathematical Methods in Imaging provides a comprehensive treatment of the mathematical techniques used in imaging science. The material is grouped into two central themes, namely, Inverse Problems (Algorithmic Reconstruction) and Signal and Image Processing. Each section within the themes covers applications (modeling), mathematics, numerical methods (using a case example) and open questions. Written by experts in the area, the presentation is mathematically rigorous. This expanded and revised second edition contains updates to existing chapters and 16 additional entries on important mathematical methods such as graph cuts, morphology, discrete geometry, PDEs, conformal methods, to name a few. The entries are cross-referenced for easy navigation through connected topics. Available in both print and electronic forms, the handbook is enhanced by more than 200 illustrations and an extended bibliography. It will benefit students, scientists and researchers in applied mathematics. Engineers and com...

  11. Accelerated gradient methods for constrained image deblurring

    Bonettini, S; Zanella, R; Zanni, L; Bertero, M

    2008-01-01

    In this paper we propose a special gradient projection method for the image deblurring problem, in the framework of the maximum likelihood approach. We present the method in a very general form and we give convergence results under standard assumptions. Then we consider the deblurring problem, and the generality of the proposed algorithm allows us to add an energy conservation constraint to the maximum likelihood problem. In order to improve the convergence rate, we devise appropriate scaling strategies and steplength updating rules, especially designed for this application. The effectiveness of the method is evaluated by means of a computational study on astronomical images corrupted by Poisson noise. Comparisons with standard methods for image restoration, such as the expectation maximization algorithm, are also reported.
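
The gradient projection scheme described in the record alternates a gradient step on the data-fit term with a projection onto the constraint set. The sketch below substitutes a least-squares fit with a nonnegativity projection for illustration; the paper itself uses a Poisson maximum-likelihood objective with tailored scaling and steplength rules:

```python
import numpy as np

def projected_gradient_deblur(y, blur, steps=200, lr=0.5):
    """Structural sketch of gradient projection for deblurring:
    minimize 0.5*||blur(x) - y||^2 subject to x >= 0.
    `blur` is assumed to be a symmetric linear operator with norm <= 1
    (e.g. convolution with a normalized symmetric kernel), so lr = 0.5
    is below the 1/L descent threshold."""
    x = y.copy()
    for _ in range(steps):
        r = blur(x) - y            # residual of the forward model
        x = x - lr * blur(r)       # gradient step (blur^T = blur assumed)
        x = np.clip(x, 0.0, None)  # projection onto the feasible set x >= 0
    return x
```

The nonnegativity clip is the "projection" part; the energy conservation constraint of the paper would correspond to projecting onto a further affine set.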

  12. Dual-source CT cardiac imaging: initial experience

    Johnson, Thorsten R.C.; Nikolaou, Konstantin; Wintersperger, Bernd J.; Rist, Carsten; Buhmann, Sonja; Reiser, Maximilian F.; Becker, Christoph R.; Leber, Alexander W.; Ziegler, Franz von; Knez, Andreas

    2006-01-01

    The relation of heart rate and image quality in the depiction of coronary arteries, heart valves and myocardium was assessed on a dual-source computed tomography system (DSCT). Coronary CT angiography was performed on a DSCT (Somatom Definition, Siemens) with high concentration contrast media (Iopromide, Ultravist 370, Schering) in 24 patients with heart rates between 44 and 92 beats per minute. Images were reconstructed over the whole cardiac cycle in 10% steps. Two readers independently assessed the image quality with regard to the diagnostic evaluation of right and left coronary artery, heart valves and left ventricular myocardium for the assessment of vessel wall changes, coronary stenoses, valve morphology and function and ventricular function on a three point grading scale. The image quality ratings at the optimal reconstruction interval were 1.24±0.42 for the right and 1.09±0.27 for the left coronary artery. A reconstruction of diagnostic systolic and diastolic images is possible for a wide range of heart rates, allowing also a functional evaluation of valves and myocardium. Dual-source CT offers very robust diagnostic image quality in a wide range of heart rates. The high temporal resolution now also makes a functional evaluation of the heart valves and myocardium possible. (orig.)

  13. Circular SAR Optimization Imaging Method of Buildings

    Wang Jian-feng

    2015-12-01

    Full Text Available Circular Synthetic Aperture Radar (CSAR) can obtain the entire scattering properties of targets thanks to its 360° observation capability. In this study, an optimal-orientation CSAR imaging algorithm for buildings is proposed, applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering models and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results, and from this comparison the optimal azimuth coherent accumulation angle for CSAR imaging of buildings is obtained. In practice, the scattering directions of buildings are unknown; therefore, we divide the 360° CSAR echo into many overlapping small-angle echoes, each corresponding to a sub-aperture, and then perform an imaging procedure on each sub-aperture. The sub-aperture imaging results are fused incoherently to obtain an all-around image. A polarimetric decomposition method is used to decompose the all-around image and successfully retrieve the edge information of buildings. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.

  14. Method of orthogonally splitting imaging pose measurement

    Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong

    2018-01-01

    In order to meet the pose measurement needs of aviation and machinery manufacturing for high precision, high speed and a wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. The paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses and dual linear CCDs. The dual linear CCDs each acquire one-dimensional image coordinate data of the target point, and the two data sets restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, a nonlinear distortion model is established to correct distortion. Based on cross-ratio invariability, a polynomial equation is established and solved by the least-squares fitting method. After distortion correction, the measurement mathematical model of the vision sensor is established and the intrinsic parameters are determined by calibration. An array of feature points for calibration is built by placing a planar target in different positions several times. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focus distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, high speed and a wide measurement range.

  15. COMPARISON OF DIGITAL IMAGE STEGANOGRAPHY METHODS

    S. A. Seyyedi

    2013-01-01

    Full Text Available Steganography is a method of hiding information in other information of different format (container. There are many steganography techniques with various types of container. In the Internet, digital images are the most popular and frequently used containers. We consider main image steganography techniques and their advantages and disadvantages. We also identify the requirements of a good steganography algorithm and compare various such algorithms.
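
The simplest of the image steganography techniques covered by such comparisons is least-significant-bit (LSB) embedding, sketched below (a toy illustration of the principle only, with no capacity, ordering or robustness considerations):

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit sequence in the least significant bit of the first
    len(bits) pixels of a grayscale (uint8) cover image."""
    stego = cover.flatten().copy()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b   # clear LSB, then set it to b
    return stego.reshape(cover.shape)

def extract_lsb(stego, n):
    """Recover the first n embedded bits."""
    return [int(p) & 1 for p in stego.flatten()[:n]]
```

Each embedded bit changes a pixel value by at most 1, which is what makes the container visually indistinguishable from the cover.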

  16. Study on Processing Method of Image Shadow

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove the disturbance of shadows and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines continual shadow removal algorithms based on integration, on the illumination surface and on texture, respectively introduces their working principles and realization methods, and verifies by test that shadows can be processed effectively.

  17. Improved image alignment method in application to X-ray images and biological images.

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

    Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events; not only within clinical diagnostic settings, but prominently so in the area of planning, consummation and evaluation of surgical and radiotherapeutical procedures. However, registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed; it is evaluated with two types of medical data, biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44 and 88.93% averaged registration accuracy for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least significant difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment, and for X-ray image registration the proposed method performs significantly better than the two benchmark b-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/). cweiwang@mail.ntust.edu.tw.

  18. An Improved Image Contrast Assessment Method

    Yuanyuan Fan

    2013-07-01

    Full Text Available Contrast is an important factor affecting image quality. In order to overcome the problems of local band-limited contrast, a novel image contrast assessment method based on properties of the HVS is proposed. First, fast wavelet decomposition is performed on the low-pass filtered image. Second, each level of band-pass filtered image and its corresponding low-pass filtered image are obtained by processing the wavelet coefficients. Third, the local band-limited contrast is calculated, and the local band-limited contrast entropy is computed according to the definition of entropy. Finally, the contrast entropy of the image is obtained by averaging the local band-limited contrast entropies weighted by CSF coefficients. The experimental results show that the best-contrast image can be accurately identified in image sequences obtained by adjusting the exposure time and by stretching the gray levels; the assessment results accord with human visual characteristics and make up for the shortcomings of local band-limited contrast.

  19. NMR blood vessel imaging method and apparatus

    Riederer, S.J.

    1988-01-01

    A high speed method of forming computed images of blood vessels based on measurements of characteristics of a body is described comprising the steps of: subjecting a predetermined body area containing blood vessels of interest to, successively, applications of a short repetition time (TR) NMR pulse sequence during the period of high blood velocity and then to corresponding applications during the period of low blood velocity for successive heart beat cycles; weighting the collected imaging data from each application of the NMR pulse sequence according to whether the data was acquired during the period of high blood velocity or a period of low blood velocity of the corresponding heart beat cycle; accumulating weighted imaging data from a plurality of NMR pulse sequences corresponding to high blood velocity periods and from a plurality of NMR pulse sequences corresponding to low blood velocity periods; subtracting the weighted imaging data corresponding to each specific phase encoding acquired during the high blood velocity periods from the weighted imaging data for the same phase encoding corresponding to low blood velocity periods in order to compute blood vessel imaging data; and forming an image of the blood vessels of interest from the blood vessel imaging data

  20. A finite-difference contrast source inversion method

    Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M

    2008-01-01

    We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium has made this algorithm very suitable to be used in through-wall imaging and time-lapse inversion applications. Similar to the IE-CSI algorithm the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step in the optimization process. The FD solver is formulated in the frequency domain and it is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method is only dependent on the background medium and the frequency of operation, thus it does not change throughout the inversion process. Therefore, at least for the two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative inversion matrix approach such as a Gauss elimination method for the sparse matrix. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that this FD-CSI algorithm has an excellent performance for inverting inhomogeneous objects embedded in an inhomogeneous background medium

  1. An electronically tunable ultrafast laser source applied to fluorescence imaging and fluorescence lifetime imaging microscopy

    Dunsby, C; Lanigan, P M P; McGinty, J; Elson, D S; Requejo-Isidro, J; Munro, I; Galletly, N; McCann, F; Treanor, B; Oenfelt, B; Davis, D M; Neil, M A A; French, P M W

    2004-01-01

    Fluorescence imaging is used widely in microscopy and macroscopic imaging applications for fields ranging from biomedicine to materials science. A critical component for any fluorescence imaging system is the excitation source. Traditionally, wide-field systems use filtered thermal or arc-generated white light sources, while point scanning confocal microscope systems require spatially coherent (point-like) laser sources. Unfortunately, the limited range of visible wavelengths available from conventional laser sources constrains the design and usefulness of fluorescent probes in confocal microscopy. A 'hands-off' laser-like source, electronically tunable across the visible spectrum, would be invaluable for fluorescence imaging and provide new opportunities, e.g. automated excitation fingerprinting and in situ measurement of excitation cross-sections. Yet more information can be obtained using fluorescence lifetime imaging (FLIM), which requires that the light source be pulsed or rapidly modulated. We show how a white light continuum, generated by injecting femtosecond optical radiation into a micro-structured optical fibre, coupled with a simple prism-based tunable filter arrangement, can fulfil all these roles as a continuously electronically tunable (435-1150 nm) visible ultrafast light source in confocal, wide-field and FLIM systems

  2. Efficient image enhancement using sparse source separation in the Retinex theory

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.

  3. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  4. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  5. Blind image deconvolution methods and convergence

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather, the basic issue of deconvolvability is explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  6. Gamma-ray imaging of the Quinby sources

    Gregor, J.; Hensley, D.C.

    1996-01-01

    The Quinby sources are alumina cylinders 7 inches in diameter and 8 inches high, doped with weapons-grade plutonium. We describe a computed tomography system for reconstructing three-dimensional images of these sources. Each reconstruction maps the spatial distribution of the internal {sup 241}Am gamma-ray activity and is computed using an iterative expectation-maximization algorithm with detection efficiencies based both on a geometric model of the experimental setup and on attenuation corrections. Constructed about a decade ago, the Quinby sources were to contain uniformly distributed material. However, for some of the sources we have found regions where the plutonium solution tends to be concentrated. The ultimate goal of this work is to provide the basis for self-shielding corrections when analyzing differential dieaway neutron measurements
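
The iterative expectation-maximization reconstruction mentioned in the record is, in its basic emission-tomography form, the ML-EM update. A minimal sketch with a small dense system matrix standing in for the real projector (illustrative only; in the actual system, detection efficiencies and attenuation corrections enter through the matrix A):

```python
import numpy as np

def mlem(A, y, iters=50):
    """Basic ML-EM iteration for emission reconstruction:
    x <- x / (A^T 1) * A^T (y / (A x)), all operations elementwise.
    A maps voxel activities x to expected detector counts y."""
    x = np.ones(A.shape[1])                  # flat nonnegative start
    sens = np.maximum(A.T @ np.ones(A.shape[0]), 1e-12)  # sensitivity A^T 1
    for _ in range(iters):
        proj = np.maximum(A @ x, 1e-12)      # forward projection
        x *= (A.T @ (y / proj)) / sens       # multiplicative EM update
    return x
```

The multiplicative form keeps the estimate nonnegative automatically, which is why EM-type algorithms are common for activity reconstruction.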

  7. The utilization of dual source CT in imaging of polytrauma

    Nicolaou, S. [University of British Columbia, Vancouver General Hospital, Department of Radiology, 899 West 12th Avenue, Vancouver, British Columbia, V5Z1M9 (Canada)], E-mail: savvas.nicolaou@vch.ca; Eftekhari, A.; Sedlic, T.; Hou, D.J.; Mudri, M.J.; Aldrich, John; Louis, L. [University of British Columbia, Vancouver General Hospital, Department of Radiology, 899 West 12th Avenue, Vancouver, British Columbia, V5Z1M9 (Canada)

    2008-12-15

    Despite the growing role of imaging, trauma remains the leading cause of death in people below the age of 45 years in the western industrialized countries. Trauma has been touted as the largest epidemic in the 20th century. The advent of MDCT has been the greatest advance in trauma care in the last 25 years. However, there are still challenges in CT imaging of the polytrauma individual including time restraints, diagnostic errors, radiation dose effects and bridging the gap between anatomy and physiology. This article will analyze these challenges and provide possible solutions offered by the unique design of the dual source CT scanner.

  8. The utilization of dual source CT in imaging of polytrauma

    Nicolaou, S.; Eftekhari, A.; Sedlic, T.; Hou, D.J.; Mudri, M.J.; Aldrich, John; Louis, L.

    2008-01-01

    Despite the growing role of imaging, trauma remains the leading cause of death in people below the age of 45 years in the western industrialized countries. Trauma has been touted as the largest epidemic in the 20th century. The advent of MDCT has been the greatest advance in trauma care in the last 25 years. However, there are still challenges in CT imaging of the polytrauma individual including time restraints, diagnostic errors, radiation dose effects and bridging the gap between anatomy and physiology. This article will analyze these challenges and provide possible solutions offered by the unique design of the dual source CT scanner

  9. Blind source separation of ex-vivo aorta tissue multispectral images.

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-05-01

    Blind Source Separation (BSS) methods aim to decompose a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study; the analysis of skin images for the extraction of melanin and hemoglobin is one example. This paper presents a proof of concept of source separation applied to ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also recover the spectral absorbance of the main tissue components. These spectral signatures were compared against theoretical ones using correlation coefficients, which report values close to 0.9, a good indicator of the method's performance. The correlation coefficients also lead to the identification of the concentration maps according to the evaluated chromophore. The results suggest that multi/hyperspectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue.
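
Of the two algorithms used in the record, Non-negative Matrix Factorization is the easier to sketch: with pixels as rows and spectral bands as columns, V ≈ WH gives abundance maps (columns of W) and spectral signatures (rows of H). A minimal Lee-Seung multiplicative-update version (our own illustrative code, not the authors' implementation):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Minimal multiplicative-update NMF: factor a nonnegative
    pixel-by-band matrix V (n x m) as V ~= W @ H with W (n x r),
    H (r x m) nonnegative, minimizing the Frobenius error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1             # positive random init
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        # Lee-Seung updates; small epsilon guards against division by zero
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Nonnegativity of both factors is what makes the rows of H interpretable as chromophore absorbance spectra rather than arbitrary signed components.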

  10. Radiographic imaging method by gas ionisation

    Eickel, R.; Rheude, A.

    1982-02-01

    The search for a substitute for silver halide film has intensified worldwide owing to the shortage and price increase of silver metal. Gas ionography could be an alternative to the well-known silver film imaging techniques in roentgenology. Therefore the practical basis of the imaging process and of the electrophoretic development was investigated. The technical realisation of this method was demonstrated for two different types of X-ray examination by developing a fully automatic chest changer and a mammography system that can be adapted to commercially available imaging stands. The image quality achieved with these apparatus was evaluated in comparison with conventional film techniques, both in the laboratory and in a clinical trial. (orig.) [de

  11. A multicore based parallel image registration method.

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition scheme that uses K-means clustering to group the landmarks according to the number of available processing cores. This step optimizes memory usage and data transfer. We tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
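    The landmark-grouping step can be sketched as follows; the coordinates and core count are hypothetical, and scikit-learn's KMeans stands in for whatever clustering implementation the authors used:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
landmarks = rng.random((500, 2)) * 256   # hypothetical 2D landmark coordinates
n_cores = 8                              # number of available processing cores

# Group spatially nearby landmarks so each core works on a compact,
# cache-friendly subset (a nonregular partition, unlike a fixed grid).
labels = KMeans(n_clusters=n_cores, n_init=10, random_state=0).fit_predict(landmarks)
partitions = [landmarks[labels == k] for k in range(n_cores)]
print([len(p) for p in partitions])      # landmarks assigned to each core
```

    Unlike a regular grid split, cluster sizes adapt to the landmark density, which is what lets the partition balance memory usage and data transfer across cores.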

  12. Image-reconstruction methods in positron tomography

    Townsend, David W; CERN. Geneva

    1993-01-01

    Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...

  13. Hard X-Ray Flare Source Sizes Measured with the Ramaty High Energy Solar Spectroscopic Imager

    Dennis, Brian R.; Pernak, Rick L.

    2009-01-01

    Ramaty High Energy Solar Spectroscopic Imager (RHESSI) observations of 18 double hard X-ray sources seen at energies above 25 keV are analyzed to determine the spatial extent of the most compact structures evident in each case. The following four image reconstruction algorithms were used: Clean, Pixon, and two visibility-based routines, maximum entropy and forward fit (VFF). All have been adapted for this study to optimize their ability to provide reliable estimates of the sizes of the more compact sources. The source fluxes, sizes, and morphologies obtained with each method are cross-correlated, and the similarities and disagreements are discussed. The full widths at half-maximum (FWHM) of the major axes of the sources, with assumed elliptical Gaussian shapes, are generally well correlated between the four image reconstruction routines and vary from the RHESSI resolution limit of approximately 2" up to approximately 20", with most below 10". The FWHM of the minor axes are generally at or just above the RHESSI limit and hence should be considered unresolved in most cases. The orientation angles of the elliptical sources are also well correlated. These results suggest that the elongated sources are generally aligned along a flare ribbon with the minor axis perpendicular to the ribbon. This is verified for the one flare in our list with coincident Transition Region and Coronal Explorer (TRACE) images. There is evidence for significant extra flux in many of the flares in addition to the two identified compact sources, thus rendering the VFF assumption of just two Gaussians inadequate. A more realistic approximation in many cases would be two line sources with unresolved widths. Recommendations are given for optimizing the RHESSI imaging reconstruction process to ensure that the finest possible details of the source morphology become evident and that reliable estimates can be made of the source dimensions.

  14. Single photon imaging and timing array sensor apparatus and method

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus comprises a photon source for emitting photons at a target and a photon receiver for receiving each photon reflected from the target. The photon receiver determines the reflection time of the photon and the photon's arrival position on the receiver. An analyzer communicatively coupled to the photon receiver generates a three-dimensional image of the object based upon the reflection times and arrival positions.
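    The geometry implied by the claim (reflection time gives range, arrival position gives direction) can be sketched under a simple pinhole-receiver assumption; all names and parameters below are illustrative, not taken from the patent:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def point_from_return(t_round, u, v, focal):
    """Recover a 3D point from the photon round-trip time and its (u, v)
    arrival position on the receiver (hypothetical pinhole model)."""
    r = C * t_round / 2.0                  # one-way range from time of flight
    d = np.array([u, v, focal])
    d = d / np.linalg.norm(d)              # unit direction toward the target
    return r * d

# A photon returning after 20 ns on the optical axis lies 3 m straight ahead
p = point_from_return(t_round=2.0e-8, u=0.0, v=0.0, focal=1.0)
print(p)  # → [0. 0. 3.]
```

    Repeating this for every detected photon yields the cloud of 3D points from which the analyzer assembles the image.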

  15. Radiopharmaceutical chelates and method of external imaging

    Loberg, M.D.; Callery, P.S.; Cooper, M.

    1977-01-01

    A chelate of technetium-99m, cobalt-57, gallium-67, gallium-68, indium-111 or indium-113m and a substituted iminodiacetic acid or an 8-hydroxyquinoline useful as a radiopharmaceutical external imaging agent. The invention also includes preparative methods therefor

  16. Enabling vendor independent photoacoustic imaging systems with asynchronous laser source

    Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.

    2018-02-01

    Channel data acquisition and synchronization between laser excitation and PA signal acquisition are two fundamental hardware requirements for photoacoustic (PA) imaging. Unfortunately, neither is provided by most clinical ultrasound scanners. Therefore, less economical specialized research platforms are generally used, which hinders a smooth clinical transition of PA imaging. In previous studies, we proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to implement PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase information. We synchronize the laser and the ultrasound scanner without communication between them by investigating USPB images of a point-target phantom in two steps. First, the frequency information is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wave-front can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing the image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, the simulation results indicate that the remaining errors after frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers in PA imaging through a software solution.

  17. Phase contrast imaging using a micro focus x-ray source

    Zhou, Wei; Majidi, Keivan; Brankov, Jovan G.

    2014-09-01

    Phase contrast x-ray imaging, a technique to increase the imaging contrast between tissues with similar attenuation coefficients, has been studied since the mid-1990s. This technique makes it possible to reveal fine details of soft tissues and tumors at small-scale resolution. A compact and low-cost phase contrast imaging system using a conventional x-ray source is described in this paper. Using a conventional x-ray source is of great importance because it makes the method usable in hospitals and clinical offices. Simple materials and components are used in the setup to keep the cost in a reasonable and affordable range. The tungsten Kα1 line, with a photon energy of 59.3 keV, was used for imaging. Some of the system design details are discussed, and the method used to stabilize the system is introduced. A chicken thigh bone tissue sample was imaged, followed by a discussion of image quality, image acquisition time, and potential clinical applications. A high-energy x-ray beam can be used in phase contrast imaging; therefore, the radiation dose to the patient can be greatly decreased compared to traditional x-ray radiography.

  18. Probabilistic M/EEG source imaging from sparse spatio-temporal event structure

    Stahlhut, Carsten; Attias, Hagai T.; Wipf, David

    While MEG and EEG source imaging methods must tackle a severely ill-posed problem, their success rests on their ability to constrain the solutions using appropriate priors. In this paper we propose a hierarchical Bayesian model facilitating spatio-temporal patterns through the use of bo...

  19. Open source deformable image registration system for treatment planning and recurrence CT scans

    Zukauskaite, Ruta; Brink, Carsten; Hansen, Christian Rønn

    2016-01-01

    manually contoured eight anatomical regions-of-interest (ROI) twice on pCT and once on rCT. METHODS: pCT and rCT images were deformably registered using the open source software elastix. Mean surface distance (MSD) and Dice similarity coefficient (DSC) between contours were used for validation of DIR...

  20. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits significant potential in many application fields, the results obtained to date on real signals fall short of the theoretical bounds and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression and apply DSC techniques to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation between different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes: one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes and compared with other state-of-the-art 2D and 3D coders. Both schemes achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that remain to be solved to further improve the performance of DSC-based remote sensing systems.

  1. Image correlation method for DNA sequence alignment.

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques; among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded into their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: the query becomes the object and the database the scene. An image correlation process is carried out to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper presents an initial research stage in which results were obtained digitally by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (with variable lengths from 50 to 4500 base pairs) and 100 scenes, each represented by a 100 x 100 image (in total, a one-million-base-pair database), were considered for the image correlation analysis. The results showed that correlation reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST as mutation numbers increased. However, the digital correlation process was a hundred times slower than BLAST. We are currently starting an initiative to evaluate the speed of a real experimental optical correlator; by doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with a computer, the digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
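    The encode-then-correlate idea can be sketched digitally; the gray levels, sequence lengths, and the 1D correlation below are illustrative simplifications of the 2D optical correlation described in the abstract:

```python
import numpy as np

# Hypothetical fixed gray levels per base (any four distinct values work)
GRAY = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def encode(seq, shape):
    """Pack a sequence row-wise into a 2D gray-level image (the 'scene')."""
    img = np.zeros(int(np.prod(shape)))
    img[: len(seq)] = [GRAY[b] for b in seq]
    return img.reshape(shape)

rng = np.random.default_rng(2)
scene_seq = "".join(rng.choice(list("ACGT"), 100))
query_seq = scene_seq[40:60]             # a known subsequence of the scene

scene = encode(scene_seq, (10, 10))

# Correlate the mean-subtracted query against the flattened scene; the
# correlation peak marks the best alignment offset.
s = scene.ravel() - scene.mean()
q = np.array([GRAY[b] for b in query_seq]) - scene.mean()
corr = np.correlate(s, q, mode="valid")
print(int(corr.argmax()))                # best alignment offset
```

    In the optical setup the same inner products would be computed in parallel by the correlator; here they are simulated with `np.correlate`.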

  2. A novel optical gating method for laser gated imaging

    Ginat, Ran; Schneider, Ron; Zohar, Eyal; Nesher, Ofer

    2013-06-01

    For the past 15 years, Elbit Systems has been developing time-resolved active laser-gated imaging (LGI) systems for various applications. Traditional LGI systems are based on highly sensitive gated sensors synchronized to pulsed laser sources. Elbit's proprietary multi-pulse-per-frame method, implemented in its LGI systems, significantly improves imaging quality. A significant characteristic of LGI is its ability to penetrate a disturbing medium such as rain, haze, and some types of fog. Current LGI systems are based on image intensifier (II) sensors, which limit the system in spectral response, image quality, reliability, and cost. A novel proprietary optical gating module was developed at Elbit, removing the dependency of LGI systems on the II. The optical gating module is not bound to a particular radiance wavelength and is positioned between the system optics and the sensor. This optical gating method supports the use of conventional solid-state sensors. By selecting the appropriate solid-state sensor, the new LGI systems can operate at any desired wavelength. In this paper we present the new gating method's characteristics and performance, and its advantages over II-based gating. The use of gated imaging systems is described for a variety of applications, including results from the latest field experiments.

  3. New magnetic resonance imaging methods in nephrology

    Zhang, Jeff L.; Morrell, Glen; Rusinek, Henry; Sigmund, Eric; Chandarana, Hersh; Lerman, Lilach O.; Prasad, Pottumarthi Vara; Niles, David; Artz, Nathan; Fain, Sean; Vivier, Pierre H.; Cheung, Alfred K.; Lee, Vivian S.

    2013-01-01

    Established as a method to study anatomic changes, such as renal tumors or atherosclerotic vascular disease, magnetic resonance imaging (MRI) to interrogate renal function has only recently begun to come of age. In this review, we briefly introduce some of the most important MRI techniques for renal functional imaging, and then review current findings on their use for diagnosis and monitoring of major kidney diseases. Specific applications include renovascular disease, diabetic nephropathy, renal transplants, renal masses, acute kidney injury and pediatric anomalies. With this review, we hope to encourage more collaboration between nephrologists and radiologists to accelerate the development and application of modern MRI tools in nephrology clinics. PMID:24067433

  4. Source imaging of drums in the APNEA system

    Hensley, D.

    1995-01-01

    The APNea System is a neutron assay device utilizing both a passive mode and a differential die-away active mode. The total detection efficiency is not spatially uniform, even for an empty chamber, and a drum matrix in the chamber can severely distort this response. In order to achieve a response that is independent of the way the source material is distributed in a drum, an imaging procedure has been developed which treats the drum as a number of virtual (sub)volumes. Since each virtual volume of source material is weighted with the appropriate instrument parameters (detection efficiency and thermal flux), the final assay result is essentially independent of the actual distribution of the source material throughout the drum and its matrix

  5. Three-dimensional image signals: processing methods

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. Recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting, and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods for processing digital holograms for Internet transmission, together with results.
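    The phase-shift interferometry step mentioned for digital hologram capture can be illustrated with the textbook four-step phase-recovery formula (toy per-pixel data, not an actual hologram):

```python
import numpy as np

# Four-step phase-shifting interferometry: record the interference
# intensity at reference phase shifts of 0, pi/2, pi, 3pi/2 and recover
# the object phase per pixel.
rng = np.random.default_rng(4)
phi = rng.uniform(-np.pi, np.pi, size=(8, 8))   # "true" object phase map
a, b = 1.0, 0.5                                  # background and modulation

# Simulated intensity frames I_k = a + b*cos(phi + k*pi/2)
I = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]

# Standard recovery: tan(phi) = (I4 - I2) / (I1 - I3)
phi_hat = np.arctan2(I[3] - I[1], I[0] - I[2])

print(np.allclose(phi_hat, phi))  # → True
```

    The recovered per-pixel phase map is the "digital hologram" that can then be stored and transmitted like any other array of numbers.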

  6. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial in exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way to estimate the source signature. However, when encountering multimode surface waves, which are common in shallow seismic surveys, strong spurious events appear in the seismic interferometric results. These spurious events introduce errors into the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method, in which multimode surface waves are mode-separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded from the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, whereas strong spurious oscillations occur in the estimated source signature if mode separation is not applied first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating the seismic source signature from shallow seismic shot gathers containing multimode surface waves.
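    The interferometric core of the VRS approach, cross-correlating recordings so that the unknown absolute source timing drops out, can be sketched for a single mode (synthetic Ricker wavelet with hypothetical arrival times):

```python
import numpy as np

# Hypothetical setup: one wavelet recorded at two receivers with a pure
# time delay (single mode, no dispersion), illustrating how correlation
# removes the common source timing.
dt, f0 = 0.001, 25.0
t = np.arange(0, 0.5, dt)

def ricker(t, f0, t0):
    """Ricker (Mexican-hat) wavelet centered at t0."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

rec_a = ricker(t, f0, 0.10)              # arrival at receiver A
rec_b = ricker(t, f0, 0.18)              # same wavelet, 80 ms later at B

# Interferometry: correlate B with A; the peak lag is the inter-receiver
# travel time, independent of the (unknown) absolute source excitation time.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = (xcorr.argmax() - (len(t) - 1)) * dt
print(round(lag, 3))  # → 0.08
```

    With multiple modes present, each mode would contribute its own correlation peak plus cross-mode terms; mode-separating first, as the paper proposes, keeps only the physically meaningful peaks.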

  7. A novel magnetic resonance imaging-compatible motor control method for image-guided robotic surgery

    Suzuki, Takashi; Liao, Hongen; Kobayashi, Etsuko; Sakuma, Ichiro

    2006-01-01

    For robotic surgery assistance systems that use magnetic resonance imaging (MRI) for guidance, electromagnetic interference is a common problem. Image quality is particularly degraded if motors are running during scanning. We propose a novel MRI-compatible method that takes the imaging pulse sequence into account. The motors are driven for a short time while the MRI system is not acquiring signal (i.e., while awaiting relaxation of the protons), so the image does not contain noise from the actuators. The MRI system and motor are synchronized using a radio-frequency pulse signal (8.5 MHz) as the trigger, which is acquired via a special antenna mounted near the scanner. This method can be widely applied because it only receives part of the scanning signal, and neither the hardware nor the software of the MRI system needs to be changed. As a feasibility test, we compared images and signal-to-noise ratios with and without the method, with a piezoelectric motor (a commonly used MRI-compatible actuator) driven during scanning as a noise source. The results showed no deterioration in image quality and demonstrated the benefit of the new method, even though the choice of available scanning sequences is limited. (author)

  8. Diffraction-enhanced imaging at the UK synchrotron radiation source

    Ibison, M.; Cheung, K.C.; Siu, K.; Hall, C.J.; Lewis, R.A.; Hufton, A.; Wilkinson, S.J.; Rogers, K.D.; Round, A.

    2005-01-01

    The Diffraction-Enhanced Imaging (DEI) system, which shares access to Beamline 7.6 on the Daresbury Synchrotron Radiation Source (SRS), is now in its third year of existence. The system was developed under a European Commission grant, PHase Analyser SYstem (PHASY), won during the Fourth Framework. Typical applications continue to be the imaging of small biological specimens, using a monochromatized beam of 12-17 keV, up to 40 mm in width and 1-2 mm in height, although it is planned to investigate other materials as opportunity permits and time becomes available for more routine scientific use. Recent improvements have been made to the optical alignment procedure for setting up the station before imaging: a small laser device can now be set up to send a beam down the X-ray path through the four crystals, and a small photodiode, which has much better signal-to-noise characteristics than the ion chamber normally used for alignment, has been trialled successfully. A 3D tomographic reconstruction capability has recently been developed and tested for DEI projection image sets, and will be applied to future imaging work on the SRS, in conjunction with volume visualization software. The next generation of DEI system, planned to operate at up to 60 keV on an SRS wiggler station, is in its design stage; it will feature much improved mechanics and mountings, especially for angular control, and a simplified alignment procedure to facilitate the necessary sharing of the SRS station

  9. Image reconstruction methods in positron tomography

    Townsend, D.W.; Defrise, M.

    1993-01-01

    In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes. This report reviews the historical and physical basis of medical imaging techniques using positron-emitting radioisotopes. Mathematical methods which enable three-dimensional distributions of radioisotopes to be reconstructed from projection data (sinograms) acquired by detectors suitably positioned around the patient are discussed. The extension of conventional two-dimensional tomographic reconstruction algorithms to fully three-dimensional reconstruction is described in detail. (orig.)

  10. IQM: an extensible and portable open source application for image and signal analysis in Java.

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if no plugin interface or similar option is available. However, neither variant can cover all possible use cases, and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable, and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out of the box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture, based on the three-tier model, is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims at complementing functionality rather than competing with existing open source software. Machine learning can also be integrated into more complex algorithms via the WEKA software package, enabling the development of transparent and robust methods for image and signal analysis.

  11. Calibration methods for ECE systems with microwave sources

    Tubbing, B.J.D.; Kissel, S.E.

    1987-01-01

    The authors investigated the feasibility of two methods for calibrating electron cyclotron emission (ECE) systems, both based on the use of a microwave source. In the first method, called the Antenna Pattern Integration (API) method, the microwave source is scanned in space so as to simulate a large-area blackbody source. In the second method, called the Untuned Cavity (UC) method, an untuned cavity fed by the microwave source is used to simulate a blackbody. For both methods, the hardware required to perform partly automated calibrations was developed. The microwave-based methods were compared with a large-area blackbody calibration on two different ECE systems: a Michelson interferometer and a grating polychromator. The API method was found to be more successful than the UC method. (author)

  12. The Sources and Methods of Engineering Design Requirement

    Li, Xuemeng; Zhang, Zhinan; Ahmed-Kristensen, Saeema

    2014-01-01

    to be defined in a new context. This paper focuses on understanding design requirement sources at the requirement elicitation phase. It aims at proposing an improved classification of design requirement sources that considers emerging markets, and at presenting current methods for eliciting requirements for each source...

  13. Studies on the method of producing radiographic 170Tm source

    Maeda, Sho

    1976-08-01

    A method of producing radiographic 170Tm sources has been studied, including target preparation, neutron irradiation, handling of the irradiated target in the hot cell, and source capsules. On the basis of the results, practical 170Tm radiographic sources (29-49 Ci, with pellets 3 mm in diameter and 3 mm long) were produced on a trial basis by neutron irradiation in the JMTR. (auth.)

  14. Method and apparatus for enhancing radiometric imaging

    Logan, R. H.; Paradish, F. J.

    1985-01-01

    Disclosed is a method and apparatus for enhancing target detection, particularly in the millimeter-wave frequency range, through the use of an imaging radiometer. The radiometer, a passive thermal receiver, detects the reflected and emitted thermal radiation of targets within a predetermined antenna/receiver beamwidth. By scanning the radiometer over a target area, a thermal image is created. At millimeter-wave frequencies, the received emissions from the target area are highly dependent on the emissivity of the target of interest: foliage will appear "hot" due to its high emissivity, and metals will appear cold due to their low emissivity. A noise power illuminator is periodically actuated to illuminate the target of interest. When the illuminator is actuated, the role of emissivity is reversed: poorly emissive targets are generally good reflectors, which in the presence of an illuminator appear "hot". Highly emissive targets (such as foliage and dirt), which absorb most of the transmitted energy, appear almost the same as in a nonilluminated, passive image. Using a data processor, the intensity of the passive image is subtracted from the intensity of the illuminated, active image, which cancels the background foliage, dirt, etc. and enhances the reflective metallic targets
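    The active-minus-passive subtraction can be sketched with toy intensity maps (the values are arbitrary illustrative units, not calibrated radiometric data):

```python
import numpy as np

# Toy intensity maps: high-emissivity background (foliage) appears "hot"
# passively; a metallic target (last column) is cold passively but bright
# under active noise-power illumination because it reflects well.
passive = np.array([[0.9, 0.9, 0.2],
                    [0.9, 0.9, 0.2]])
active  = np.array([[0.95, 0.95, 0.9],
                    [0.95, 0.95, 0.9]])

# Subtraction cancels the emissive background; the reflective target stands out
enhanced = active - passive
print(enhanced.argmax())  # flattened index of the strongest residual
```

    The background pixels change little between the two frames and nearly cancel, while the metallic column retains a large residual, which is exactly the enhancement the patent describes.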

  15. Active learning methods for interactive image retrieval.

    Gosselin, Philippe Henri; Cord, Matthieu

    2008-07-01

    Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.

  16. Demonstration of acoustic source localization in air using single pixel compressive imaging

    Rogers, Jeffrey S.; Rohde, Charles A.; Guild, Matthew D.; Naify, Christina J.; Martin, Theodore P.; Orris, Gregory J.

    2017-12-01

    Acoustic source localization often relies on large sensor arrays that can be electronically complex and have large data storage requirements to process element-level data. Recently, the concept of a single-pixel imager has garnered interest in the electromagnetics literature due to its ability to form high-quality images with a single receiver paired with shaped aperture screens that allow for the collection of spatially orthogonal measurements. Here, we present a method for creating an acoustic analog of the electromagnetic single-pixel imager for the purpose of source localization. Additionally, diffraction is considered to account for screen openings comparable to the acoustic wavelength; a diffraction model is presented and incorporated into the single-pixel framework. In this paper, we explore the possibility of applying single-pixel localization to acoustic measurements. The method is experimentally validated with laboratory measurements made in an air waveguide.
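    The single-pixel measurement model can be sketched with Hadamard-derived binary masks; full sampling is used here so the inversion is exact in closed form, whereas a compressive variant would use fewer masks and a sparse solver (all sizes and values are illustrative):

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                            # 8x8 "acoustic scene", flattened
x = np.zeros(n)
x[27] = 1.0                       # hypothetical strong source location
x[5] = 0.5                        # weaker secondary source

H = hadamard(n)                   # +1/-1 Hadamard patterns
M = (H + 1) // 2                  # realizable open/closed aperture masks

# One scalar measurement per mask: the "single pixel" record
y = M @ x

# Since M = (H + 1)/2, we have 2*y = H@x + sum(x), and the all-open first
# mask gives y[0] = sum(x). Invert with H^{-1} = H / n.
x_hat = H @ (2 * y - y[0]) / n
print(int(np.argmax(x_hat)))      # → 27, the recovered source location
```

    The spatially orthogonal masks make each measurement carry independent scene information, which is what lets a single receiver substitute for a full array.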

  17. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be

  18. Brief review of image reconstruction methods for imaging in nuclear medicine

    Murayama, Hideo

    1999-01-01

    Emission computed tomography (ECT) has as its major emphasis the quantitative determination of the moment-to-moment changes in the chemistry and flow physiology of injected or inhaled compounds labeled with radioactive atoms in a human body. The major difference lies in the fact that ECT seeks to describe the location and intensity of sources of emitted photons in an attenuating medium, whereas transmission X-ray computed tomography (TCT) seeks to determine the distribution of the attenuating medium. A second important difference between ECT and TCT is that of available statistics. ECT statistics are low because each photon, emitted in a direction that cannot be controlled, must be detected and analyzed individually, unlike in TCT. The following sections review the historical development of image reconstruction methods for imaging in nuclear medicine, relevant intrinsic concepts for image reconstruction in ECT, and the current status of volume imaging, as well as a unique approach to iterative techniques for ECT. (author). 130 refs
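
A standard iterative ECT reconstruction (not necessarily the review's specific approach) is the maximum-likelihood expectation-maximization (MLEM) update, sketched here on an invented 3-ray, 2-pixel system:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM for emission tomography: multiplicative update
    x <- x * A^T(y / Ax) / A^T(1), starting from a uniform image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)          # sensitivity (normalization) image
    for _ in range(n_iter):
        proj = A @ x              # forward projection
        x *= (A.T @ (y / proj)) / sens
    return x

# Tiny 2-pixel "object" seen through 3 detector bins.
A = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
x_true = np.array([4.0, 1.0])
y = A @ x_true                    # noise-free emission data
print(np.round(mlem(A, y), 2))
```

With noise-free, consistent data the iterates approach the true activity; with real Poisson-limited ECT counts, MLEM's statistical model is exactly why it is preferred over filtered backprojection.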

  19. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was applied to the engine to isolate interference noise from the No. 1-5 cylinders; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method was utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequencies of the combustion noise and the piston slap noise are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise sources.
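
The binaural localization step rests on inter-microphone time differences. A minimal sketch of delay estimation by cross-correlation (a generic illustration of the principle, not the paper's algorithm; the signal and delay are invented):

```python
import numpy as np

def estimate_delay(left, right):
    """Estimate the inter-microphone delay (in samples) as the lag that
    maximizes the cross-correlation of the two ear signals."""
    corr = np.correlate(left, right, mode="full")
    return np.argmax(corr) - (len(right) - 1)

fs = 8000
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 440 * t) * np.exp(-5 * t)   # decaying tone
delay = 7                                            # samples between "ears"
left = np.concatenate([np.zeros(delay), sig])
right = np.concatenate([sig, np.zeros(delay)])
print(estimate_delay(left, right))
```

The estimated delay, together with the microphone spacing and speed of sound, gives the arrival direction of each source.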

  20. Enhancing the (MSLDIP) image steganographic method (ESLDIP method)

    Seddik Saad, Al-hussien

    2011-10-01

    Message transmissions over the Internet still have data security problems. Therefore, secure and secret communication methods are needed for transmitting messages over the Internet. Cryptography scrambles the message so that it cannot be understood; however, it makes the message suspicious enough to attract an eavesdropper's attention. Steganography hides the secret message within other innocuous-looking cover files (i.e. images, music and video files) so that it cannot be observed [1]. The term steganography originates from the Greek root words "steganos'' and "graphein'', which literally mean "covered writing''. It is defined as the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, text and video files [3]. Steganographic techniques allow one party to communicate information to another without a third party even knowing that the communication is occurring. The ways to deliver these "secret messages" vary greatly [3]. Our proposed method is called Enhanced SLDIP (ESLDIP). The maximum hiding capacity (MHC) of the proposed ESLDIP method is higher than that of the previously proposed MSLDIP methods, and the PSNR of the ESLDIP method is higher than the MSLDIP PSNR values, which means that the image quality of the ESLDIP method will be better than that of the MSLDIP method while the maximum hiding capacity (MHC) is also improved. The rest of this paper is organized as follows. In section 2, steganography is discussed: lingo, carriers and types. In section 3, related works are introduced. In section 4, the proposed method is discussed in detail. In section 5, the simulation results are given, and section 6 concludes the paper.
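
As a generic illustration of image steganography (plain least-significant-bit substitution, not the ESLDIP algorithm itself; pixel values and message bits are invented):

```python
def embed_lsb(pixels, message_bits):
    """Hide one bit in the least significant bit of each pixel value;
    unused pixels are passed through unchanged."""
    return [(p & ~1) | b for p, b in zip(pixels, message_bits)] + \
           list(pixels[len(message_bits):])

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the stego image."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [142, 87, 200, 13, 99, 250, 31, 64]   # 8-bit grayscale pixels
secret = [1, 0, 1, 1, 0]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)))
```

Because only the lowest bit changes, each pixel differs from the cover by at most 1, which is what keeps the PSNR of LSB-family methods high.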

  1. A Frequency Splitting Method For CFM Imaging

    Udesen, Jesper; Gran, Fredrik; Jensen, Jørgen Arendt

    2006-01-01

    The performance of conventional CFM imaging will often be degraded due to the relatively low number of pulses (4-10) used for each velocity estimate. To circumvent this problem we propose a new method using frequency splitting (FS). The FS method uses broad band chirps as excitation pulses instead of narrow band pulses as in conventional CFM imaging. By appropriate filtration, the returned signals are divided into a number of narrow band signals which are approximately disjoint. After clutter filtering, the velocities are found from each frequency band using a conventional autocorrelation estimator... A 5 MHz linear array transducer was used to scan a vessel situated at 30 mm depth with a maximum flow velocity of 0.1 m/s. The pulse repetition frequency was 1.8 kHz and the angle between the flow and the beam was 60 deg. A 15 μs chirp was used as excitation pulse and 40 independent velocity...
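
The per-band velocity estimation is the conventional lag-one autocorrelation (Kasai) estimator. A sketch using the abstract's parameters (5 MHz, 1.8 kHz PRF, 60 deg beam-to-flow angle, 0.1 m/s) on a synthetic, noise-free slow-time signal:

```python
import numpy as np

def autocorr_velocity(z, f0, prf, c=1540.0, theta=60.0):
    """Kasai lag-one autocorrelation estimator used in conventional CFM:
    the mean phase shift between emissions gives the axial velocity."""
    dphi = np.angle(np.sum(z[1:] * np.conj(z[:-1])))
    return c * prf * dphi / (4 * np.pi * f0 * np.cos(np.radians(theta)))

f0, prf, c, theta = 5e6, 1.8e3, 1540.0, 60.0
v_true = 0.1                                          # m/s, as in the abstract
fd = 2 * v_true * f0 * np.cos(np.radians(theta)) / c  # Doppler shift
n = np.arange(8)                                      # 8 slow-time samples
z = np.exp(2j * np.pi * fd * n / prf)
print(autocorr_velocity(z, f0, prf, c, theta))
```

In the FS method this estimate is formed independently in each narrow frequency band and the results are combined, which lowers the variance compared to a single-band estimate.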

  2. Earthquake source studies and seismic imaging in Alaska

    Tape, C.; Silwal, V.

    2015-12-01

    Alaska is one of the world's most seismically and tectonically active regions. Its enhanced seismicity, including slab seismicity down to 180 km, provides opportunities (1) to characterize pervasive crustal faulting and slab deformation through the estimation of moment tensors and (2) to image subsurface structures to help understand the tectonic evolution of Alaska. Most previous studies of earthquakes and seismic imaging in Alaska have emphasized earthquake locations and body-wave travel-time tomography. In the past decade, catalogs of seismic moment tensors have been established, while seismic surface waves, active-source data, and potential field data have been used to improve models of seismic structure. We have developed moment tensor catalogs in the regions of two of the largest sedimentary basins in Alaska: Cook Inlet forearc basin, west of Anchorage, and Nenana basin, west of Fairbanks. Our moment tensor solutions near Nenana basin suggest a transtensional tectonic setting, with the basin developing in a stepover of a left-lateral strike-slip fault system. We explore the effects of seismic wave propagation from point-source and finite-source earthquake models by performing three-dimensional wavefield simulations using seismic velocity models that include major sedimentary basins. We will use our catalog of moment tensors within an adjoint-based, iterative inversion to improve the three-dimensional tomographic model of Alaska.

  3. Imaging of Scattered Wavefields in Passive and Controlled-source Seismology

    AlTheyab, Abdullah

    2015-12-01

    Seismic waves are used to study the Earth, exploit its hydrocarbon resources, and understand its hazards. Extracting information from seismic waves about the Earth’s subsurface, however, is becoming more challenging as our questions become more complex and our demands for higher resolution increase. This dissertation introduces two new methods that use scattered waves for improving the resolution of subsurface images: natural migration of passive seismic data and convergent full-waveform inversion. In the first part of this dissertation, I describe a method where the recorded seismic data are used to image subsurface heterogeneities like fault planes. This method, denoted natural migration of backscattered surface waves, provides higher-resolution images of near-surface faults that are complementary to surface-wave tomography images. Our proposed method differs from contemporary methods in that it does not (1) require a velocity model of the earth, (2) assume weak scattering, or (3) incur a high computational cost. This method is applied to ambient noise recorded by the US-Array to map regional faults across the American continent. Natural migration can be formulated as a least-squares inversion to further enhance the resolution and quality of the fault images. This inversion is applied to ambient noise recorded in Long Beach, California to reveal a matrix of shallow subsurface faults. The second part of this dissertation describes a convergent full waveform inversion method for controlled-source data. A controlled source excites waves that scatter from subsurface reflectors. The scattered waves are recorded by a large array of geophones. These recorded waves can be inverted for a high-resolution image of the subsurface by FWI, which is typically convergent for transmitted arrivals but often does not converge for deep reflected events. I propose a preconditioning approach that extends the ability of FWI to image deep parts of the velocity model, which

  4. A Method for Denoising Image Contours

    Ovidiu COSMA

    2017-12-01

    Edge detection techniques have to compromise between sensitivity and noise. In order for the main contours to be uninterrupted, the level of sensitivity has to be raised, which however has the negative effect of producing a multitude of insignificant contours (noise). This article proposes a method of removing this noise that acts directly on the binary representation of the image contours.

  5. Beyond seismic interferometry: imaging the earth's interior with virtual sources and receivers inside the earth

    Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.

    2015-12-01

    Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimics the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.

  6. Low energy electron point source microscopy: beyond imaging

    Beyer, Andre; Goelzhaeuser, Armin [Physics of Supramolecular Systems and Surfaces, University of Bielefeld, Postfach 100131, 33501 Bielefeld (Germany)

    2010-09-01

    Low energy electron point source (LEEPS) microscopy has the capability to record in-line holograms at very high magnifications with a fairly simple set-up. After the holograms are numerically reconstructed, structural features with the size of about 2 nm can be resolved. The achievement of an even higher resolution has been predicted. However, a number of obstacles are known to impede the realization of this goal, for example the presence of electric fields around the imaged object, electrostatic charging or radiation induced processes. This topical review gives an overview of the achievements as well as the difficulties in the efforts to shift the resolution limit of LEEPS microscopy towards the atomic level. A special emphasis is laid on the high sensitivity of low energy electrons to electrical fields, which limits the structural determination of the imaged objects. On the other hand, the investigation of the electrical field around objects of known structure is very useful for other tasks and LEEPS microscopy can be extended beyond the task of imaging. The determination of the electrical resistance of individual nanowires can be achieved by a proper analysis of the corresponding LEEPS micrographs. This conductivity imaging may be a very useful application for LEEPS microscopes. (topical review)

  7. Line x-ray source for diffraction enhanced imaging in clinical and industrial applications

    Wang, Xiaoqin

    Mammography is one type of imaging modality that uses a low-dose x-ray or other radiation source for examination of the breasts. It plays a central role in early detection of breast cancers. The material similarity of tumor cells and healthy cells, breast implant surgery and other factors make breast cancers hard to visualize and detect. Diffraction enhanced imaging (DEI), first proposed and investigated by D. Chapman, is a new x-ray radiographic imaging modality using monochromatic x-rays from a synchrotron source, which produces images of thick absorbing objects that are almost completely free of scatter. It shows dramatically improved contrast over standard imaging when applied to the same phantom. The contrast is based not only on attenuation but also on the refraction and diffraction properties of the sample. This imaging method may improve the image quality of mammography, other medical applications, industrial radiography for non-destructive testing, and x-ray computed tomography. However, the size and cost of a synchrotron source limit the application of the new modality at clinical levels. This research investigates the feasibility of a designed line x-ray source to produce intensities comparable to synchrotron sources. It is composed of a 2-cm-long tungsten filament, installed on a carbon steel filament cup (backing plate), as the cathode, and a stationary oxygen-free copper anode with molybdenum coating on the front surface serving as the target. Characteristic properties of the line x-ray source were computationally studied and the prototype was experimentally investigated. The SIMION code was used to computationally study the electron trajectories emanating from the filament towards the molybdenum target. A Faraday cup on the proof-of-principle prototype device was used to measure the distribution of electrons on the target, which compares favorably to computational results. The intensities of characteristic x-ray for molybdenum

  8. Combining inter-source seismic interferometry and source-receiver interferometry for deep local imaging

    Liu, Y.; Arntsen, B.; Wapenaar, C.P.A.; Van der Neut, J.R.

    2014-01-01

    The virtual source method has been applied successfully to retrieve the impulse response between pairs of receivers in the subsurface. This method is further improved by an up-down separation prior to the crosscorrelation to suppress the reflections from the overburden and the free surface. In a

  9. Application of Multi-Source Remote Sensing Image in Yunnan Province Grassland Resources Investigation

    Li, J.; Wen, G.; Li, D.

    2018-04-01

    To master background information on the utilization and ecological condition of Yunnan province's grassland resources and improve grassland management capacity, the Yunnan province agriculture department carried out a grassland resource investigation in 2017. The traditional grassland resource investigation method is ground-based investigation, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, by contrast, is low in cost, wide in coverage and efficient, and can reflect the present situation of grassland resources objectively. It has become an indispensable grassland monitoring technology and data source, and it has gained more and more recognition and application in grassland resources monitoring research. This paper studies the application of multi-source remote sensing images in the Yunnan province grassland resources investigation. First, it extracts grassland resources thematic information and conducts field investigation through BJ-2 high spatial resolution image segmentation. Second, it classifies grassland types and evaluates grassland degradation degree using the high resolution characteristics of Landsat 8 images. Third, it obtains a grass yield model and quality classification using the high resolution and wide scanning width characteristics of MODIS images together with sample investigation data. Finally, it performs qualitative analysis of grassland fields through UAV remote sensing images. The project implementation proves that multi-source remote sensing data can be applied to the grassland resources investigation in Yunnan province and is an indispensable method.

  10. Diffusion weighted imaging by MR method

    Horikawa, Yoshiharu; Naruse, Shoji; Ebisu, Toshihiko; Tokumitsu, Takuaki; Ueda, Satoshi; Tanaka, Chuzo; Higuchi, Toshihiro; Umeda, Masahiro.

    1993-01-01

    Diffusion weighted magnetic resonance imaging is a recently developed technique used to examine the micromovement of water molecules in vivo. We have applied this technique to examine various kinds of brain diseases, both experimentally and clinically. The calculated apparent diffusion coefficient (ADC) in vivo showed reliable values. In experimentally induced brain edema in rats, the pathophysiological difference between types of edema (such as cytotoxic and vasogenic) could be differentiated on the diffusion weighted MR images. Cytotoxic brain edema showed high intensity (slower diffusion) on the diffusion weighted images. On the other hand, vasogenic brain edema showed a low intensity image (faster diffusion). Diffusion anisotropy was demonstrated according to the direction of myelinated fibers and the applied motion probing gradient (MPG). This anisotropy was also demonstrated in human brain tissue along the course of the corpus callosum, pyramidal tract and optic radiation. In brain ischemia cases, lesions were detected as high signal intensity areas even one hour after the onset of ischemia. Diffusion was faster in brain tumors compared with normal brain. Histological differences were not clearly reflected in the ADC value. In epidermoid tumor cases, a characteristically high intensity was demonstrated, and the border with cerebrospinal fluid was clearly delineated. New clinical information obtainable with this molecular diffusion method will prove to be useful in various clinical studies. (author)
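
The ADC follows from the monoexponential signal model S = S0·exp(-b·ADC) measured at two diffusion weightings (b-values). A minimal sketch (the signal intensities are invented; ~0.7-0.9 × 10^-3 mm^2/s is a typical range for normal brain):

```python
import numpy as np

def apparent_diffusion_coefficient(s_low, s_high, b_low, b_high):
    """ADC from two diffusion weightings: S = S0 * exp(-b * ADC),
    so ADC = ln(S_low / S_high) / (b_high - b_low)."""
    return np.log(s_low / s_high) / (b_high - b_low)

# Hypothetical signal intensities at b = 0 and b = 1000 s/mm^2.
adc = apparent_diffusion_coefficient(1000.0, 400.0, 0.0, 1000.0)
print(adc)   # in mm^2/s
```

Restricted diffusion (e.g. cytotoxic edema) lowers the ADC, which appears bright on the diffusion weighted image; faster diffusion (vasogenic edema) raises it.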

  11. An image-based search for pulsars among Fermi unassociated LAT sources

    Frail, D. A.; Ray, P. S.; Mooley, K. P.; Hancock, P.; Burnett, T. H.; Jagannathan, P.; Ferrara, E. C.; Intema, H. T.; de Gasperin, F.; Demorest, P. B.; Stovall, K.; McKinnon, M. M.

    2018-03-01

    We describe an image-based method that uses two radio criteria, compactness and spectral index, to identify promising pulsar candidates among Fermi Large Area Telescope (LAT) unassociated sources. These criteria are applied to those radio sources from the Giant Metrewave Radio Telescope all-sky survey at 150 MHz (TGSS ADR1) found within the error ellipses of unassociated sources from the 3FGL catalogue and a preliminary source list based on 7 yr of LAT data. After follow-up interferometric observations to identify extended or variable sources, a list of 16 compact, steep-spectrum candidates is generated. An ongoing search for pulsations in these candidates, in gamma rays and radio, has found six millisecond pulsars and one normal pulsar. A comparison of this method with existing selection criteria based on gamma-ray spectral and variability properties suggests that the pulsar discovery space using Fermi may be larger than previously thought. Radio imaging is a hitherto underutilized source selection method that can be used, as with other multiwavelength techniques, in the search for Fermi pulsars.
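
The steep-spectrum criterion reduces to a two-point spectral index between radio frequencies. A sketch (the flux densities and the cut value are invented for illustration; 150 MHz is the TGSS frequency mentioned in the abstract):

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, with S proportional to nu**alpha."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

# Hypothetical flux densities (Jy) at 150 MHz and 1400 MHz.
alpha = spectral_index(0.20, 150e6, 0.01, 1400e6)
print(round(alpha, 2))
is_candidate = alpha < -1.0   # illustrative steep-spectrum cut
```

Pulsars are among the steepest-spectrum radio sources known, so a compact source with a strongly negative alpha inside a LAT error ellipse is a natural pulsation-search target.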

  12. Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection

    2015-12-01

    [Excerpts] "...on some occasions, performance is terminated early; this can occur due to either mutual agreement or a breach of contract by one of the parties (Garrett...)." Relationship of Source Selection Methods to Contract Outcomes: An Analysis of Air Force Source Selection, December 2015, Capt Jacques Lamoureux, USAF. "...on the contract management process, with special emphasis on the source selection methods of tradeoff and lowest price technically acceptable (LPTA)..."

  13. Method to Locate Contaminant Source and Estimate Emission Strength

    Qu Hongquan

    2013-01-01

    People are greatly concerned about the issue of air quality in confined spaces, such as spacecraft, aircraft, and submarines. With the increase of residence time in such confined spaces, contaminant pollution has become a main factor endangering life. It is urgent to identify a contaminant source rapidly so that a prompt remedial action can be taken. A source identification procedure should be able to locate the position and to estimate the emission strength of the contaminant source. In this paper, an identification method was developed to realize these two aims. The method was developed based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm was derived to locate the source position, and a Kalman filter was used to further estimate the contaminant emission strength. This method can track and predict the source strength dynamically; meanwhile, it can predict the distribution of contaminant concentration. Simulation results have shown the virtues of the method.
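
The emission-strength estimation step can be sketched with a scalar Kalman filter tracking a constant source term through noisy concentration measurements (a toy stand-in for the paper's discrete stochastic model; all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
q_true = 5.0          # contaminant emission rate (mg/s), to be estimated
V = 50.0              # room volume (m^3)
dt = 1.0              # sampling interval (s)
H = dt / V            # each step the well-mixed concentration rises q*dt/V

q_hat, P = 0.0, 10.0  # state estimate and its variance
R = 0.002 ** 2        # measurement-noise variance
for _ in range(200):
    y = H * q_true + rng.normal(0.0, 0.002)   # noisy concentration rise
    K = P * H / (H * P * H + R)               # Kalman gain
    q_hat += K * (y - H * q_hat)              # measurement update
    P *= (1 - K * H)                          # variance update
print(round(q_hat, 2))
```

As measurements accumulate, the gain shrinks and the estimate settles on the true emission rate; a time-varying source would add process noise to keep the filter responsive.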

  14. Endoscopic hyperspectral imaging: light guide optimization for spectral light source

    Browning, Craig M.; Mayes, Samuel; Rich, Thomas C.; Leavesley, Silas J.

    2018-02-01

    Hyperspectral imaging (HSI) is a technology used in remote sensing, food processing and documentation recovery. Recently, this approach has been applied in the medical field to spectrally interrogate regions of interest within respective substrates. In spectral imaging, a two (spatial) dimensional image is collected, at many different (spectral) wavelengths, to sample spectral signatures from different regions and/or components within a sample. Here, we report on the use of hyperspectral imaging for endoscopic applications. Colorectal cancer is the 3rd leading cancer for incidences and deaths in the US. One factor of severity is the miss rate of precancerous/flat lesions (~65% accuracy). Integrating HSI into colonoscopy procedures could minimize misdiagnosis and unnecessary resections. We have previously reported a working prototype light source with 16 high-powered light emitting diodes (LEDs) capable of high speed cycling and imaging. In recent testing, we have found that our current prototype is limited by transmission loss (~99%) through the multi-furcated solid light guide (lightpipe) and that the desired framerate (20-30 fps) could not be achieved. Here, we report on a series of experimental and modeling studies to better optimize the lightpipe and the spectral endoscopy system as a whole. The lightpipe was experimentally evaluated using an integrating sphere and spectrometer (Ocean Optics). Modeling of the lightpipe was performed using Monte Carlo optical ray tracing in TracePro (Lambda Research Corp.). Results of these optimization studies will aid in manufacturing a revised prototype with the newly designed light guide and increased sensitivity. Once the desired optical output (5-10 mW) is achieved, the HSI endoscope system can be implemented without adding to the procedure time.

  15. A nuclear method to authenticate Buddha images

    Khaweerat, S; Ratanatongchai, W; Channuie, J; Wonglee, S; Picha, R; Promping, J; Silva, K; Liamsuwan, T

    2015-01-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people use their individual skills to make the justification, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal respectively the structural and elemental profiles of small Buddha images. For NR, a thermal neutron flux of 10^5 n cm^-2 s^-1 was applied. NAAR needed a higher neutron flux of 10^12 n cm^-2 s^-1 to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples. Similarity of the profile played a key role in the classification of the samples. The results provided visual evidence to enhance the reliability of authenticity approval. The method can be further developed for routine practice which impacts thousands of customers in Thailand. (paper)

  16. A nuclear method to authenticate Buddha images

    Khaweerat, S.; Ratanatongchai, W.; Channuie, J.; Wonglee, S.; Picha, R.; Promping, J.; Silva, K.; Liamsuwan, T.

    2015-05-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people use their individual skills to make the justification, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal respectively the structural and elemental profiles of small Buddha images. For NR, a thermal neutron flux of 10^5 n cm^-2 s^-1 was applied. NAAR needed a higher neutron flux of 10^12 n cm^-2 s^-1 to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples. Similarity of the profile played a key role in the classification of the samples. The results provided visual evidence to enhance the reliability of authenticity approval. The method can be further developed for routine practice which impacts thousands of customers in Thailand.

  17. An open source toolkit for medical imaging de-identification

    Rodriguez Gonzalez, David; Carpenter, Trevor; Wardlaw, Joanna; Hemert, Jano I. van

    2010-01-01

    Medical imaging acquired for clinical purposes can have several legitimate secondary uses in research projects and teaching libraries. No commonly accepted solution for anonymising these images exists because the amount of personal data that should be preserved varies case by case. Our objective is to provide a flexible mechanism for anonymising Digital Imaging and Communications in Medicine (DICOM) data that meets the requirements for deployment in multicentre trials. We reviewed our current de-identification practices and defined the relevant use cases to extract the requirements for the de-identification process. We then used these requirements in the design and implementation of the toolkit. Finally, we tested the toolkit taking as a reference those requirements, including a multicentre deployment. The toolkit successfully anonymised DICOM data from various sources. Furthermore, it was shown that it could forward anonymous data to remote destinations, remove burned-in annotations, and add tracking information to the header. The toolkit also implements the DICOM standard confidentiality mechanism. A DICOM de-identification toolkit that facilitates the enforcement of privacy policies was developed. It is highly extensible, provides the necessary flexibility to account for different de-identification requirements and has a low adoption barrier for new users. (orig.)
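
A per-tag de-identification policy of the kind the toolkit enforces can be sketched on a dict standing in for a DICOM header (illustrative only; a real implementation would operate on DICOM attributes through a DICOM library, and the tag names and policy here are invented):

```python
def deidentify(header, policy):
    """Apply a per-tag policy ('remove', 'replace', 'keep') to a DICOM-like
    header; tags without an explicit rule are removed, the safest default."""
    out = {}
    for tag, value in header.items():
        action, replacement = policy.get(tag, ("remove", None))
        if action == "keep":
            out[tag] = value
        elif action == "replace":
            out[tag] = replacement
        # "remove" (and the default): drop the tag entirely
    return out

header = {"PatientName": "Doe^Jane", "StudyDate": "20100115",
          "Modality": "MR", "InstitutionName": "Somewhere General"}
policy = {"PatientName": ("replace", "ANON"),
          "StudyDate": ("keep", None),
          "Modality": ("keep", None)}
print(deidentify(header, policy))
```

Making the policy data rather than code is what gives this design its flexibility: each trial or teaching use can ship its own policy without touching the anonymization engine.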

  18. Research on neutron source multiplication method in nuclear critical safety

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear critical safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s is different from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. The verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s is related to the external neutron source position in the sub-critical system and to the external neutron source spectrum. The relation between k_s and k_eff and their effect on nuclear critical safety are discussed. (author)
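
In the simplest source-multiplication picture, the detector count rate in a source-driven sub-critical system scales as 1/(1 - k_s), so k_s follows from count-rate ratios against a reference state. A sketch (the count rates and reference k_s are invented):

```python
def ks_from_counts(c_ref, ks_ref, c_meas):
    """Source multiplication: count rate C is proportional to S/(1 - k_s),
    so k_s = 1 - (1 - k_s_ref) * C_ref / C_meas."""
    return 1.0 - (1.0 - ks_ref) * c_ref / c_meas

# Hypothetical: a reference state with k_s = 0.90 gives 1000 counts/s;
# after a configuration change the rate doubles.
print(ks_from_counts(1000.0, 0.90, 2000.0))   # → 0.95
```

As the abstract notes, the measured k_s depends on the source position and spectrum, so in practice such ratios are interpreted with care rather than read directly as k_eff.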

  19. Scaled nonuniform Fourier transform for image reconstruction in swept source optical coherence tomography

    Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-02-01

    Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT) that requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
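
The NDFT evaluates the Fourier sum directly at the nonuniform k-space positions, with no interpolation onto a uniform grid. A minimal numpy sketch of recovering a single reflector's depth this way (a direct O(MN) transform illustrating the idea, not the paper's scaled algorithm; sample positions and the reflector are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 256, 64
k = np.sort(rng.uniform(0.0, 1.0, M))   # nonuniform k-space sample positions
z0 = 17                                 # reflector depth (in index units)
s = np.exp(2j * np.pi * k * z0)         # interferogram of a single reflector

# Direct NDFT: evaluate the Fourier sum at the measured (nonuniform) k values.
n = np.arange(N)
a = np.exp(-2j * np.pi * np.outer(n, k)) @ s / M   # A-scan (depth profile)
print(int(np.argmax(np.abs(a))))        # depth of the reconstructed peak
```

An FFT applied to the same samples as if they were uniform would blur and displace the peak, which is exactly the artifact the NDFT-based reconstruction avoids.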

  20. Methods for forming particles from single source precursors

    Fox, Robert V [Idaho Falls, ID; Rodriguez, Rene G [Pocatello, ID; Pak, Joshua [Pocatello, ID

    2011-08-23

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  1. Method and apparatus for producing tomographic images

    Annis, M.

    1989-01-01

    A device useful in producing a tomographic image of a selected slice of an object to be examined is described, comprising: a source of penetrating radiation; sweep means for forming energy from the source into a pencil beam and repeatedly sweeping the pencil beam over a line in space to define a sweep plane; first means for supporting an object to be examined so that the pencil beam intersects the object along a path passing through the object and the selected slice; line collimating means for filtering radiation scattered by the object, the line collimating means having a field of view which intersects the sweep plane in a bounded line so that the line collimating means passes only radiation scattered by elementary volumes of the object lying along the bounded line, the line collimating means including a plurality of channels, each substantially planar in form, to collectively define the field of view, the channels oriented so that the pencil beam sweeps along the bounded line as a function of time; and radiation detector means responsive to radiation passed by the line collimating means

  2. Research of ART method in CT image reconstruction

    Li Zhipeng; Cong Peng; Wu Haifeng

    2005-01-01

    This paper studies the Algebraic Reconstruction Technique (ART) in CT image reconstruction and discusses the influence of the number of rays on image quality. By adopting a smoothing method, high-quality CT images are obtained. (authors)
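
The core ART update is a Kaczmarz-style projection onto one ray equation at a time. The sketch below is the generic textbook form, not the authors' exact implementation; `art_reconstruct` and the relaxation parameter are illustrative:

```python
import numpy as np

def art_reconstruct(A, p, iterations=50, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz iteration):
    for each ray i, project the current image estimate x onto the
    hyperplane A[i] @ x = p[i].  A is the system matrix (ray sums),
    p the measured projections."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        for i in range(A.shape[0]):
            ai = A[i]
            norm2 = ai @ ai
            if norm2 > 0:
                x += relax * (p[i] - ai @ x) / norm2 * ai
    return x

# Tiny consistent system: ART converges to the exact solution.
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true = np.array([0.7, 0.3])
p = A @ x_true
assert np.allclose(art_reconstruct(A, p, iterations=200), x_true)
```

In a real CT problem each row of `A` holds the intersection lengths of one ray with the image pixels, and smoothing between sweeps (as in the paper) suppresses the salt-and-pepper noise ART tends to produce.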

  3. A scanning point source for quality control of FOV uniformity in GC-PET imaging

    Bergmann, H.; Minear, G.; Dobrozemsky, G.; Nowotny, R.; Koenig, B.

    2002-01-01

    Aim: PET imaging with coincidence cameras (GC-PET) requires additional quality control procedures to check the function of coincidence circuitry and detector zoning. In particular, the uniformity response over the field of view needs special attention since it is known that coincidence counting mode may suffer from non-uniformity effects not present in single photon mode. Materials and methods: An inexpensive linear scanner with a stepper motor and a digital interface to a PC with software allowing versatile scanning modes was developed. The scanner is used with a source holder containing a Sodium-22 point source. While moving the source along the axis of rotation of the GC-PET system, a tomographic acquisition takes place. The scan covers the full axial field of view of the 2-D or 3-D scatter frame. Depending on the acquisition software, point source scanning takes place continuously while only one projection is acquired or is done in step-and-shoot mode with the number of positions equal to the number of gantry steps. Special software was developed to analyse the resulting list mode acquisition files and to produce an image of the recorded coincidence events of each head. Results: Uniformity images of coincidence events were obtained after further correction for systematic sensitivity variations caused by acquisition geometry. The resulting images are analysed visually and by calculating NEMA uniformity indices as for a planar flood field. The method has been applied successfully to two different brands of GC-PET capable gamma cameras. Conclusion: Uniformity of GC-PET can be tested quickly and accurately with a routine QC procedure, using a Sodium-22 scanning point source and an inexpensive mechanical scanning device. The method can be used for both 2-D and 3-D acquisition modes and fills an important gap in the quality control system for GC-PET

  4. Ectomography - a tomographic method for gamma camera imaging

    Dale, S.; Edholm, P.E.; Hellstroem, L.G.; Larsson, S.

    1985-01-01

    In computerised gamma camera imaging the projections are readily obtained in digital form, and the number of picture elements may be relatively few. This condition makes emission techniques suitable for ectomography - a tomographic technique for directly visualising arbitrary sections of the human body. The camera rotates around the patient to acquire different projections in a way similar to SPECT. This method differs from SPECT, however, in that the camera is placed at an angle to the rotational axis, and receives two-dimensional, rather than one-dimensional, projections. Images of body sections are reconstructed by digital filtration and combination of the acquired projections. The main advantages of ectomography - a high and uniform resolution, a low and uniform attenuation and a high signal-to-noise ratio - are obtained when imaging sections close and parallel to a body surface. The filtration eliminates signals representing details outside the section and gives the section a certain thickness. Ectomographic transverse images of a line source and of a human brain have been reconstructed. Details within the sections are correctly visualised and details outside are effectively eliminated. For comparison, the same sections have been imaged with SPECT. (author)

  5. Tau method approximation of the Hubbell rectangular source integral

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫₀ᵇ (1/√(1+x²)) arctan(a/√(1+x²)) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows
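
The integral can be checked numerically. The sketch below stands in for the Tau method with a plain Chebyshev interpolation of the integrand, integrated term by term, and compares it against brute-force quadrature; function names are illustrative:

```python
import numpy as np

def hubbell_integrand(x, a):
    s = np.sqrt(1.0 + x * x)
    return np.arctan(a / s) / s

def hubbell_integral(a, b, n=20):
    """Approximate I(a, b) by a degree-n Chebyshev expansion of the
    integrand on [0, b], integrated term by term (a simple stand-in
    for the Tau-method expansion described in the abstract)."""
    cheb = np.polynomial.chebyshev.Chebyshev.interpolate(
        lambda x: hubbell_integrand(x, a), deg=n, domain=[0.0, b])
    anti = cheb.integ()
    return anti(b) - anti(0.0)

# Reference value from fine-grained trapezoidal quadrature.
x = np.linspace(0.0, 1.0, 200001)
y = hubbell_integrand(x, 1.0)
ref = np.sum((y[1:] + y[:-1])) * 0.5 * (x[1] - x[0])
assert abs(hubbell_integral(1.0, 1.0) - ref) < 1e-6
```

Because the integrand is smooth on [0, b], a modest Chebyshev degree already gives near machine-precision agreement, which is the appeal of polynomial-expansion methods for this detector-response integral.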

  6. HOW DO FIRMS SOURCE EXTERNAL KNOWLEDGE FOR INNOVATION? ANALYSING EFFECTS OF DIFFERENT KNOWLEDGE SOURCING METHODS

    KI H. KANG; JINA KANG

    2009-01-01

    In the era of "open innovation", external knowledge is a very important source for technology innovation. In this paper, we investigate the relationship between external knowledge and the performance of technology innovation. The effect of external knowledge on the performance of technology innovation can vary with different external knowledge sourcing methods. We identify three ways of external knowledge sourcing: information transfer from informal networks, R&D collaboration and technology acquisition.

  7. Terahertz near-field imaging using subwavelength plasmonic apertures and a quantum cascade laser source.

    Baragwanath, Adam J; Freeman, Joshua R; Gallant, Andrew J; Zeitler, J Axel; Beere, Harvey E; Ritchie, David A; Chamberlain, J Martyn

    2011-07-01

    The first demonstration, to our knowledge, of near-field imaging using subwavelength plasmonic apertures with a terahertz quantum cascade laser source is presented. "Bull's-eye" apertures, featuring subwavelength circular apertures flanked by periodic annular corrugations, were created using a novel fabrication method. A fivefold increase in intensity was observed for plasmonic apertures over plain apertures of the same diameter. Detailed studies of the transmitted beam profiles were undertaken for apertures with both planarized and corrugated exit facets, with the former producing spatially uniform intensity profiles and subwavelength spatial resolution. Finally, a proof-of-concept imaging experiment is presented, in which an inhomogeneous pharmaceutical drug coating is investigated.

  8. Blind source separation analysis of PET dynamic data: a simple method with exciting MR-PET applications

    Oros-Peusquens, Ana-Maria; Silva, Nuno da [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Weiss, Carolin [Department of Neurosurgery, University Hospital Cologne, 50924 Cologne (Germany); Stoffels, Gabrielle; Herzog, Hans; Langen, Karl J [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Shah, N Jon [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Jülich-Aachen Research Alliance (JARA) - Section JARA-Brain RWTH Aachen University, 52074 Aachen (Germany)

    2014-07-29

    Denoising of dynamic PET data improves parameter imaging by PET and is gaining momentum. This contribution describes an analysis of dynamic PET data by blind source separation methods and comparison of the results with MR-based brain properties.
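
As a rough illustration of the idea, a dynamic data set can be denoised by keeping only the leading components of its frame-by-voxel matrix. The sketch below uses plain PCA-style rank truncation via the SVD; it is a simplified stand-in, not the blind source separation pipeline of the paper, and `denoise_dynamic_pet` is a hypothetical name:

```python
import numpy as np

def denoise_dynamic_pet(frames, n_components=2):
    """Rank-truncation denoising of dynamic data: unfold the frames
    into a (time, voxels) matrix, keep the leading principal
    components, and fold back.  A common preprocessing step before
    full blind source separation."""
    t = frames.shape[0]
    X = frames.reshape(t, -1)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[n_components:] = 0.0  # discard the noise subspace
    return ((U * s) @ Vt + mean).reshape(frames.shape)

# Two temporal sources mixed over voxels are reproduced exactly when
# the kept rank matches the number of sources.
t = np.linspace(0, 1, 30)
sources = np.stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])   # (2, 30)
mix = np.random.default_rng(2).uniform(size=(2, 100))      # voxel weights
frames = (sources.T @ mix).reshape(30, 10, 10)
assert np.allclose(denoise_dynamic_pet(frames, 2), frames)
```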

  9. A novel design method for ground source heat pump

    Dong Xing-Jie

    2014-01-01

    This paper proposes a novel design method for ground source heat pumps. The operation of a ground source heat pump can be controlled through several parameters, such as the total length of buried pipe, the spacing between wells, the thermal properties of the soil, the thermal resistance of the well, the initial soil temperature, and the annual dynamic load. By studying the effects of well number and well spacing, we conclude that as the number of wells increases, the inlet and outlet water temperatures decrease in summer and increase in winter, which enhances the efficiency of the ground source heat pump. The well spacing affects the water temperatures only slightly, but it affects the soil temperature to some extent. Operation of the ground source heat pump in combination with a cooling tower is also investigated to achieve thermal balance. This method greatly facilitates ground source heat pump design.

  10. Device for forming the image of a radiation source

    Tosswill, C.H.

    1980-01-01

    An improvement can be made to the spatial resolution of systems providing the image of a radiation source by means of a slit collimator. In order to do so, a lateral movement of the collimator (with its detectors) is superimposed on its scanning movement, in a direction transverse to the direction of transmission through the collimator as well as to the walls defining the slits. The total amplitude of the lateral movement is at least equal to the distance between the centres of one slit and the next. In near-field operating systems, the lateral movement is a rectilinear movement perpendicular to the walls of the slits. In far-field operating systems, it is an angular movement about an axis perpendicular to the direction of transmission through the slits

  11. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    H. Shen

    2012-08-01

    Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.

  12. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    J. Zhao

    2017-09-01

    The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainty in labeling pixels near the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainty on each channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
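
A membership function of the kind mentioned above can be illustrated with a simple trapezoidal profile: pixels well inside the intensity range get membership 1, while pixels near the thresholds get intermediate values instead of a hard 0/1 label. The function below is a generic sketch; the paper does not specify its exact membership functions:

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, ramps up to 1
    over [a, b], stays 1 on [b, c], ramps back down to 0 over [c, d]."""
    x = np.asarray(x, dtype=float)
    up = np.clip((x - a) / (b - a), 0.0, 1.0)
    down = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(up, down)

# Pixels near the thresholds get soft labels instead of a hard cut.
m = trapezoid_membership(np.array([10, 50, 128, 200, 250]),
                         40, 100, 160, 220)
assert np.allclose(m, [0.0, 1/6, 1.0, 1/3, 0.0])
```

Fuzzy reasoning then combines such memberships across channels (e.g., by taking a minimum or product) before the final segmentation decision.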

  13. Three-dimensional display of magnetic source imaging (MSI)

    Morioka, Takato; Yamamoto, Tomoya; Nishio, Shunji; Hasuo, Kanehiro; Fujii, Kiyotaka; Fukui, Masashi; Nitta, Koichi.

    1995-01-01

    Magnetic source imaging (MSI) is a relatively new, noninvasive technique for defining the relationship between brain structure and function of individual patients, and to establish comparisons from one patient to another. This is achieved by combining detailed neurophysiological data derived via magnetoencephalography (MEG) with neuroimaging data such as computed tomographic scan and magnetic resonance imaging (MRI). The noninvasive presurgical mapping of cortical functional somatosensory activity and the direct mapping of epilepsy-associated activity are among the neurosurgical uses that are emerging for MSI. Although the procedure provides clinically useful data, there are still limitations to two-dimensional MSI. We employ three-dimensional (3-D) MSI, superimposing MSI localizations on 3-D volumetric reconstruction of MRI. 3-D MSI enhances the visualization of the entire sensory homunculus and clearly demonstrates the spatial relationship with structural lesions. The functional localization of the epileptic focus in spatial relation to the lesion provides important clues for preoperative planning and on the epileptogenicity of the lesion. 3-D MSI improves localization of the sensory cortex and generator areas of epileptic activity. (author)

  14. Matrix kernels for MEG and EEG source localization and imaging

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-01-01

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to ''gain'' or ''transfer'' matrices used in multiple dipole and source imaging models

  15. Method and apparatus for imaging volume data

    Drebin, R.; Carpenter, L.C.

    1987-01-01

    An imaging system projects a two dimensional representation of three dimensional volumes where surface boundaries and objects internal to the volumes are readily shown, and hidden surfaces and the surface boundaries themselves are accurately rendered by determining volume elements or voxels. An image volume representing a volume object or data structure is written into memory. A color and opacity is assigned to each voxel within the volume and stored as a red (R), green (G), blue (B), and opacity (A) component, three dimensional data volume. The RGBA assignment for each voxel is determined based on the percentage component composition of the materials represented in the volume, and thus, the percentage of color and transparency associated with those materials. The voxels in the RGBA volume are used as mathematical filters such that each successive voxel filter is overlaid on a prior background voxel filter. Through a linear interpolation, a new background filter is determined and generated. The interpolation is successively performed for all voxels up to the front most voxel for the plane of view. The method is repeated until all display voxels are determined for the plane of view. (author)
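
The voxel-as-filter compositing described above corresponds to the classic "over" operator. Below is a minimal back-to-front sketch using premultiplied RGBA values; it illustrates the principle only, not the patent's implementation:

```python
import numpy as np

def composite_ray(rgba):
    """Back-to-front 'over' compositing of RGBA voxels along one ray.
    rgba: (n, 4) array, colors premultiplied by opacity, ordered
    back-to-front.  Each voxel acts as a filter over the background."""
    color = np.zeros(3)
    for c in rgba:
        color = c[:3] + (1.0 - c[3]) * color
    return color

# A fully opaque front voxel hides everything behind it.
back = np.array([0.0, 1.0, 0.0, 1.0])   # opaque green
front = np.array([1.0, 0.0, 0.0, 1.0])  # opaque red
out = composite_ray(np.stack([back, front]))
assert np.allclose(out, [1.0, 0.0, 0.0])
```

With partially transparent voxels (A < 1) the background shows through, which is what lets interior structures remain visible in the rendered image.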

  16. Verification of source and collimator configuration for Gamma Knife Perfexion using panoramic imaging

    Cho, Young-Bin; Prooijen, Monique van; Jaffray, David A.; Islam, Mohammad K.

    2010-01-01

    Purpose: The new model of stereotactic radiosurgery system, Gamma Knife Perfexion, allows automatic selection of built-in collimation, eliminating the need for the time consuming manual collimator installation required with previous models. However, the configuration of sources and collimators inside the system does not permit easy access for the verification of the selected collimation. While the conventional method of exposing a film at the isocenter is useful for obtaining composite dose information, it is difficult to interpret the data in terms of the integrity of each individual source and corresponding collimation. The primary aim of this study was to develop a method of verifying the geometric configuration of the sources and collimator modules of the Gamma Knife Perfexion. In addition, the method was extended to make dose measurements and verify the accuracy of dose distributions calculated by the mathematical formalism used in the treatment planning system, Leksell Gamma Plan. Methods: A panoramic view of all of 192 cobalt sources was simultaneously acquired by exposing a radiochromic film wrapped around the surface of a cylindrical phantom. The center of the phantom was mounted at the isocenter with its axis aligned along the longitudinal axis of the couch. The sizes and shapes of the source images projected on the phantom surface were compared to those calculated based on the manufacturer's design specifications. The measured dose at various points on the film was also compared to calculations using the algorithm of the planning system. Results: The panoramic images allowed clear identification of each of the 192 sources, verifying source integrity and selected collimator sizes. Dose on the film surface is due to the primary beam as well as phantom scatter and leakage contributions. 
Therefore, the dose at a point away from the isocenter cannot be determined simply based on the proportionality of collimator output factors; the use of a dose computation

  17. The adaptive collision source method for discrete ordinates radiation transport

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n’th collided components. • Uncollided flux requires high quadrature; this is lowered with number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high order quadrature for the first iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. This code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.

  18. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    Altabella, L.; Spinelli, A.E.; Boschi, F.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulties in the convergence of 3D algorithms can discourage the use of this technique to obtain information on source depth and intensity. For these reasons, we developed a faster 2D corrected approach based on multispectral acquisitions to obtain the source depth and its intensity using a pixel-based fitting of source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method to obtain the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5–6% for experimental data. Using this method we are able to obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure
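
The pixel-based fitting idea can be illustrated under a strongly simplified assumption, namely that each spectral band is attenuated following a single-exponential Beer-Lambert law with a known effective coefficient. The function name `depth_map` and the coefficients below are hypothetical; the actual CLT model is more involved:

```python
import numpy as np

def depth_map(images, mu):
    """Per-pixel depth from multispectral measurements, assuming a
    simplified Beer-Lambert model I_k = I0 * exp(-mu_k * d).
    images: (n_bands, h, w); mu: (n_bands,) effective attenuation per
    band.  A log-linear least-squares fit recovers d (and log I0)
    independently for every pixel."""
    logs = np.log(images.reshape(len(mu), -1))        # (n_bands, h*w)
    # Design matrix for log I = log I0 - mu * d.
    A = np.column_stack([np.ones_like(mu), -mu])
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    return coef[1].reshape(images.shape[1:])          # depth per pixel

# Synthetic check: recover a known two-pixel depth pattern exactly.
mu = np.array([0.5, 1.0, 2.0])
d_true = np.array([[1.0, 3.0]])
imgs = np.exp(-mu[:, None, None] * d_true[None])
assert np.allclose(depth_map(imgs, mu), d_true)
```

Fitting every pixel independently is what keeps such a 2D approach fast compared with a full 3D tomographic inversion.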

  19. Computed tomographic images using tube source of x rays: interior properties of the material

    Rao, Donepudi V.; Takeda, Tohoru; Itai, Yuji; Seltzer, S. M.; Hubbell, John H.; Zeniya, Tsutomu; Akatsuka, Takao; Cesareo, Roberto; Brunetti, Antonio; Gigante, Giovanni E.

    2002-01-01

    An image intensifier based computed tomography scanner and a tube source of x-rays are used to obtain images of small objects, plastics, wood and soft materials in order to determine the interior properties of the material. A new method is developed to estimate the degree of monochromacy, total solid angle, efficiency and geometrical effects of the measuring system, and the way to produce monoenergetic radiation. The flux emitted by the x-ray tube is filtered using appropriate filters at the chosen optimum energy; reasonable monochromacy is achieved and the images are acceptably distinct. Much attention has been focused on the imaging of small objects of weakly attenuating materials at the optimum value, at which it is possible to calculate a three-dimensional representation of the inner and outer surfaces of the object. The image contrast between soft materials could be significantly enhanced by optimal selection of the x-ray energy using Monte Carlo methods. The imaging system is compact, reasonably economical, has good contrast resolution, simple operation and routine availability, and explores the use of optimized tomography for various applications.

  20. Image Registration Method in Radar Interferometry

    S. Chelbi

    2015-08-01

    This article presents a methodology for the registration of an interferometric synthetic aperture radar (InSAR) image pair with half-pixel precision. Using two superposed single look complex (SLC) radar images [1-4], we developed an iterative process to register the two images according to their correlation coefficient in a high-coherence area. This work concerns the exploitation of an ERS tandem pair of SLC radar images of the Algiers area acquired on 03 January and 04 January 1994. The former is taken as the master image and the latter as the slave image.

  1. Numerical methods in image processing for applications in jewellery industry

    Petrla, Martin

    2016-01-01

    The presented thesis deals with a problem from the field of image processing for application in the multiple scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectivity and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...

  2. Keyhole imaging method for dynamic objects behind the occlusion area

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video images from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions. The multi-angle video images are saved in the form of frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection on the images and fill them. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted to accomplish the initial matching of the images, and then the RANSAC algorithm is applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the image frame sequences in which every single frame is stitched. The results show that the video is clear and natural and the brightness transitions are smooth. There are no obvious artificial stitching marks in the video, and it can be applied in different engineering environments.
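
The RANSAC-plus-homography step of such a stitching pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: `fit_homography`, `apply_h`, and `ransac_homography` are hypothetical names, the point matches are synthetic, and a real pipeline would obtain the correspondences from a feature detector such as SIFT:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography mapping src -> dst
    (each an (n, 2) array, n >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_h(h, pts):
    """Apply homography h to (n, 2) points in homogeneous form."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ h.T
    return p[:, :2] / p[:, 2:]

def ransac_homography(src, dst, iters=200, thresh=1.0, seed=0):
    """Fit on random 4-point samples, keep the model with the most
    inliers, then refit on all inliers (rejects wrong matches)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        h = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(apply_h(h, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best])

# Synthetic matches: a known homography plus a few gross outliers.
rng = np.random.default_rng(1)
h_true = np.array([[1.0, 0.1, 5.0], [0.0, 1.2, -3.0], [1e-4, 0.0, 1.0]])
src = rng.uniform(0, 100, (40, 2))
dst = apply_h(h_true, src)
dst[:5] += 50.0  # five wrong matches
h_est = ransac_homography(src, dst)
assert np.allclose(apply_h(h_est, src[5:]), dst[5:], atol=0.1)
```

The estimated matrix is then used to warp one frame into the coordinate system of the other before blending.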

  3. Image processing methods and architectures in diagnostic pathology.

    Oscar Déniz

    2010-05-01

    Grid technology has enabled the clustering of, and efficient and secure access to and interaction among, a wide variety of geographically distributed resources, such as supercomputers, storage systems, data sources, instruments, special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. General grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis are considered in this paper. These methods focus on the user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports different image analysis operations commonly performed in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem to be suitable for tissue-based diagnosis too. The implemented system is a joint application that mixes both Web and Grid Service Architecture around a distributed architecture for image processing. It has proven to be a successful solution for analyzing a large and heterogeneous group of histological images on an architecture of massively parallel processors using message passing and non-shared memory.

  4. Color management systems: methods and technologies for increased image quality

    Caretti, Maria

    1997-02-01

    All the steps in the imaging chain -- from handling the originals in prepress to outputting them on any device -- have to be well calibrated and adjusted to each other in order to reproduce color images in a desktop environment as accurately as possible with respect to the original. Today most of the steps in prepress production are digital, and therefore it is realistic to believe that color reproduction can be well controlled. This is true not least thanks to the recent development of fast, cost-effective scanners, digital sources and digital proofing devices. It is likely that well-defined tools and methods to control this imaging flow will lead to large cost and time savings as well as increased overall image quality. Until now, there has been a lack of good, reliable, easy-to-use systems (e.g. hardware, software, documentation, training and support) of the kind that would be accessible to the large group of users of graphic arts production systems. This paper provides an overview of the existing solutions for managing colors in a digital prepress environment. Their benefits and limitations are discussed, as well as how they affect the production workflow and organization. The difference between a color-controlled environment and one that is not is explained.

  5. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    B. Dimitrijevic

    2015-04-01

    Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are realized and tested within an in-house FDTD simulation environment.
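
A typical smooth turn-on of the kind described can be sketched as a raised-cosine envelope on a sinusoidal excitation. This is one common choice of shaping function, not necessarily the one optimized in the paper, and the function name is illustrative:

```python
import numpy as np

def ramped_source(t, f0, t_ramp):
    """Sinusoidal FDTD excitation with a smooth raised-cosine turn-on
    over [0, t_ramp], avoiding the broadband transients injected by
    an abruptly switched source."""
    envelope = np.where(t < t_ramp,
                        0.5 * (1.0 - np.cos(np.pi * t / t_ramp)),
                        1.0)
    return envelope * np.sin(2.0 * np.pi * f0 * t)

# 10 GHz excitation, 0.2 ns ramp: starts from zero and stays bounded.
t = np.linspace(0.0, 1e-9, 1001)
s = ramped_source(t, 10e9, 0.2e-9)
assert abs(s[0]) < 1e-12
assert np.max(np.abs(s)) <= 1.0 + 1e-12
```

Shortening `t_ramp` saves time steps but widens the injected spectrum, which is exactly the trade-off the paper's optimization addresses.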

  6. Advanced Calibration Source for Planetary and Earth Observing Imaging

    National Aeronautics and Space Administration — Planetary and Earth imaging requires radiometrically calibrated and stable imaging sensors.  Radiometric calibration enables the ability to remove or mitigate...

  7. Dual-source CT coronary imaging in heart transplant recipients: image quality and optimal reconstruction interval

    Bastarrika, Gorka; Arraiza, Maria; Pueyo, Jesus C.; Cecco, Carlo N. de; Ubilla, Matias; Mastrobuoni, Stefano; Rabago, Gregorio

    2008-01-01

    The image quality and optimal reconstruction interval for coronary arteries in heart transplant recipients undergoing non-invasive dual-source computed tomography (DSCT) coronary angiography were evaluated. Twenty consecutive heart transplant recipients who underwent DSCT coronary angiography were included (19 male, one female; mean age 63.1±10.7 years). Data sets were reconstructed in 5% steps from 30% to 80% of the R-R interval. Two blinded independent observers assessed the image quality of each coronary segment using a five-point scale (from 0 = not evaluable to 4 = excellent quality). A total of 289 coronary segments in 20 heart transplant recipients were evaluated. Mean heart rate during the scan was 89.1±10.4 bpm. At the best reconstruction interval, diagnostic image quality (score ≥2) was obtained in 93.4% of the coronary segments (270/289), with a mean image quality score of 3.04±0.63. Systolic reconstruction intervals provided better image quality scores than diastolic reconstruction intervals (overall mean quality scores for the systolic and diastolic reconstructions were 3.03±1.06 and 2.73±1.11, respectively; P<0.001). Different systolic reconstruction intervals (35%, 40%, 45% of the R-R interval) did not yield significant differences in image quality scores for the coronary segments (P=0.74). Reconstructions obtained at the systolic phase of the cardiac cycle yielded coronary angiograms of excellent diagnostic image quality in heart transplant recipients undergoing DSCT coronary angiography. (orig.)

  8. dcmqi: An Open Source Library for Standardized Communication of Quantitative Image Analysis Results Using DICOM.

    Herz, Christian; Fillion-Robin, Jean-Christophe; Onken, Michael; Riesmeier, Jörg; Lasso, Andras; Pinter, Csaba; Fichtinger, Gabor; Pieper, Steve; Clunie, David; Kikinis, Ron; Fedorov, Andriy

    2017-11-01

    Quantitative analysis of clinical image data is an active area of research that holds promise for precision medicine, early assessment of treatment response, and objective characterization of the disease. Interoperability, data sharing, and the ability to mine the resulting data are of increasing importance, given the explosive growth in the number of quantitative analysis methods being proposed. The Digital Imaging and Communications in Medicine (DICOM) standard is widely adopted for the communication of images and metadata in radiology. dcmqi (DICOM for Quantitative Imaging) is a free, open source library that implements conversion of the data stored in commonly used research formats into the standard DICOM representation. dcmqi source code is distributed under a BSD-style license. It is freely available as a precompiled binary package for every major operating system, as a Docker image, and as an extension to 3D Slicer. Installation and usage instructions are provided in the GitHub repository at https://github.com/qiicr/dcmqi. Cancer Res; 77(21); e87-90. ©2017 American Association for Cancer Research (AACR).

  9. Uncertainty in Earthquake Source Imaging Due to Variations in Source Time Function and Earth Structure

    Razafindrakoto, H. N. T.

    2014-03-25

    One way to improve the accuracy and reliability of kinematic earthquake source imaging is to investigate the origins of uncertainty and to minimize their effects. The difficulties in kinematic source inversion arise from the nonlinearity of the problem, nonunique choices in the parameterization, and observational errors. We analyze in particular the uncertainty related to the choice of the source time function (STF) and the variability in Earth structure. We consider a synthetic data set generated from a spontaneous dynamic rupture calculation. Using Bayesian inference, we map the solution space of peak slip rate, rupture time, and rise time to characterize the kinematic rupture in terms of posterior density functions. Our test of the effect of the choice of STF reveals that all three tested STFs (isosceles triangle, regularized Yoffe with acceleration times of 0.1 and 0.3 s) retrieve the patch of high slip and slip rate around the hypocenter. However, the use of an isosceles triangle as STF artificially accelerates the rupture to propagate faster than the target solution. It additionally generates an artificial linear correlation between rupture onset time and rise time. These appear to compensate for the dynamic source effects that are not included in the symmetric triangular STF. The exact rise time for the tested STFs is difficult to resolve due to the small amount of seismic moment radiated in the tail of the STF. To highlight the effect of Earth structure variability, we perform inversions including the uncertainty in the wavespeed only, and variability in both wavespeed and layer depth. We find that little difference is noticeable between the resulting rupture model uncertainties from these two parameterizations. Both significantly broaden the posterior densities and cause faster rupture propagation, particularly near the hypocenter, due to the major velocity change at the depth where the fault is located.

  10. Uncertainty in Earthquake Source Imaging Due to Variations in Source Time Function and Earth Structure

    Razafindrakoto, H. N. T.; Mai, Paul Martin

    2014-01-01

  11. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)
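
    The feedback loop is easiest to see in a toy sketch: a small fission matrix stands in for the low-order CMFD operator, a noisy power-iteration step stands in for a Monte Carlo cycle, and each cycle the noisy source is replaced by the dominant eigenvector of the operator rebuilt from (noisy) tallied functionals. The matrix, noise levels, and mesh size below are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 4-node fission matrix standing in for the low-order CMFD operator.
A = np.array([[0.60, 0.20, 0.05, 0.01],
              [0.20, 0.60, 0.20, 0.05],
              [0.05, 0.20, 0.60, 0.20],
              [0.01, 0.05, 0.20, 0.60]])

# Reference dominant eigenpair: the converged fission source and eigenvalue.
w, v = np.linalg.eig(A)
i = np.argmax(w.real)
k_ref = w.real[i]
s_ref = np.abs(v.real[:, i])
s_ref /= s_ref.sum()

def mc_cycle(source, noise=0.05):
    """One 'Monte Carlo' power-iteration cycle with statistical noise."""
    new = np.abs(A @ source + noise * rng.standard_normal(source.size))
    return new / new.sum()

def cmfd_solve(A_hat):
    """Low-order eigenproblem solve on the (noisy) tallied operator."""
    w, v = np.linalg.eig(A_hat)
    i = np.argmax(w.real)
    s = np.abs(v.real[:, i])
    return s / s.sum(), w.real[i]

source = np.full(4, 0.25)          # flat initial guess
for _ in range(30):
    source = mc_cycle(source)      # noisy MC transport sweep
    # Functionals tallied during the sweep define the CMFD operator; here
    # we mimic them by perturbing A with 2% statistical noise.
    A_hat = A * (1.0 + 0.02 * rng.standard_normal(A.shape))
    # Feedback: the MC fission source is replaced by the smooth CMFD source.
    source, k = cmfd_solve(A_hat)
```

    Even in this toy, the low-order solve removes most of the cycle-to-cycle noise in the source, which is the mechanism behind the reduced number of inactive cycles reported above.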

  12. Simultaneous collection method of on-peak window image and off-peak window image in Tl-201 imaging

    Murakami, Tomonori; Noguchi, Yasushi; Kojima, Akihiro; Takagi, Akihiro; Matsumoto, Masanori

    2007-01-01

    Tl-201 imaging detects the photopeak (71 keV, in the on-peak window) of the characteristic X-rays of Hg-201 formed by Tl-201 decay. The peak derives from 4 rays of different energies and emission intensities and does not follow a Gaussian distribution. In the present study, the authors devised the method in the title to attain more effective single imaging, examined its accuracy and reliability with phantoms, and applied it clinically to Tl-201 scintigraphy in a patient. The authors applied the triple energy window method for data acquisition: energy windows were set on the Hg-201 X-ray photopeak as a lower (3%, L), main (72 keV, M) and upper (14%, U) window on a gamma camera with a 2-gated detector (Toshiba E.CAM/ICON). The L, M and U images obtained simultaneously were then combined into images of the on-peak (L+M, mock on-peak) and off-peak (M+U) window settings for evaluation. A line-source phantom with a Tl-201-containing swab and a multi-defect phantom with an acrylic plate containing Tl-201 solution were imaged in water. A female patient with thyroid cancer underwent preoperative scintigraphy under the defined conditions. The mock on- and off-peak images were found to be equivalent to the true (ordinary, clinical) on- and off-peak ones, and the present method was considered usable for evaluating the usefulness of off-peak window data. (R.T.)
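
    The window combination itself is simple pixel-wise arithmetic, shown here in a few lines of numpy; the image size and count rates are invented for illustration, and the three arrays stand for the simultaneously acquired L, M and U window images.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 64x64 count images acquired simultaneously in the three windows:
# L (lower, 3%), M (main, 72 keV) and U (upper, 14%).
shape = (64, 64)
L = rng.poisson(5.0, shape)    # scatter-dominated lower window
M = rng.poisson(50.0, shape)   # photopeak main window
U = rng.poisson(8.0, shape)    # upper window

# One acquisition yields both window settings by pixel-wise summation.
mock_on_peak = L + M    # mock on-peak image (L + M)
off_peak = M + U        # off-peak image (M + U)
```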

  13. [Multimodal medical image registration using cubic spline interpolation method].

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used in reconstruction to restore the missing information between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. The cubic spline interpolation method has also been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
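
    Inter-slice cubic spline interpolation can be sketched as follows; the stack here is a synthetic stand-in (pixel values varying linearly along z), not the PET-CT data of the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Acquired slice positions along z and a stack of 2D slices at those
# positions (synthetic: every pixel varies linearly with z).
z = np.array([0.0, 1.0, 2.0, 3.0])
stack = np.stack([np.full((8, 8), v) for v in z])

# One cubic spline per pixel, vectorized over the slice axis.
spline = CubicSpline(z, stack, axis=0)
mid = spline(1.5)    # interpolated slice halfway between slices 1 and 2
```

    Because the synthetic data vary linearly with z, the spline reproduces the linear trend exactly; on real slices it restores a smooth estimate of the anatomy between acquired planes.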

  14. Preparation of protactinium measurement source by electroplating method

    Li Zongwei; Yang Weifan; Fang Keming; Yuan Shuanggui; Guo Junsheng; Pan Qiangyan

    1998-01-01

    An electroplating method for the preparation of Pa sources is described, and the main factors influencing the electroplating of Pa (pH value of the solution, electroplating time and current density) are tested and discussed using ²³³Pa as a tracer. A thin, uniform electroplated Pa layer, 1 mg/cm² thick, was obtained on a thin stainless steel disk. The Pa source was measured with a HPGe detector to determine the chemical efficiency.

  15. Full field image reconstruction is suitable for high-pitch dual-source computed tomography.

    Mahnken, Andreas H; Allmendinger, Thomas; Sedlmair, Martin; Tamm, Miriam; Reinartz, Sebastian D; Flohr, Thomas

    2012-11-01

    The field of view (FOV) in high-pitch dual-source computed tomography (DSCT) is limited by the size of the second detector. The goal of this study was to develop and evaluate a full FOV image reconstruction technique for high-pitch DSCT. For reconstruction beyond the FOV of the second detector, raw data of the second system were extended to the full dimensions of the first system, using the partly existing data of the first system in combination with a very smooth transition weight function. During the weighted filtered backprojection, the data of the second system were applied with an additional weighting factor. This method was tested for different pitch values from 1.5 to 3.5 on a simulated phantom and on 25 high-pitch DSCT data sets acquired at pitch values of 1.6, 2.0, 2.5, 2.8, and 3.0. Images were reconstructed with FOV sizes of 260 × 260 and 500 × 500 mm. Image quality was assessed by 2 radiologists using a 5-point Likert scale and analyzed with repeated-measure analysis of variance. In phantom and patient data, full FOV image quality depended on pitch. Where complete projection data from both tube-detector systems were available, image quality was unaffected by pitch changes. Full FOV image quality was not compromised at pitch values of 1.6 and remained fully diagnostic up to a pitch of 2.0. At higher pitch values, there was an increasing difference in image quality between limited and full FOV images (P = 0.0097). With this new image reconstruction technique, full FOV image reconstruction can be used up to a pitch of 2.0.
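
    The paper's "very smooth transition weight function" is not published in detail, but its role can be illustrated with any smooth taper that is 1 where the second (smaller) detector has data and rolls off to 0 at the edge of the full FOV; the cos² shape and the radii below are invented for illustration.

```python
import numpy as np

def transition_weight(r, r_inner, r_outer):
    """Smooth taper: 1 inside the small-detector FOV, ~0 beyond the full
    FOV, with a continuous cos^2 roll-off in between."""
    t = np.clip((r - r_inner) / (r_outer - r_inner), 0.0, 1.0)
    return np.cos(0.5 * np.pi * t) ** 2

# Distance from isocenter in mm; detector B covers ~26 cm, detector A ~50 cm.
r = np.linspace(0.0, 250.0, 501)
w = transition_weight(r, 130.0, 250.0)
```

    During the weighted filtered backprojection, data from the second system would be multiplied by such a weight so that the two systems blend without a visible seam at the boundary of the small FOV.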

  16. The synthesis method for design of electron flow sources

    Alexahin, Yu I.; Molodozhenzev, A. Yu

    1997-01-01

    The synthesis method used to design a relativistic magnetically focused beam source is described in this paper. It allows one to find the shape of electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The obtained results showed agreement between the synthesis and analysis calculations [1]. This method of electron gun calculation may be applied to immersed electron flows, which are of interest for the EBIS electron gun design.

  17. SPANDOM - source projection analytic nodal discrete ordinates method

    Kim, Tae Hyeong; Cho, Nam Zin

    1994-01-01

    We describe a new discrete ordinates nodal method for the two-dimensional transport equation. We solve the discrete ordinates equation analytically after the source term is projected onto and represented in polynomials. The method is applied to two fast reactor benchmark problems and compared with the TWOHEX code. The results indicate that the present method accurately predicts not only the multiplication factor but also the flux distribution.

  18. Methods of performing downhole operations using orbital vibrator energy sources

    Cole, Jack H.; Weinberg, David M.; Wilson, Dennis R.

    2004-02-17

    Methods of performing downhole operations in a wellbore. A vibrational source is positioned within a tubular member such that an annulus is formed between the vibrational source and an interior surface of the tubular member. A fluid medium, such as high bulk modulus drilling mud, is disposed within the annulus. The vibrational source forms a fluid coupling with the tubular member through the fluid medium to transfer vibrational energy to the tubular member. The vibrational energy may be used, for example, to free a stuck tubular, consolidate a cement slurry and/or detect voids within a cement slurry prior to the curing thereof.

  19. Imaging choroidal neovascular membrane using en face swept-source optical coherence tomography angiography

    Moussa M

    2017-10-01

    Magdy Moussa,¹,² Mahmoud Leila,³ Hagar Khalid¹,² (¹Ophthalmology Department, Faculty of Medicine, Tanta University, Tanta, Egypt; ²MEDIC Eye Center, Tanta, Egypt; ³Retina Department, Research Institute of Ophthalmology, Giza, Egypt) Purpose: The aim of this study was to assess the efficacy of swept-source optical coherence tomography angiography (SS-OCTA) in delineating the morphology of choroidal neovascular membranes (CNV). Patients and methods: This was a retrospective observational case series reviewing clinical data and fundus fluorescein angiography (FFA), swept-source optical coherence tomography (SS-OCT), and SS-OCTA images of patients with CNV and comparing the findings. The swept-source technology enables deeper penetration and superior axial resolution, and the incorporated blood flow detection algorithm, optical coherence tomography angiography ratio analysis (OCTARA), enables visualization of CNV in vivo without the need for dye injection. Results: The study included 136 eyes of 105 patients. Active lesions on SS-OCTA images showed increased capillary density, extensive arborization, vascular anastomoses and looping, and a peri-lesional hollow. Inactive lesions showed decreased capillary density, the presence of large linear vessels, and the presence of feeder vessels supplying the CNV. We detected a positive correlation between SS-OCTA, FFA, and SS-OCT images in 97% of eyes. In the remaining 3%, SS-OCTA confirmed the absence of CNV, whereas FFA and SS-OCT either were inconclusive in the diagnosis of CNV or yielded false-positive results. Conclusion: SS-OCT and SS-OCTA represent a reproducible, risk-free analog of FFA for imaging CNV. SS-OCTA is particularly versatile in cases where FFA and SS-OCT are inconclusive. Keywords: swept-source OCT, OCT angiography, imaging of CNV, OCTARA algorithm

  20. Current use of imaging and electromagnetic source localization procedures in epilepsy surgery centers across Europe.

    Mouthaan, Brian E; Rados, Matea; Barsi, Péter; Boon, Paul; Carmichael, David W; Carrette, Evelien; Craiu, Dana; Cross, J Helen; Diehl, Beate; Dimova, Petia; Fabo, Daniel; Francione, Stefano; Gaskin, Vladislav; Gil-Nagel, Antonio; Grigoreva, Elena; Guekht, Alla; Hirsch, Edouard; Hecimovic, Hrvoje; Helmstaedter, Christoph; Jung, Julien; Kalviainen, Reetta; Kelemen, Anna; Kimiskidis, Vasilios; Kobulashvili, Teia; Krsek, Pavel; Kuchukhidze, Giorgi; Larsson, Pål G; Leitinger, Markus; Lossius, Morten I; Luzin, Roman; Malmgren, Kristina; Mameniskiene, Ruta; Marusic, Petr; Metin, Baris; Özkara, Cigdem; Pecina, Hrvoje; Quesada, Carlos M; Rugg-Gunn, Fergus; Rydenhag, Bertil; Ryvlin, Philippe; Scholly, Julia; Seeck, Margitta; Staack, Anke M; Steinhoff, Bernhard J; Stepanov, Valentin; Tarta-Arsene, Oana; Trinka, Eugen; Uzan, Mustafa; Vogt, Viola L; Vos, Sjoerd B; Vulliémoz, Serge; Huiskamp, Geertjan; Leijten, Frans S S; Van Eijsden, Pieter; Braun, Kees P J

    2016-05-01

    In 2014 the European Union-funded E-PILEPSY project was launched to improve awareness of, and accessibility to, epilepsy surgery across Europe. We aimed to investigate the current use of neuroimaging, electromagnetic source localization, and imaging postprocessing procedures in participating centers. A survey on the clinical use of imaging, electromagnetic source localization, and postprocessing methods in epilepsy surgery candidates was distributed among the 25 centers of the consortium. A descriptive analysis was performed, and results were compared to existing guidelines and recommendations. Response rate was 96%. Standard epilepsy magnetic resonance imaging (MRI) protocols are acquired at 3 Tesla by 15 centers and at 1.5 Tesla by 9 centers. Three centers perform 3T MRI only if indicated. Twenty-six different MRI sequences were reported. Six centers follow all guideline-recommended MRI sequences with the proposed slice orientation and slice thickness or voxel size. Additional sequences are used by 22 centers. MRI postprocessing methods are used in 16 centers. Interictal positron emission tomography (PET) is available in 22 centers; all using 18F-fluorodeoxyglucose (FDG). Seventeen centers perform PET postprocessing. Single-photon emission computed tomography (SPECT) is used by 19 centers, of which 15 perform postprocessing. Four centers perform neither PET nor SPECT in children. Seven centers apply magnetoencephalography (MEG) source localization, and nine apply electroencephalography (EEG) source localization. Fourteen combinations of inverse methods and volume conduction models are used. We report a large variation in the presurgical diagnostic workup among epilepsy surgery centers across Europe. This diversity underscores the need for high-quality systematic reviews, evidence-based recommendations, and harmonization of available diagnostic presurgical methods. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  1. Evaluation of processing methods for static radioisotope scan images

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for mapping a radioactive drug in the human body to produce maps (images) useful in detecting abnormalities in vital organs. Even at best, radioisotope scanning methods produce images with poor counting statistics. One solution for improving body scan images is to use dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared.

  2. Enhancement of Electroluminescence (EL) image measurements for failure quantification methods

    Parikh, Harsh; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    Enhanced-quality images are necessary for EL image analysis and failure quantification. A method is proposed that determines image quality in terms of more accurate failure detection in solar panels imaged with the electroluminescence (EL) technique. The goal of the paper is to determine the most...

  3. Imaging phase holdup distribution of three phase flow systems using dual source gamma ray tomography

    Varma, Rajneesh; Al-Dahhan, Muthanna; O'Sullivan, Joseph

    2008-01-01

    Multiphase reaction and process systems are used in abundance in the chemical and biochemical industry. Tomography has been successfully employed to visualize the hydrodynamics of multiphase systems. Most tomography methods (gamma ray, X-ray, and electrical capacitance and resistance) have been successfully implemented for two-phase dynamic systems. However, a significant number of chemical and biochemical systems consist of three dynamic phases. Research efforts directed towards the development of tomography techniques to image such dynamic systems have met with partial success for specific systems, with applicability limited to certain operating conditions. A dual-source tomography scanner has been developed that uses the 661 keV and 1332 keV photopeaks of ¹³⁷Cs and ⁶⁰Co for imaging three-phase systems. A new approach has been developed and applied that uses the polyenergetic alternating minimization (A-M) algorithm, developed by O'Sullivan and Benac (2007), for imaging the holdup distribution in three-phase dynamic systems. The new approach avoids the traditional post-processing approach used to determine the holdup distribution, in which the attenuation images of the mixed flow obtained from gamma ray photons of two different energies are used to determine the holdup of the three phases; instead, the holdup images are directly reconstructed from the gamma ray transmission data. The dual-source gamma ray tomography scanner and the algorithm were validated using a three-phase phantom. Based on the validation, three-phase holdup studies were carried out in a slurry bubble column containing gas, liquid and solid phases in a dynamic state using dual-energy gamma ray tomography. The key results of the holdup distribution studies in the slurry bubble column, along with the validation of the dual-source gamma ray tomography system, are presented and discussed.
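
    The traditional per-ray post-processing that the A-M approach replaces reduces to a small linear system: two measured log-attenuations (one per photon energy) plus the constraint that the three holdups sum to one. A minimal sketch, with attenuation coefficients and holdups invented for illustration:

```python
import numpy as np

# Linear attenuation coefficients (1/cm) of gas, liquid and solid at the
# two photon energies; values are invented for illustration.
mu_661 = np.array([0.0002, 0.086, 0.220])    # 137Cs, 661 keV
mu_1332 = np.array([0.0001, 0.060, 0.150])   # 60Co, 1332 keV
path_len = 10.0                              # chord length through the column, cm

true_holdup = np.array([0.30, 0.55, 0.15])   # gas, liquid, solid fractions

# "Measured" log-attenuations ln(I0/I) along the chord at both energies.
a_661 = path_len * mu_661 @ true_holdup
a_1332 = path_len * mu_1332 @ true_holdup

# Two attenuation equations plus the closure condition (holdups sum to 1).
M = np.vstack([path_len * mu_661, path_len * mu_1332, np.ones(3)])
b = np.array([a_661, a_1332, 1.0])
holdup = np.linalg.solve(M, b)
```

    In practice this per-ray solve amplifies noise in the two attenuation images, which is the motivation for reconstructing the holdup images directly from the transmission data instead.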

  4. Alternate method for to realize image fusion

    Vargas, L.; Hernandez, F.; Fernandez, R.

    2005-01-01

    At the present time, imaging departments need to fuse images obtained from diverse apparatuses. Conventionally, magnetic resonance or X-ray tomography images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, and not all nuclear medicine cabinets have access to it. For this reason we analyzed, studied and found a solution so that all nuclear medicine cabinets can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to host image digitizer cards. If one has a gamma camera that can export images in JPG, GIF, TIFF or BMP formats, it is also possible to do without the digitizer card and to record the images on a disk so that they can be used on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photoshop, FreeHand, Illustrator or Macromedia Flash; these are the ones we evaluated and that allow the image fusion to be performed. Any of them works well, and a short training is required to be able to manage them. A digital photographic camera with a resolution of at least 3.0 megapixels is also needed. The procedure consists of taking photographs of the radiological studies the patient already has, selecting those images that demonstrate the pathology under study and that are concordant with the images we have created in the gammagraphic studies, whether planar or tomographic. We transfer the images to the personal computer, read them with the graphic design program, and then also read the gammagraphic images. We use the digital tools to make the images transparent, to clip them, to adjust the sizes and to create the fused images. The process is manual, and ability and experience are required to choose the images, the cuts, the sizes and the degree of transparency. (Author)
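
    The transparency step the authors perform by hand in a drawing program is, numerically, an alpha blend of the two aligned images. A minimal numpy sketch with synthetic images (the blend weight and image sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Grayscale anatomical image (e.g. a photographed CT film) and a functional
# image (e.g. a gammagram), assumed already cropped, resized and aligned.
anatomical = rng.uniform(0.0, 1.0, (128, 128))
functional = rng.uniform(0.0, 1.0, (128, 128))

alpha = 0.4    # transparency of the functional overlay
fused = (1.0 - alpha) * anatomical + alpha * functional
```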

  5. MIVOC method at the mVINIS ion source

    Jovović Jovica

    2007-01-01

    Based on the metal-ions-from-volatile-compounds (MIVOC) method, we have produced multiply charged ion beams from solid substances with the mVINIS ion source. Highly intense, stable multiply charged ion beams of several solid substances with high melting points were extracted using this method. The spectrum of multiply charged ion beams obtained from the element hafnium is presented here. For the first time, hafnium ion beam spectra were recorded at an electron cyclotron resonance ion source. The multiply charged ion beams from solid substances were used to irradiate polymer, fullerene and glassy carbon samples at the channel for the modification of materials.

  6. Evaluation of methods to leak test sealed radiation sources

    Arbeau, N.D.; Scott, C.K.

    1987-04-01

    The methods for the leak testing of sealed radiation sources were reviewed. One hundred and thirty-one equipment vendors were surveyed to identify commercially available leak test instruments. The equipment is summarized in tabular form by radiation type and detector type for easy reference. The radiation characteristics of the licensed sources were reviewed and summarized in a format that can be used to select the most suitable detection method. A test kit is proposed for use by inspectors when verifying a licensee's test procedures. The general elements of leak test procedures are discussed

  7. Time domain localization technique with sparsity constraint for imaging acoustic sources

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations, and the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
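
    Of the two sparsity constraints, orthogonal matching pursuit is simple enough to sketch in full; the dictionary below is a random stand-in for the array steering matrix, and the sizes and source amplitudes are invented for illustration.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit all picked columns by
    least squares before updating the residual."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((64, 256))       # stand-in steering dictionary
A /= np.linalg.norm(A, axis=0)           # unit-norm columns
x_true = np.zeros(256)
x_true[[17, 140]] = [2.0, -1.5]          # two sparse sources
y = A @ x_true                           # noiseless array measurements

x_hat = omp(A, y, n_nonzero=2)
```

    With real microphone data, the dictionary columns would be the delayed array responses for each candidate source position, so the recovered support gives the positions and the coefficients give the amplitudes.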

  8. Large seismic source imaging from old analogue seismograms

    Caldeira, Bento; Buforn, Elisa; Borges, José; Bezzeghoud, Mourad

    2017-04-01

    In this work we present a procedure to recover ground motions, in a proper digital structure, from old seismograms on analogue physical supports (paper or microfilm), in order to study the source rupture process by applying modern finite-source inversion tools. Whatever the quality of the analogue data and of the available digitizing technologies, recovering ground motions with accurate metrics from old seismograms is often an intricate procedure. Frequently the general parameters of the analogue instrument response that allow the shape of the ground motions to be recovered (free periods and damping) are known, but the magnification that allows their metric to be recovered is dubious. It is in these situations that the procedure applies. The procedure is based on assigning the moment magnitude value to the integral of the apparent source time function (STF), estimated by deconvolution of a synthetic elementary seismogram from the related observed seismogram, corrected with an instrument response affected by an improper magnification. Two delicate issues in the process are 1) the calculation of the synthetic elementary seismograms, which must consider later phases when applied to large earthquakes (the portions of signal should be 3 or 4 times longer than the rupture time), and 2) the deconvolution to calculate the apparent STF. In the present version of the procedure, the Direct Solution Method was used to compute the elementary seismograms, and the deconvolution was processed in the time domain by an iterative algorithm that constrains the STF to stay positive and time-limited. The method was examined using synthetic data to test its accuracy and robustness. Finally, a set of 17 old analogue seismograms from the 1939 Santa Maria (Azores) earthquake (Mw=7.1) was used to recover the waveforms in the required digital structure, from which inversion allows the finite-source rupture model (slip distribution) to be computed.
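
    The abstract does not spell out the iterative deconvolution algorithm; a projected Landweber iteration is one common way to impose the stated positivity and time-limitation constraints, sketched here on synthetic data (the wavelet, STF length and step rule are invented for illustration, not taken from the paper).

```python
import numpy as np

def positive_deconvolution(obs, elem, n_stf, n_iter=5000):
    """Projected Landweber deconvolution: gradient steps on the misfit
    ||obs - G stf||^2, with the estimate projected onto non-negative,
    time-limited functions after every step."""
    # Convolution matrix G of the elementary (unit-source) seismogram;
    # restricting stf to n_stf samples enforces the time limitation.
    G = np.zeros((len(obs), n_stf))
    for j in range(n_stf):
        G[j:j + len(elem), j] = elem
    step = 1.0 / np.linalg.norm(G, 2) ** 2    # step size ensuring convergence
    stf = np.zeros(n_stf)
    for _ in range(n_iter):
        stf = stf + step * (G.T @ (obs - G @ stf))
        stf = np.maximum(stf, 0.0)            # positivity projection
    return stf

# Synthetic check: a boxcar STF convolved with a decaying elementary pulse.
elem = np.exp(-np.arange(30) / 5.0)
stf_true = np.zeros(20)
stf_true[3:9] = 1.0
obs = np.convolve(elem, stf_true)    # length 49 = 30 + 20 - 1

stf_hat = positive_deconvolution(obs, elem, n_stf=20)
```

    In the procedure above, the integral of the recovered apparent STF is the quantity tied to the moment magnitude, which is what calibrates the dubious magnification.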

  9. 3D Interpolation Method for CT Images of the Lung

    Noriaki Asada

    2003-06-01

    A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart; the motion of the heart is therefore a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. If no special techniques are used in taking the CT images, there are discontinuities among neighboring CT images due to the beating of the heart. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, its envelope is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the corresponding positions of the CT images. The CT images are thus geometrically transformed to the optimal CT images fitting the standard heart best. Since correct transformation of the images is required, an area-oriented interpolation method proposed by us is used for interpolating the transformed images. An attempt to reconstruct a 3-D lung image without discontinuities by a series of such operations is shown. Additionally, the same geometrical transformation method applied to the original projection images is proposed as a more advanced method.

  10. The application of fuzzy-based methods to central nerve fiber imaging

    Axer, Hubertus; Jantzen, Jan; Keyserlingk, Diedrich Graf v.

    2003-01-01

    This paper discusses the potential of fuzzy logic methods within medical imaging. Technical advances have produced imaging techniques that can visualize structures and their functions in the living human body. The interpretation of these images plays a prominent role in diagnostic and therapeutic decisions, so physicians must deal with a variety of image processing methods and their applications. This paper describes three different sources of medical imagery that allow the visualization of nerve fibers in the human brain: (1) an algorithm for automatic segmentation of some parts of the thalamus... Fuzzy logic methods were applied to analyze these pictures from low- to high-level image processing. The solutions presented here are motivated by problems of routine neuroanatomic research, demonstrating fuzzy-based methods to be valuable tools in medical image processing.

  11. Conception and data transfer analysis of an open-source digital image archive designed for radiology

    Teichgraeber, U.K.M.; Lehmkuhl, L.; Harderer, A.; Emmel, D.; Ehrenstein, T.; Ricke, J.; Felix, R.

    2003-01-01

    Purpose: Implementation of a self-designed, web-based digital image archive incorporating the existing DICOM infrastructure to assure distribution of digital pictures and reports and to optimize work flow, with an assessment after three years. Materials and methods: Open-source software was used to guarantee high reliability and cost effectiveness. In view of the rapidly increasing capacity and decreasing cost of hard disks (HDs), HDs were preferred over slower and more expensive magneto-optical disk (MOD) or tape storage systems. The number of installed servers increased from one to 12; by installing HDs of larger capacity, the number of servers should be kept constant. Entry and access of data were analyzed over two 4-month periods (after 1.5 and 2 years of continuous operation). Results: Our digital image archive was found to be very reliable, cost effective and suitable for its designated tasks. As judged from the measured access volume, the average utilization of the system increased by 160%. In the period from January to April 2002, the users accessed 239.8 gigabytes of the stored 873.7 gigabytes of image data (27%). The volume of the stored data grew by 20%, mainly due to an increase in cross-sectional imaging. Conclusion: The challenge of developing a digital image archive with limited financial resources resulted in a practicable and expandable solution. The utilization, number of active users and volume of transferred data have increased significantly. Our concept of utilizing HDs for image storage proved to be successful. (orig.)

  12. Preliminary study of single contrast enhanced dual energy heart imaging using dual-source CT

    Peng Jin; Zhang Longjiang; Zhou Changsheng; Lu Guangming; Ma Yan; Gu Haifeng

    2009-01-01

    Objective: To evaluate the feasibility and preliminary applications of single contrast-enhanced dual-energy heart imaging using dual-source CT (DSCT). Methods: Thirty patients underwent dual-energy heart imaging with DSCT, of whom 6 underwent SPECT or DSA within one week. Two experienced radiologists assessed the image quality of the coronary arteries and the iodine map of the myocardium, and correlated coronary artery stenosis with the perfusion distribution on the iodine map. Results: 100% (300/300) of segments reached diagnostic standards. The mean image score for all patients was 4.68±0.57. Mural coronary artery was present in 10 segments in 5 cases and atherosclerotic plaques in 32 segments in 12 cases, of which 20 segments had ≥50% stenosis and 12 segments <50% stenosis; dual-energy CT coronary angiography was consistent with DSA in 3 patients. 37 segmental perfusion abnormalities on the iodine map were found in 15 cases, including 28 segments supplied by stenotic coronary arteries and 9 segments without coronary stenosis (including three negative segments on SPECT). Conclusion: Single contrast-enhanced dual-energy heart imaging can provide good coronary artery and myocardial perfusion images in patients with an appropriate heart rate, and has the potential for clinical use; further studies are needed. (authors)

  13. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
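
    As general background (a sketch, not SIproc's actual implementation), the out-of-core idea can be illustrated in pure Python: the file is read in fixed-size chunks so that only one chunk is resident in memory at a time, while statistics are accumulated over the whole data set. The file layout and chunk size below are illustrative assumptions.

```python
import array
import os
import tempfile

def write_test_cube(path, n_values):
    """Write n_values float64 samples, a tiny stand-in for a huge hyperspectral cube."""
    a = array.array("d", (float(i % 7) for i in range(n_values)))
    with open(path, "wb") as f:
        a.tofile(f)

def streamed_mean(path, chunk_values=1024):
    """Mean of all samples, reading the file chunk by chunk (out-of-core)."""
    total, count = 0.0, 0
    with open(path, "rb") as f:
        while True:
            chunk = array.array("d")
            try:
                chunk.fromfile(f, chunk_values)
            except EOFError:
                pass  # a short final chunk still holds the items that were read
            if len(chunk) == 0:
                break
            total += sum(chunk)
            count += len(chunk)
    return total / count

path = os.path.join(tempfile.mkdtemp(), "cube.bin")
write_test_cube(path, 10_000)
print(streamed_mean(path))  # -> 2.9994
```

    The same pattern generalizes to per-band statistics or to streaming chunks through a GPU kernel, which is the regime SIproc targets.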

  14. An efficient direct method for image registration of flat objects

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research with many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit pixel intensities without resorting to image features; image-based deformation models are a general direct method for aligning images of deformable objects in 3D space. Nevertheless, they are not well suited to the registration of images of 3D rigid objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness-consistency assumption is used to reconstruct the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing the correspondence between the pixels of two images.
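
    As a minimal illustration of the direct (intensity-based) approach, the sketch below recovers an integer translation between two grayscale images by exhaustively minimizing the sum of squared differences, i.e., the brightness-consistency criterion in its simplest form; the authors' model additionally handles illumination changes and more general transformations.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size grayscale patches."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def crop(img, top, left, h, w):
    return [row[left:left + w] for row in img[top:top + h]]

def align_translation(ref, mov, max_shift=2):
    """Exhaustively search for the integer shift (dy, dx) that minimizes the
    SSD between overlapping central regions (brightness-consistency criterion)."""
    h, w = len(ref), len(ref[0])
    ch, cw = h - 2 * max_shift, w - 2 * max_shift
    ref_c = crop(ref, max_shift, max_shift, ch, cw)
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = ssd(ref_c, crop(mov, max_shift + dy, max_shift + dx, ch, cw))
            if best is None or err < best[0]:
                best = (err, dy, dx)
    return best[1], best[2]

# mov is ref shifted down 1 and right 1 (with wrap-around at the borders):
ref = [[10 * y + x for x in range(7)] for y in range(7)]
mov = [[ref[(y - 1) % 7][(x - 1) % 7] for x in range(7)] for y in range(7)]
print(align_translation(ref, mov))  # -> (1, 1)
```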

  15. Image Processing Methods Usable for Object Detection on the Chessboard

    Beran Ladislav

    2016-01-01

    Full Text Available Image segmentation and object detection are challenging problems in many areas of research. Although many algorithms for image segmentation have been invented, there is no simple algorithm that handles both image segmentation and object detection. Our research is based on a combination of several methods for object detection. The first method suitable for image segmentation and object detection is colour detection. This method is very simple, but it struggles with varying colours: the colour of the segmented object must be determined precisely before any calculations, and in many cases it must be determined manually. An alternative simple method is based on background removal, i.e., on the difference between a reference image and the detected image. In this paper several methods suitable for object detection are described. This research is focused on coloured object detection on a chessboard. The results of this research, combined with neural networks, will be applied to a user-computer checkers game.
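
    The background-removal step described above can be sketched as follows, assuming grayscale images stored as nested lists; the threshold value is an illustrative assumption.

```python
def detect_changed_pixels(reference, current, threshold=30):
    """Background removal: flag pixels whose absolute grayscale difference
    from the reference image exceeds a threshold."""
    return [[abs(r - c) > threshold for r, c in zip(ref_row, cur_row)]
            for ref_row, cur_row in zip(reference, current)]

def bounding_box(mask):
    """Smallest (top, left, bottom, right) box containing all flagged pixels."""
    coords = [(y, x) for y, row in enumerate(mask)
              for x, hit in enumerate(row) if hit]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return min(ys), min(xs), max(ys), max(xs)

# Empty board square vs. the same square with a piece placed on rows 2-3, cols 1-2:
empty = [[100] * 5 for _ in range(5)]
board = [row[:] for row in empty]
for y in range(2, 4):
    for x in range(1, 3):
        board[y][x] = 200   # the piece is much brighter than the square
print(bounding_box(detect_changed_pixels(empty, board)))  # -> (2, 1, 3, 2)
```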

  16. Radioactivity measurements of metallic 192Ir sources by calorimetric methods

    Genka, Tsuguo; Iwamoto, Seikichi; Takeuchi, Norio

    1992-01-01

    The necessity of establishing the traceability of dose measurement in brachytherapy 192 Ir sources is realized by physicians and researchers in the medical field. Standard sources of various shapes such as "hairpin," "single pin," "thin wire," and "seed" for calibrating ionization chambers in hospitals are being demanded. Nominal activities of not only these source products but also the standard sources have so far been specified by "apparent" values. Determination of "absolute" activity by an established means such as 4π β-γ coincidence counting is not practical because quantitative dissolution of metallic iridium is very difficult. We tried to determine the "absolute" activity by a calorimetric method in a fully nondestructive way

  17. Finite element formulation for a digital image correlation method

    Sun Yaofeng; Pang, John H. L.; Wong, Chee Khuen; Su Fei

    2005-01-01

    A finite element formulation for a digital image correlation method is presented that directly determines the complete two-dimensional displacement field during the image correlation process on digital images. The entire image area of interest is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on images. Numerical studies and a real experiment are used to verify the proposed formulation. Results show that image correlation with the finite element formulation is computationally efficient, accurate, and robust
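
    For contrast with the finite element formulation, the subset-based correlation it improves upon can be sketched as follows: a reference subset is tracked by maximizing the zero-normalized cross-correlation (ZNCC) over integer displacements (a pure-Python illustration, not the authors' code).

```python
import math

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma = sum(fa) / len(fa)
    mb = sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db)

def subset(img, top, left, size):
    return [row[left:left + size] for row in img[top:top + size]]

def track_subset(ref, cur, top, left, size=3, search=2):
    """Find the integer displacement of a reference subset by maximizing ZNCC."""
    template = subset(ref, top, left, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = zncc(template, subset(cur, top + dy, left + dx, size))
            if best is None or score > best[0]:
                best = (score, dy, dx)
    return best[1], best[2]

# cur is ref shifted down 1 and right 1 (wrap-around fills the borders):
ref = [[(y * 37 + x * 17) ** 2 % 101 for x in range(9)] for y in range(9)]
cur = [[ref[(y - 1) % 9][(x - 1) % 9] for x in range(9)] for y in range(9)]
print(track_subset(ref, cur, 3, 3))  # -> (1, 1)
```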

  18. Perceptual digital imaging methods and applications

    Lukac, Rastislav

    2012-01-01

    Visual perception is a complex process requiring interaction between the receptors in the eye that sense the stimulus and the neural system and the brain that are responsible for communicating and interpreting the sensed visual information. This process involves several physical, neural, and cognitive phenomena whose understanding is essential to design effective and computationally efficient imaging solutions. Building on advances in computer vision, image and video processing, neuroscience, and information engineering, perceptual digital imaging greatly enhances the capabilities of tradition

  19. Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method

    Li, Jiahang; Shen, Yang; Zhang, Wei

    2018-02-01

    At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear waves, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D in Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and that this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.

  20. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    Purpose: To introduce and validate a kilovoltage (kV) x-ray source model and characterization method to compute absorbed dose accrued from kV x-rays. Methods: The authors propose a simplified virtual point source model and characterization method for a kV x-ray source. The source is modeled by: (1) characterizing the spatial spectral and fluence distributions of the photons at a plane at the isocenter, and (2) creating a virtual point source from which photons are generated to yield the derived spatial spectral and fluence distribution at the isocenter of an imaging system. The spatial photon distribution is determined by in-air relative dose measurements along the transverse (x) and radial (y) directions. The spectrum is characterized using transverse-axis half-value layer measurements and the nominal peak potential (kVp). This source modeling approach is used to characterize a Varian® on-board imager (OBI®) for four default cone-beam CT beam qualities: beams using a half bowtie filter (HBT) with 110 and 125 kVp, and a full bowtie filter (FBT) with 100 and 125 kVp. The source model and characterization method were validated by comparing dose computed by the authors' in-house software (kVDoseCalc) to relative dose measurements in a homogeneous and a heterogeneous block phantom comprised of tissue-, bone-, and lung-equivalent materials. Results: The characterized beam qualities and spatial photon distributions are comparable to reported values in the literature. Agreement between computed and measured percent depth-dose curves is ⩽2% in the homogeneous block phantom and ⩽2.5% in the heterogeneous block phantom. Transverse-axis profiles taken at depths of 2 and 6 cm in the homogeneous block phantom show agreement within 4%. All transverse-axis dose profiles in water and in bone- and lung-equivalent materials for beams using a HBT agree within 5%. Measured profiles of FBT beams in bone and lung-equivalent materials were higher than their

  1. A portable x-ray source and method for radiography

    Golovanivsky, K.S.

    1996-01-01

    A portable x-ray source that produces sufficient x-ray flux to form high-quality x-ray images on x-ray film. The source includes a vacuum chamber filled with a heavy-atomic-weight gas at low pressure and an x-ray emitter. The chamber sits in a magnetic field and an oscillating electric field and generates an electron cyclotron resonance (ECR) plasma having a ring of energetic electrons inside the chamber. The electrons bombard the x-ray emitter, which in turn produces x-rays. A pair of magnetic members generates an axisymmetric magnetic mirror trap inside the chamber. The chamber may be nested within a microwave resonant cavity and between the magnets, or the chamber and the microwave cavity may form a single composite structure. (author)

  2. Linear source approximation scheme for method of characteristics

    Tang Chuntao

    2011-01-01

    The method of characteristics (MOC) for solving the neutron transport equation on unstructured mesh has become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with a flat-source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme and a corresponding modification for negative source distributions are proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)
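
    The flat-source (SC) scheme mentioned above can be illustrated with the standard step-characteristics update applied segment by segment along one characteristic; this is generic MOC background, not the PEACH implementation.

```python
import math

def sweep_flat_source(psi_in, segments):
    """Transport sweep along one characteristic using the flat-source
    step-characteristics (SC) update:
      psi_out = psi_in * exp(-sigma_t * s) + (q / sigma_t) * (1 - exp(-sigma_t * s))
    Each segment is (length, sigma_t, q) with a spatially flat source q."""
    psi = psi_in
    for length, sigma_t, q in segments:
        att = math.exp(-sigma_t * length)
        psi = psi * att + (q / sigma_t) * (1.0 - att)
    return psi

# Infinite-medium consistency check: with psi_in = q / sigma_t, the angular
# flux stays at q / sigma_t through every segment.
segments = [(0.5, 2.0, 4.0)] * 3   # q / sigma_t = 2.0 in every region
print(sweep_flat_source(2.0, segments))  # stays at q / sigma_t = 2.0 (up to rounding)
```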

  3. Energy-Based Acoustic Source Localization Methods: A Survey

    Wei Meng

    2017-02-01

    Full Text Available Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least squares (NLS) methods, have been reported. In the literature there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for WSNs due to their limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single- and multiple-source localization problems, to discuss their merits and demerits, and to point out possible future research directions.
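
    As background for the energy-based formulation, the sketch below localizes a single source by grid-search nonlinear least squares under the common decay model E_i = S / d_i², with unit sensor gains; the grid extent and step are illustrative assumptions.

```python
def nls_locate(sensors, energies, grid_step=0.25, extent=10.0):
    """Grid-search nonlinear-least-squares source localization from acoustic
    energy readings, assuming the decay model E_i = S / d_i**2 (unit gains)."""
    best = None
    steps = int(round(extent / grid_step)) + 1
    for iy in range(steps):
        for ix in range(steps):
            x, y = ix * grid_step, iy * grid_step
            inv = [1.0 / max((sx - x) ** 2 + (sy - y) ** 2, 1e-9)
                   for sx, sy in sensors]
            # closed-form least-squares source power S for this candidate point
            s = sum(e * v for e, v in zip(energies, inv)) / sum(v * v for v in inv)
            resid = sum((e - s * v) ** 2 for e, v in zip(energies, inv))
            if best is None or resid < best[0]:
                best = (resid, x, y)
    return best[1], best[2]

# Noise-free energies from a source at (3.0, 4.0) with power 100:
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
energies = [100.0 / ((sx - 3.0) ** 2 + (sy - 4.0) ** 2) for sx, sy in sensors]
print(nls_locate(sensors, energies))  # -> (3.0, 4.0)
```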

  4. Probabilistic methods applied to electric source problems in nuclear safety

    Carnino, A.; Llory, M.

    1979-01-01

    Nuclear safety analysts are frequently asked to quantify safety margins and evaluate hazards. For this purpose, probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety analysis, they are now commonly used at the reliability or availability stages of systems as well as for determining likely accident sequences. In this paper an application linked to the problem of electric sources is described, along with the methods used: the calculation of the probability of losing all the electric sources of a pressurized water nuclear power station, the evaluation of the reliability of diesels by event trees of failures, and the determination of accident sequences which could be brought about by the 'total electric source loss' initiator and affect the installation or the environment

  5. New LSB-based colour image steganography method to enhance ...

    Mustafa Cem kasapbaşi

    2018-04-27

    Apr 27, 2018 ... evaluate the proposed method, comparative performance tests are carried out against different spatial image ... image steganography applications based on LSB are ..... worst case scenario could occur when having highest.
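
    Since the record above is only a fragment, here is a generic illustration of spatial-domain LSB embedding (not necessarily the authors' enhanced method): each message bit replaces the least significant bit of one pixel value, so the cover changes by at most one intensity level per pixel.

```python
def embed_lsb(pixels, message_bits):
    """Hide message bits in the least significant bit of successive pixel values."""
    if len(message_bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = pixels[:]
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return stego

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 33, 250, 7, 88, 201, 64, 15]   # one 8-bit channel value per pixel
secret = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, 8))                          # -> [1, 0, 1, 1, 0, 1, 0, 0]
print(max(abs(a - b) for a, b in zip(cover, stego)))  # -> 1
```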

  6. ISAR imaging using the instantaneous range instantaneous Doppler method

    Wazna, TM

    2015-10-01

    Full Text Available In Inverse Synthetic Aperture Radar (ISAR) imaging, the Range Instantaneous Doppler (RID) method is used to compensate for the nonuniform rotational motion of the target that degrades the Doppler resolution of the ISAR image. The Instantaneous Range...

  7. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2015-03-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when white and gray matter are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Lidar method to estimate emission rates from extended sources

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  9. Improved radionuclide bone imaging agent injection needle withdrawal method can improve image quality

    Qin Yongmei; Wang Laihao; Zhao Lihua; Guo Xiaogang; Kong Qingfeng

    2009-01-01

    Objective: To investigate the effect of an improved needle withdrawal method for radionuclide bone imaging agent injection on whole-body bone scan image quality. Methods: In the routine group (117 cases), the bone imaging agent was injected directly through an elbow vein; after rapid withdrawal of the needle, the puncture point was pressed with a cotton swab only briefly. In the improved group (117 cases), both the skin and vessel entry points were pressed with two cotton swabs for 5 min or more while the needle was withdrawn. Whole-body planar bone SPECT imaging was performed 2 hours later. Results: The uptake rate of the imaging agent at the injection site was 16.24% in the conventional group and 2.56% in the improved group. Conclusion: The modified needle withdrawal method for bone imaging agent injection significantly decreases imaging agent uptake at the injection site and can improve whole-body bone image quality. (authors)

  10. Comparative analysis of different methods for image enhancement

    吴笑峰; 胡仕刚; 赵瑾; 李志明; 李劲; 唐志军; 席在芳

    2014-01-01

    Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and suppressing other information, it can improve the visual effect of an image. The objective of this work is to apply image enhancement to grayscale images using different techniques. After the fundamental methods of image enhancement are demonstrated, image enhancement algorithms based on the spatial and frequency domains are systematically investigated and compared, and their advantages and defects are analyzed. Algorithms for wavelet-based image enhancement are also derived and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is well suited to image enhancement. The techniques are compared using the mean (μ), standard deviation (s), mean square error (MSE) and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that image enhancement based on the wavelet transform is effective for image de-noising and enhancement, and that the wavelet transform modulus maxima method is one of the best methods for image enhancement.
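
    The comparison metrics mentioned above, MSE and PSNR, can be computed for 8-bit grayscale images as in this short sketch (the sample values are illustrative):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size grayscale images."""
    n = sum(len(row) for row in a)
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = mse(a, b)
    return math.inf if e == 0 else 10.0 * math.log10(peak ** 2 / e)

original = [[52, 55, 61], [59, 79, 61], [76, 62, 60]]
noisy = [[p + 5 for p in row] for row in original]   # a uniform +5 error
print(mse(original, noisy))              # -> 25.0
print(round(psnr(original, noisy), 2))   # -> 34.15
```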

  11. System and method for image mapping and visual attention

    Peters, II, Richard A. (Inventor)

    2011-01-01

    A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing all attentional locations at each node.

  12. Apparatus and method for X-ray image processing

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that renders X-ray images or X-ray subtraction images with strong reduction of stray radiation. (Auth.)

  13. A comparative study on medical image segmentation methods

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

    Full Text Available Image segmentation plays an important role in medical imaging and has been a relevant research area in computer vision and image analysis. Many segmentation algorithms have been proposed for medical images. This paper reviews segmentation methods for medical images, dividing them into five categories: region based, boundary based, model based, hybrid based and atlas based. The principal ideas, advantages and disadvantages of each category in segmenting different medical images are discussed.

  14. An Image Encryption Method Based on Bit Plane Hiding Technology

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

    A novel image hiding method based on correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "Exclusive-OR" operation on the images obtained in the first step. Finally, by employing image fusion techniques, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.
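
    A simplified sketch of the bit-plane/XOR idea (the paper's full scheme distributes planes across several open images and adds image fusion): a secret bit plane is XORed with a key bit plane taken from the cover itself before being embedded, so recovery is a second XOR.

```python
def bit_plane(pixels, k):
    """Extract bit plane k (0 = least significant) of an 8-bit grayscale image."""
    return [(p >> k) & 1 for p in pixels]

def hide_plane(cover, secret_plane, k):
    """Replace bit plane k of the cover with secret XOR key, where the key bit
    is taken from the cover's own plane k+1 (which embedding leaves untouched)."""
    out = []
    for p, s in zip(cover, secret_plane):
        key = (p >> (k + 1)) & 1
        b = s ^ key
        out.append((p & ~(1 << k)) | (b << k))
    return out

def recover_plane(stego, k):
    """Undo the hiding: XOR plane k with the key plane k+1."""
    return [((p >> k) & 1) ^ ((p >> (k + 1)) & 1) for p in stego]

cover = [200, 13, 77, 128, 255, 0, 66, 91]
secret = [1, 1, 0, 0, 1, 0, 1, 0]
stego = hide_plane(cover, secret, 0)
print(recover_plane(stego, 0))  # -> [1, 1, 0, 0, 1, 0, 1, 0]
```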

  15. Interferometrically enhanced sub-terahertz picosecond imaging utilizing a miniature collapsing-field-domain source

    Vainshtein, Sergey N.; Duan, Guoyong; Mikhnev, Valeri A.; Zemlyakov, Valery E.; Egorkin, Vladimir I.; Kalyuzhnyy, Nikolay A.; Maleev, Nikolai A.; Näpänkangas, Juha; Sequeiros, Roberto Blanco; Kostamovaara, Juha T.

    2018-05-01

    Progress in terahertz spectroscopy and imaging is mostly associated with femtosecond laser-driven systems, while solid-state sources, mainly sub-millimetre integrated circuits, are still in an early development phase. As simple and cost-efficient an emitter as a Gunn oscillator could cause a breakthrough in the field, provided its frequency limitations could be overcome. Proposed here is an application of the recently discovered collapsing field domains effect that permits sub-THz oscillations in sub-micron semiconductor layers thanks to nanometer-scale powerfully ionizing domains arising due to negative differential mobility in extreme fields. This shifts the frequency limit by an order of magnitude relative to the conventional Gunn effect. Our first miniature picosecond pulsed sources cover the 100-200 GHz band and promise milliwatts up to ˜500 GHz. Thanks to the method of interferometrically enhanced time-domain imaging proposed here and the low single-shot jitter of ˜1 ps, our simple imaging system provides sufficient time-domain imaging contrast for fresh-tissue terahertz histology.

  16. Comparison of whole-body-imaging methods

    Rollo, F.D.; Hoffer, P.

    1977-01-01

    Currently there are four different devices that have found clinical utility in whole-body imaging: the rectilinear scanner, the multicrystal whole-body scanner, the Anger-type camera with a whole-body imaging table, and the tomoscanner. In this text, the basic theory of operation and a discussion of the advantages and disadvantages in whole-body imaging are presented for each device. When applicable, a comparative assessment of the various devices is also presented. There is no simple answer to the question "which whole-body imaging device is best." Institutions with a very heavy whole-body imaging load may prefer to use an already available dual-headed rectilinear scanner system for these studies rather than invest in a new instrument. Institutions with moderate loads may wish to invest in moving-table or moving-camera devices, which make whole-body imaging more convenient while retaining the basic flexibility of the camera. The large-field Anger camera, with or without motion, offers another flexible option. A laboratory with a very heavy whole-body imaging load may select efficiency over flexibility, thereby freeing up other instruments for additional studies. Finally, reliability as well as availability and quality of local service must be considered; design features of an instrument become irrelevant when it is broken down and awaiting repair

  17. Image Registration Using Single Cluster PHD Methods

    Campbell, M.; Schlangen, I.; Delande, E.; Clark, D.

    Cadets in the Department of Physics at the United States Air Force Academy are using the technique of slitless spectroscopy to analyze the spectra from geostationary satellites during glint season. The equinox periods of the year are particularly favorable for earth-based observers to detect specular reflections off satellites (glints), which have been observed in the past using broadband photometry techniques. Three seasons of glints were observed and analyzed for multiple satellites, as measured across the visible spectrum using a diffraction grating on the Academy’s 16-inch, f/8.2 telescope. It is clear from the results that the glint maximum wavelength decreases relative to the time periods before and after the glint, and that the spectral reflectance during the glint is less like a blackbody. These results are consistent with the presumption that solar panels are the predominant source of specular reflection. The glint spectra are also quantitatively compared to different blackbody curves and the solar spectrum by means of absolute differences and standard deviations. Our initial analysis appears to indicate a potential method of determining relative power capacity.

  18. Methods of fetal MR: beyond T2-weighted imaging

    Brugger, Peter C. [Center of Anatomy and Cell Biology, Integrative Morphology Group, Medical University of Vienna, Waehringerstrasse 13, 1090 Vienna (Austria)]. E-mail: peter.brugger@meduniwien.ac.at; Stuhr, Fritz [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Lindner, Christian [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Prayer, Daniela [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria)

    2006-02-15

    The present work reviews the basic methods of performing fetal magnetic resonance imaging (MRI). Since fetal MRI differs in many respects from a postnatal study, several factors have to be taken into account to achieve satisfying image quality. Image quality depends on adequate positioning of the pregnant woman in the magnet, use of appropriate coils and the selection of sequences. Ultrafast T2-weighted sequences are regarded as the mainstay of fetal MR imaging. However, additional sequences, such as T1-weighted, diffusion-weighted and echoplanar imaging, may provide further information, especially in regions of the fetal body outside the central nervous system.

  20. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Sonali Sachin Sankpal

    2016-01-01

    Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds in the water are responsible for both. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene nonuniformly, producing a bright spot at the center with a dark surrounding region. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most underwater image enhancement techniques, and very few methods report results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional nonuniform illumination correction methods using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
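    The maximum-likelihood estimate of the Rayleigh scale parameter that underlies this kind of correction has a simple closed form; the following is a minimal stand-alone sketch (the function name and synthetic check are ours, not the authors' implementation):

    ```python
    import math
    import random

    def rayleigh_scale_mle(pixels):
        """MLE of the Rayleigh scale parameter sigma.

        For i.i.d. samples x_i ~ Rayleigh(sigma), maximizing the likelihood
        gives the closed form sigma^2 = sum(x_i^2) / (2 * N).
        """
        return math.sqrt(sum(x * x for x in pixels) / (2 * len(pixels)))

    # Synthetic check: draw Rayleigh(sigma=2) samples by inverse-transform
    # sampling, x = sigma * sqrt(-2 ln U), and recover the scale.
    random.seed(0)
    samples = [2.0 * math.sqrt(-2.0 * math.log(1.0 - random.random()))
               for _ in range(20000)]
    print(rayleigh_scale_mle(samples))  # close to 2.0
    ```

    Mapping each local window of the image through this estimator is one plausible way to normalize the spatially varying illumination before enhancement.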

  1. Hiding a Covert Digital Image by Assembling the RSA Encryption Method and the Binary Encoding Method

    Kuang Tsan Lin

    2014-01-01

    The Rivest-Shamir-Adleman (RSA) encryption method and the binary encoding method are assembled to form a hybrid hiding method that hides a covert digital image in a dot-matrix holographic image. First, the RSA encryption method transforms the covert image into an RSA-encrypted data string. Then, all the elements of the data string are converted into binary data. Finally, the binary data are encoded into the dot-matrix holographic image. The pixels of the dot-matrix holographic image contain seven groups of codes used for reconstructing the covert image: identification codes, covert-image dimension codes, covert-image gray-level codes, pre-RSA bit number codes, RSA key codes, post-RSA bit number codes, and information codes. The covert image reconstructed from the dot-matrix holographic image is exactly the same as the original covert image.
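    The encrypt-then-binarize pipeline can be illustrated with textbook RSA on toy primes (a sketch only: real use requires large keys, and the dot-matrix embedding step is represented here only by the serialized bit string):

    ```python
    # Toy RSA parameters: p = 61, q = 53, n = p*q = 3233, phi = 3120,
    # public exponent e = 17, private exponent d = 2753 (e*d = 1 mod phi).
    N, E, D = 3233, 17, 2753

    def rsa_encrypt(m, e=E, n=N):
        return pow(m, e, n)

    def rsa_decrypt(c, d=D, n=N):
        return pow(c, d, n)

    def to_bits(value, width=12):
        # n = 3233 < 2^12, so 12 bits suffice for each ciphertext element.
        return format(value, '0{}b'.format(width))

    # Hide a tiny "covert image" (gray levels 0..255): encrypt each pixel,
    # then serialize the ciphertexts as a binary string for embedding.
    pixels = [0, 17, 128, 255]
    cipher = [rsa_encrypt(p) for p in pixels]
    bitstream = ''.join(to_bits(c) for c in cipher)

    # Reconstruction: slice the bitstream into 12-bit words and decrypt.
    recovered = [rsa_decrypt(int(bitstream[i:i + 12], 2))
                 for i in range(0, len(bitstream), 12)]
    print(recovered)  # == pixels
    ```

    The covert-image dimension, gray-level and bit-number codes described in the abstract would, in this picture, be extra header fields prepended to `bitstream` before embedding.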

  2. Imaging of noncarious cervical lesions by means of a fast swept source optical coherence tomography system

    Stoica, Eniko T.; Marcauteanu, Corina; Bradu, Adrian; Sinescu, Cosmin; Topala, Florin Ionel; Negrutiu, Meda Lavinia; Duma, Virgil Florin; Podoleanu, Adrian Gh.

    2014-01-01

    Non-carious cervical lesions (NCCL) are defined as the loss of tooth substance at the cemento-enamel junction and are caused by abrasion, erosion and/or occlusal overload. In this paper we show that our fast swept-source OCT system is a valuable tool for tracking the evolution of NCCL over time. On several extracted bicuspids, four levels of NCCL were artificially created, and OCT scanning was performed after each level of induced lesion. B-scans were acquired and 3D reconstructions were generated. The swept-source OCT instrument used in this study has a central wavelength of 1050 nm, a sweeping range of 106 nm (measured at 10 dB), an average output power of 16 mW and a sweeping rate of 100 kHz. A depth resolution of 12 μm in air, determined by the swept source, was obtained experimentally. NCCL were measured on the B-scans as 2D images and as 3D reconstructions (volumes). For quantitative evaluation of the volumes, the ImageJ software was used. By calculating the areas of lost tissue corresponding to each difference of B-scans, the final volumes of the NCCL were obtained. This swept-source OCT method allows dynamic diagnosis of NCCL over time.

  3. The first VLBI image of an infrared-faint radio source

    Middelberg, E.; Norris, R. P.; Tingay, S.; Mao, M. Y.; Phillips, C. J.; Hotan, A. W.

    2008-11-01

    Context: We investigate the joint evolution of active galactic nuclei and star formation in the Universe. Aims: In the 1.4 GHz survey with the Australia Telescope Compact Array of the Chandra Deep Field South and the European Large Area ISO Survey - S1 we have identified a class of objects which are strong in the radio but have no detectable infrared or optical counterparts. This class has been called Infrared-Faint Radio Sources (IFRS); 53 of the 2002 sources have been classified as IFRS. It is not known what these objects are. Methods: To narrow down the many possible explanations of their nature, we observed four sources with the Australian Long Baseline Array. Results: We detected and imaged one of the four sources observed. Assuming that the source is at high redshift, we find its properties in agreement with those of Compact Steep Spectrum sources. However, due to the lack of optical and infrared data the constraints are not particularly strong.

  4. Research on image complexity evaluation method based on color information

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. A theoretical analysis first divides complexity at the subjective level into three classes: low, medium and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the values obtained agree well with the complexity perceived by human vision, so the color image complexity measure has a certain reference value.

  5. A new method for mobile phone image denoising

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then using the remaining neighborhood pixels to restore the current pixel. Within the framework, the strength of chrominance denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms several representative denoising methods in terms of both objective measures and visual evaluation.
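    The core filtering step (exclude neighbors that deviate strongly from the median, then average the survivors) might be sketched as follows; this is our own minimal scalar version, whereas the paper works with vector medians and a brightness-controlled threshold:

    ```python
    from statistics import median

    def filter_pixel(neighborhood, threshold):
        """Restore one pixel: exclude neighbors deviating from the
        median by more than `threshold`, then average the survivors."""
        med = median(neighborhood)
        kept = [v for v in neighborhood if abs(v - med) <= threshold]
        if not kept:          # degenerate case: fall back to the median
            return med
        return sum(kept) / len(kept)

    # A 3x3 neighborhood with two impulse-noise outliers (255 and 0):
    noisy = [100, 101, 99, 102, 255, 98, 100, 0, 101]
    print(filter_pixel(noisy, threshold=10))  # about 100.1
    ```

    Making `threshold` a function of local brightness would mimic the brightness-dependent chrominance denoising the abstract describes.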

  6. The new fabrication method of standard surface sources

    Sato, Yasushi E-mail: yss.sato@aist.go.jp; Hino, Yoshio; Yamada, Takahiro; Matsumoto, Mikio

    2004-04-01

    We developed a new fabrication method for standard surface sources that uses an inkjet printer with inks mixed with a radioactive material to print on a sheet of paper. Three printed test patterns were prepared: (1) 100 mm × 100 mm uniformity-test patterns, (2) positional-resolution test patterns with different widths and intervals of straight lines, and (3) logarithmic intensity test patterns with different radioactive intensities. The results revealed that the fabricated standard surface sources had high uniformity, high positional resolution, arbitrary shapes and a broad intensity range.

  7. Computational methods for high-energy source shielding

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for the future development of such a Monte Carlo code is given.

  8. From the Kirsch-Kress potential method via the range test to the singular sources method

    Potthast, R; Schulz, J

    2005-01-01

    We review three reconstruction methods for inverse obstacle scattering problems. We analyse the relation between the Kirsch-Kress potential method (1986), the range test of Kusiak, Potthast and Sylvester (2003) and the singular sources method of Potthast (2000). In particular, we show that the range test is a logical extension of the Kirsch-Kress method into the category of sampling methods employing the tool of domain sampling. We then show how a multi-wave version of the range test can be set up and work out its relation to the singular sources method. Numerical examples and demonstrations are provided.

  9. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

    For nutrition practitioners and researchers, assessing the dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering the development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim to capture all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges, followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment which are easier to incorporate into daily routines. The studies presented illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating-occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regard to age of user, degree of error and cost.

  10. Separation method of heavy-ion particle image from gamma-ray mixed images using an imaging plate

    Yamadera, A; Ohuchi, H; Nakamura, T; Fukumura, A

    1999-01-01

    We have developed a method for separating alpha-ray and gamma-ray images recorded on an imaging plate (IP). The IP, from which the first image was read out by an image reader, was annealed at 50 deg. C for 2 h in a drying oven, and a second image was then read out. It was found that the annealing ratio k, defined as the ratio of the photo-stimulated luminescence (PSL) density in the first measurement to that in the second measurement, differs for alpha rays and gamma rays. By subtracting the second image multiplied by a factor k from the first image, the alpha-ray image was separated from the mixed alpha- and gamma-ray image. This method was applied to identify the images of high-energy helium, carbon and neon particles using the heavy-ion medical accelerator HIMAC. (author)
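    The subtraction step can be checked numerically with made-up annealing ratios (a toy sketch, not the authors' values): if the two readouts are F = A1 + G1 and S = A1/k_alpha + G1/k_gamma, then F - k_gamma*S cancels the gamma term and leaves the alpha image scaled by (1 - k_gamma/k_alpha).

    ```python
    def separate_alpha(first, second, k_gamma):
        """Subtract k_gamma times the second readout from the first;
        the gamma-ray contribution cancels, leaving a scaled alpha image."""
        return [f - k_gamma * s for f, s in zip(first, second)]

    # Toy 1-D "images" with hypothetical annealing ratios.
    k_alpha, k_gamma = 4.0, 2.0
    alpha1 = [10.0, 0.0, 5.0]   # alpha PSL at the first readout
    gamma1 = [3.0, 8.0, 0.0]    # gamma PSL at the first readout
    first = [a + g for a, g in zip(alpha1, gamma1)]
    second = [a / k_alpha + g / k_gamma for a, g in zip(alpha1, gamma1)]

    alpha_only = separate_alpha(first, second, k_gamma)
    # Each pixel equals alpha1 * (1 - k_gamma/k_alpha) = alpha1 * 0.5.
    print(alpha_only)  # [5.0, 0.0, 2.5]
    ```

    Note that the gamma-only pixel goes exactly to zero while the alpha pixels survive, which is the property the IP method exploits.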

  11. Quantum dynamic imaging theoretical and numerical methods

    Ivanov, Misha

    2011-01-01

    Studying and using light or "photons" to image and then to control and transmit molecular information is among the most challenging and significant research fields to emerge in recent years. One of the fastest growing areas involves research in the temporal imaging of quantum phenomena, ranging from molecular dynamics in the femto (10-15 s) time regime for atomic motion to the atto (10-18 s) time scale of electron motion. In fact, the attosecond "revolution" is now recognized as one of the most important recent breakthroughs and innovations in the science of the 21st century. A major participant in the development of ultrafast femto- and attosecond temporal imaging of molecular quantum phenomena has been theory and numerical simulation of the nonlinear, non-perturbative response of atoms and molecules to ultrashort laser pulses. Therefore, imaging quantum dynamics is a new frontier of science requiring advanced mathematical approaches for analyzing and solving spatial and temporal multidimensional partial differential equations.

  12. Blind compressed sensing image reconstruction based on alternating direction method

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. It ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed algorithm can recover high-quality image signals under under-sampling conditions.

  13. Novel welding image processing method based on fractal theory

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the different fractal characters of the image edges, the fuzzy regions containing edges are detected, and the edges are then identified with the Sobel operator and fitted by the least-squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  14. Blind Methods for Detecting Image Fakery

    Mahdian, Babak; Saic, Stanislav

    2010-01-01

    Vol. 25, No. 4 (2010), p. 18-24 ISSN 0885-8985 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords: Image forensics * Image fakery * Forgery detection * Authentication Subject RIV: BD - Theory of Information Impact factor: 0.179, year: 2010 http://library.utia.cas.cz/separaty/2010/ZOI/saic-0343316.pdf

  15. Improvement of Source Number Estimation Method for Single Channel Signal.

    Zhi Dong

    Source number estimation methods for single-channel signals are investigated and improvements to each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), performs better than GDE at low SNR but cannot handle signals containing colored noise, whereas GDE can eliminate the influence of colored noise but performs unsatisfactorily at low SNR. To resolve these contradictions, this work makes substantial improvements to both methods: a diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix and so improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is greatly improved.

  16. Speckle imaging using the principle value decomposition method

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed by use of an ''optimal'' filtering approach. This method is based on a nonlinear integral equation which is solved by principle value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures

  17. Survey: interpolation methods for whole slide image processing.

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size, and image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for the calculations and the results of quantification performance on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
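    The evaluation protocol (scale down, rescale to the original size with the same algorithm, compare to the original) can be sketched with nearest-neighbour interpolation standing in for any of the nine surveyed methods; this is our own illustration, not the survey's code:

    ```python
    def resize_nearest(img, new_h, new_w):
        """Nearest-neighbour resize of a 2-D list `img`."""
        h, w = len(img), len(img[0])
        return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
                for y in range(new_h)]

    def mse(a, b):
        """Mean squared error between two same-sized 2-D lists."""
        n = len(a) * len(a[0])
        return sum((pa - pb) ** 2
                   for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

    # Down-scale a 4x4 test image to 2x2, rescale back, measure the loss.
    img = [[0, 0, 8, 8],
           [0, 0, 8, 8],
           [4, 4, 2, 2],
           [4, 4, 2, 2]]
    small = resize_nearest(img, 2, 2)
    restored = resize_nearest(small, 4, 4)
    print(mse(img, restored))  # 0.0 for this block-constant image
    ```

    Running the same loop with each candidate interpolation kernel and comparing the errors (plus timing) reproduces the survey's comparison scheme in miniature.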

  18. Stochastic Industrial Source Detection Using Lower Cost Methods

    Thoma, E.; George, I. J.; Brantley, H.; Deshmukh, P.; Cansler, J.; Tang, W.

    2017-12-01

    Hazardous air pollutants (HAPs) can be emitted from a variety of sources in industrial facilities, energy production, and commercial operations. Stochastic industrial sources (SISs) represent a subcategory of emissions from fugitive leaks, variable area sources, malfunctioning processes, and improperly controlled operations. From the shared perspective of industries and communities, cost-effective detection of mitigable SIS emissions can yield benefits such as safer working environments, cost savings through reduced product loss, lower air-shed pollutant impacts, and improved transparency and community relations. Methods for SIS detection can be categorized by their spatial regime of operation, ranging from component-level inspection to high-sensitivity kilometer-scale surveys. Methods can be temporally intensive (providing snapshot measures) or sustained, in both time-integrated and continuous forms. Each method category has demonstrated utility; however, broad adoption (or routine use) has thus far been limited by cost and implementation viability. Described here are a subset of SIS methods explored by the U.S. EPA's next generation emission measurement (NGEM) program that focus on lower cost methods and models. An emerging systems approach that combines multiple forms to help compensate for the reduced performance of lower cost systems is discussed. A case study of a multi-day HAP emission event observed by a combination of low-cost sensors, open-path spectroscopy, and passive samplers is detailed. Early field results of a novel field gas chromatograph coupled with a fast HAP concentration sensor are described. Progress toward near real-time inverse source triangulation assisted by pre-modeled facility profiles using the Los Alamos Quick Urban & Industrial Complex (QUIC) model is discussed.

  19. On a selection method of imaging condition in scintigraphy

    Ikeda, Hozumi; Kishimoto, Kenji; Shimonishi, Yoshihiro; Ohmura, Masahiro; Kosakai, Kazuhisa; Ochi, Hironobu

    1992-01-01

    The selection of imaging conditions in scintigraphy was evaluated using the analytic hierarchy process. First, a selection method was derived by considering image quality and imaging time. The influence on image quality was assumed to depend on changes in system resolution, count density, image size, and image density, while the influence on imaging time was assumed to depend on changes in system sensitivity and data acquisition time. A phantom study was performed for paired comparison of these selection factors: Rollo phantom images were taken while varying count density, image size, and image density. Image quality was scored by visual evaluation, comparing pairs of scintigrams for the clearer cold lesion. Imaging time was expressed as relative values for changes in count density; system resolution and system sensitivity were held constant in this study. The analytic hierarchy process was then applied to these values for the selection of imaging conditions. We conclude that the selection of imaging conditions can be analyzed quantitatively using the analytic hierarchy process, and that this analysis advances the theoretical treatment of imaging technique. (author)
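    The analytic hierarchy process turns a pairwise-comparison matrix of such factors into priority weights; the row geometric-mean approximation of the principal eigenvector is a common shortcut. The following is a generic sketch with hypothetical comparison values, not the authors' computation:

    ```python
    import math

    def ahp_weights(M):
        """Priority weights from a pairwise-comparison matrix via the
        row geometric-mean approximation of the principal eigenvector."""
        gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
        total = sum(gm)
        return [g / total for g in gm]

    # Hypothetical judgments: image quality rated 3x as important as
    # imaging time and 5x as important as a third factor.
    M = [[1.0, 3.0, 5.0],
         [1 / 3, 1.0, 2.0],
         [1 / 5, 1 / 2, 1.0]]
    weights = ahp_weights(M)
    print(weights)  # roughly [0.65, 0.23, 0.12], summing to 1
    ```

    For a perfectly consistent matrix the geometric-mean weights coincide with the exact principal eigenvector; for nearly consistent judgments they are a standard approximation.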

  20. 252Cf-source-driven neutron noise analysis method

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that indicate the broad range of applicability of the method. The neutron multiplication factor k_eff has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank typical of a fuel processing or reprocessing plant, k_eff values between 0.92 and 0.5 were satisfactorily determined using a simple point-kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments and the development of theoretical methods to predict the experimental observables.

  1. Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method

    Desmal, Abdulla

    2014-07-01

    A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
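    The linear shrinkage referred to here is typically realized by a soft-thresholding operator applied to the contrast iterate between Newton steps; a minimal sketch of that operator (not the paper's solver) is:

    ```python
    def soft_threshold(x, t):
        """Soft-thresholding (shrinkage): shrink |x| by t and zero out
        anything smaller, which promotes sparsity in the solution."""
        if x > t:
            return x - t
        if x < -t:
            return x + t
        return 0.0

    # Applied elementwise to an iterate, small coefficients are pruned:
    iterate = [3.0, -0.4, 0.9, -2.5, 0.1]
    print([soft_threshold(v, 1.0) for v in iterate])
    # [2.0, 0.0, 0.0, -1.5, 0.0]
    ```

    Reducing the threshold t as the Newton iterations converge, as the abstract describes for the penalty weight, trades early robustness for late-stage accuracy.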

  3. Methods of filtering the graph images of the functions

    Олександр Григорович Бурса

    2017-06-01

    The theoretical aspects of cleaning raster images of scanned function graphs from digital, chromatic and luminance distortions using computer graphics techniques are considered, and the basic types of distortion characteristic of such images are stated. To suppress the distortions, several methods are suggested that provide high quality in the resulting images while preserving their topological features. The paper describes the techniques developed and improved by the authors: a method of cleaning the image of distortions by iterative contrasting, based on a step-by-step 1% increase of the contrast in the graph; a method of restoring distorted fine features, based on thinning the kernel of a known contrast-enhancing convolution filter (the allowable thinning radius of the convolution kernel that preserves the graph lines has been established); and an integration technique combining the contrast-based noise reduction method and the fine-feature restoration method with the known σ-filter. Each method in the complex is theoretically substantiated. The developed methods treat both the entire image (global processing) and its fragments (local processing). Metrics for assessing the quality of the resulting image under global and local processing have been chosen and substantiated, and the corresponding formulas are given. The proposed complex of methods for cleaning grayscale images of function graphs is adaptive to the form of the image carrier and to the level and distribution of distortion in the image. The presented results of testing the developed complex of methods on a representative sample of images confirm its effectiveness.

  4. Wavelet imaging cleaning method for atmospheric Cherenkov telescopes

    Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.

    2002-07-01

    We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.

  5. A gamma-source method of measuring soil moisture

    Al-Jeboori, M.A.; Ameen, I.A.

    1986-01-01

    Water content in a soil column was measured using a NaI scintillation detector and a 5 mCi Cs-137 gamma source. The measurements were made with a backscatter gauge, restricted to scattering angles of less than π/2 to overcome the effect of soil type. A 3 cm air gap was maintained between the front of the detector and the wall of the soil container in order to increase the counting rate. The distance between the center of the source and the center of the backscatter detector was 14 cm. The accuracy of the measurements was 0.63. For comparison, a direct-ray method was used to measure the soil moisture; it gave an error of 0.65. Results of the two methods were compared with the gravimetric method, which gave errors of 0.18 g/g and 0.17 g/g for the direct and backscatter methods, respectively. The quick direct method was used to determine the gravimetric and volumetric percentage constants, which were found to be 1.62 and 0.865, respectively. The method was then used to measure the water content in the layers of the soil column. (6 tabs., 4 figs., 12 refs.)

  6. Training Methods for Image Noise Level Estimation on Wavelet Components

    A. De Stefano

    2004-12-01

    The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The widely used method is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage to extract parameters which are then used in the application stage; the sets used for training and testing (13 and 5 images, respectively) are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed the superiority of the training-based methods for the images and the range of noise levels considered.
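    One common form of the MAD baseline works on the finest-scale wavelet detail coefficients: sigma_hat = median(|d|) / 0.6745, where 0.6745 is the MAD of a unit Gaussian. A self-contained 1-D sketch with a single Haar detail level (our illustration; the paper's exact variant may differ):

    ```python
    import math
    import random
    from statistics import median

    def haar_detail(signal):
        """One level of Haar detail coefficients, d = (x_2i - x_2i+1)/sqrt(2).
        For i.i.d. Gaussian noise these are again N(0, sigma^2)."""
        return [(signal[i] - signal[i + 1]) / math.sqrt(2)
                for i in range(0, len(signal) - 1, 2)]

    def mad_sigma(signal):
        """Robust noise estimate: median absolute detail coefficient,
        rescaled by 0.6745."""
        d = haar_detail(signal)
        return median(abs(c) for c in d) / 0.6745

    # Recover the standard deviation of synthetic Gaussian noise (sigma = 2).
    random.seed(1)
    noise = [random.gauss(0.0, 2.0) for _ in range(20000)]
    print(mad_sigma(noise))  # close to 2.0
    ```

    The estimator is robust precisely because natural-image structure produces a few large detail coefficients, which the median ignores; this is also the model assumption the abstract says the training-based alternatives relax.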

  7. Synthesis and assessment methods for an edge-alignment-free hybrid image

    Sripian, Peeraya; Yamaguchi, Yasushi

    2017-07-01

    A hybrid image allows multiple image interpretations to be modulated by the viewing distance. It can be constructed on the basis of the multiscale perceptual mechanisms of the human visual system by combining the low and high spatial frequencies of two different images. The hybrid image was introduced as an experimental tool for visual recognition study in terms of spatial frequency perception. To produce a compelling hybrid image, the original hybrid image synthesis method could only use source images of similar shape whose edges were aligned. If any two different images could be combined into a hybrid, it would be beneficial as a new experimental tool. In addition, there is no measure for the actual perception of spatial frequency, i.e., whether a single spatial frequency or both spatial frequencies are perceived from the hybrid stimulus. This paper describes two methods for synthesizing a hybrid image from images of dissimilar shape or unaligned images; this hybrid image is known as an "edge-alignment-free hybrid image." A noise-inserted method intentionally inserts and enhances noise in the high-frequency image; with this method, the low-frequency blobs are covered with high-frequency noise when viewed up close. A color-inserted method uses complementary color gratings in the background of the high-frequency image to emphasize the high-frequency image when viewed up close, whereas the gratings disappear when viewed from far away. To ascertain that our approach successfully separates the spatial frequency at each viewing distance, we measured this property using our proposed assessment method. Our proposed method allows the experimenter to quantify the probability of perceiving both spatial frequencies and a single spatial frequency in a hybrid image. The experimental results confirmed that our proposed synthesis methods successfully hid the low-frequency image and emphasized the high-frequency image at a close viewing distance. At the same time, the
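    The basic low/high frequency combination underlying any hybrid image can be sketched as follows. This is an illustrative NumPy version: the Gaussian transfer function and the `cutoff` parameter are assumptions for the sketch, not the authors' exact synthesis (which adds the noise- or color-insertion steps described above):

    ```python
    import numpy as np

    def hybrid_image(low_img, high_img, cutoff=8.0):
        """Combine the low spatial frequencies of `low_img` with the high
        spatial frequencies of `high_img` using a Gaussian transfer
        function in the Fourier domain. `cutoff` is in cycles per image."""
        assert low_img.shape == high_img.shape
        h, w = low_img.shape
        fy = np.fft.fftfreq(h)[:, None] * h   # vertical frequency, cycles/image
        fx = np.fft.fftfreq(w)[None, :] * w   # horizontal frequency, cycles/image
        g = np.exp(-(fx**2 + fy**2) / (2.0 * cutoff**2))  # Gaussian low-pass
        fa = np.fft.fft2(low_img)
        fb = np.fft.fft2(high_img)
        # Low-pass of A plus the complementary high-pass of B.
        return np.real(np.fft.ifft2(g * fa + (1.0 - g) * fb))

    rng = np.random.default_rng(1)
    a = rng.random((64, 64))
    b = rng.random((64, 64))
    hyb = hybrid_image(a, b)
    ```

    Because the transfer function equals 1 at zero frequency, the hybrid inherits the mean (DC component) of the low-frequency image exactly.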

  8. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors and intermediate products formed by such methods

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2012-12-04

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula 1/2{L.sub.2N(.mu.-X).sub.2M'X.sub.2}.sub.2, and reacting MER with the intermediate products to form SSPs of the formula L.sub.2N(.mu.-ER).sub.2M'(ER).sub.2, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE.sup.1R.sup.1E.sup.1H and MER with one or more substances having the empirical formula L.sub.2N(.mu.-ER).sub.2M'(ER).sub.2 or L.sub.2N(.mu.-X).sub.2M'(X).sub.2 to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  9. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2014-09-09

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula 1/2{L.sub.2N(.mu.-X).sub.2M'X.sub.2}.sub.2, and reacting MER with the intermediate products to form SSPs of the formula L.sub.2N(.mu.-ER).sub.2M'(ER).sub.2, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE.sup.1R.sup.1E.sup.1H and MER with one or more substances having the empirical formula L.sub.2N(.mu.-ER).sub.2M'(ER).sub.2 or L.sub.2N(.mu.-X).sub.2M'(X).sub.2 to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  10. MIVOC Method at the mVINIS Ion Source

    Jovovic, J.; Cvetic, J.; Dobrosavljevic, A.; Nedeljkovic, T.; Draganic, I.

    2007-01-01

    We have used the well-known metal-ions-from-volatile-compounds (MIVOC) method with the mVINIS Ion Source to produce multiply charged ion beams from solid substances. Using this method, very intense and stable multiply charged ion beams of several solid substances with high melting points were obtained. The yields and spectra of the multiply charged ion beams obtained from Hf will be presented. A hafnium ion beam spectrum was recorded at an ECR ion source for the first time. We have utilized the multiply charged ion beams from solid substances to irradiate polymer, fullerene and glassy carbon samples at the channel for modification of materials (L3A). (author)

  11. A two-way regularization method for MEG source reconstruction

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
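    The two penalties described above can be illustrated with a proximal-gradient (ISTA) sketch of the objective ||Y - AX||_F^2 + lam1*sum|X_ij| + lam2*||X D^T||_F^2, where rows of X are source time courses and D is a first-difference operator in time. This is an illustrative stand-in under those assumptions, not the authors' two-stage algorithm:

    ```python
    import numpy as np

    def twr_solve(A, Y, lam1=0.3, lam2=0.1, n_iter=300):
        """Two-way-regularized solve by proximal gradient (ISTA):
        an L1 penalty induces spatial focality, a quadratic penalty on
        temporal first differences induces smoothness in time."""
        m, T = A.shape[1], Y.shape[1]
        D = np.diff(np.eye(T), axis=0)     # (T-1) x T first-difference operator
        L = D.T @ D                        # temporal roughness matrix
        X = np.zeros((m, T))
        # Step size from a bound on the gradient's Lipschitz constant.
        step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + 2 * lam2 * np.linalg.norm(L, 2))
        for _ in range(n_iter):
            grad = 2 * A.T @ (A @ X - Y) + 2 * lam2 * X @ L
            Z = X - step * grad
            # Soft-thresholding = proximal operator of the L1 penalty.
            X = np.sign(Z) * np.maximum(np.abs(Z) - step * lam1, 0.0)
        return X

    rng = np.random.default_rng(2)
    A = rng.normal(size=(20, 50))                  # toy forward (lead-field) matrix
    X_true = np.zeros((50, 30))
    X_true[5] = np.sin(np.linspace(0, np.pi, 30))  # one focal, smooth source
    Y = A @ X_true + 0.01 * rng.normal(size=(20, 30))
    X_hat = twr_solve(A, Y)
    ```

    The recovered estimate is sparse across source locations while the active row keeps a smooth time course.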

  12. A two-way regularization method for MEG source reconstruction

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  13. Translate rotate scanning method for X-ray imaging

    Eberhard, J.W.; Kwog Cheong Tam.

    1990-01-01

    Rapid x-ray inspection of objects larger than an x-ray detector array is based on a translate-rotate scanning motion of the object relative to the fan beam source and detector. The scan for computerized tomography imaging is accomplished by rotating the object through 360 degrees at two or more positions relative to the source and detector array; in moving to another position, the object is rotated and the object, or the source and detector, are translated. A partial set of x-ray data is acquired at each position, and these partial sets are combined to obtain a full data set for complete image reconstruction. X-ray data for digital radiography imaging is acquired by scanning the object vertically at a first position at one view angle, rotating and translating the object relative to the source and detector to a second position, scanning vertically, and so on to cover the object field of view, and combining the partial data sets. (author)

  14. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    David G. Daut

    2007-03-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  15. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Liu Weiliang

    2007-01-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  16. Distributed MIMO-ISAR Sub-image Fusion Method

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the image acquired by the different channels. By compensating for the distortion of the ISAR image, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method using distributed MIMO-ISAR.

  17. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Leonas Jasevičius

    2011-03-01

    Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images, and their properties, are discussed. An optimal raster image vectorization method was developed which allows utilization of common vectorization algorithms to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. (Article in Lithuanian)

  18. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, exact matching of the images was achieved with these points. Compared with the existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.

  19. Phenotypic and Genotypic Eligible Methods for Salmonella Typhimurium Source Tracking.

    Ferrari, Rafaela G; Panzenhagen, Pedro H N; Conte-Junior, Carlos A

    2017-01-01

    Salmonellosis is one of the most common causes of foodborne infection and a leading cause of human gastroenteritis. Throughout the last decade, reports of Salmonella enterica serotype Typhimurium (ST) have increased, with the simultaneous emergence of multidrug-resistant isolates such as phage type DT104. Therefore, to successfully control this microorganism, it is important to attribute salmonellosis to the exact source. Studies of Salmonella source attribution have been performed to determine the main foods and food-production animals involved, toward which control efforts should be correctly directed. Hence, the choice of an ST subtyping method depends on the particular problem to be addressed and on the resources and data available. Generally, before choosing a molecular subtyping approach, phenotyping approaches such as serotyping, phage typing, and antimicrobial resistance profiling are implemented as a screening step of an investigation, and the results are computed using frequency-matching models (i.e., the Dutch, Hald and Asymmetric Island models). More recently, due to the advancement of molecular tools such as PFGE, MLVA, MLST, CRISPR, and WGS, more precise results have been obtained, but even with these technologies there are still gaps to be elucidated. To address this issue, an important question needs to be answered: what are the currently suitable subtyping methods for source attribution of ST? This review presents the most frequently applied subtyping methods used to characterize ST, analyses the major available microbial subtyping attribution models, and ponders the use of conventional phenotyping methods, as well as the most applied genotypic tools, in the context of their potential applicability to ST source tracking.

  20. Phenotypic and Genotypic Eligible Methods for Salmonella Typhimurium Source Tracking

    Rafaela G. Ferrari

    2017-12-01

    Full Text Available Salmonellosis is one of the most common causes of foodborne infection and a leading cause of human gastroenteritis. Throughout the last decade, reports of Salmonella enterica serotype Typhimurium (ST) have increased, with the simultaneous emergence of multidrug-resistant isolates such as phage type DT104. Therefore, to successfully control this microorganism, it is important to attribute salmonellosis to the exact source. Studies of Salmonella source attribution have been performed to determine the main foods and food-production animals involved, toward which control efforts should be correctly directed. Hence, the choice of an ST subtyping method depends on the particular problem to be addressed and on the resources and data available. Generally, before choosing a molecular subtyping approach, phenotyping approaches such as serotyping, phage typing, and antimicrobial resistance profiling are implemented as a screening step of an investigation, and the results are computed using frequency-matching models (i.e., the Dutch, Hald and Asymmetric Island models). More recently, due to the advancement of molecular tools such as PFGE, MLVA, MLST, CRISPR, and WGS, more precise results have been obtained, but even with these technologies there are still gaps to be elucidated. To address this issue, an important question needs to be answered: what are the currently suitable subtyping methods for source attribution of ST? This review presents the most frequently applied subtyping methods used to characterize ST, analyses the major available microbial subtyping attribution models, and ponders the use of conventional phenotyping methods, as well as the most applied genotypic tools, in the context of their potential applicability to ST source tracking.

  1. Testing methods of ECR ion source experimental platform

    Zhou Changgeng; Hu Yonghong; Li Yan

    2006-12-01

    The principle and structure of the ECR ion source experimental platform are introduced. The testing methods for the parameters of single main components, and for the comprehensive parameters under the condition of a given beam current and beam spot diameter, are summarized for the manufacturing process. Some appropriate testing data are given. The remaining questions (e.g., the plasma density in the discharge chamber and the exact hydrogen flow cannot be measured during operation) and their resolutions are also put forward. (authors)

  2. Population-based imaging biobanks as source of big data.

    Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian

    2017-06-01

    Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.

  3. A method for fast automated microscope image stitching.

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In much biomedical research, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were matched and the transformation parameters were estimated; then the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images, and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images, as well as stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
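    The phase-correlation step used to find the rough overlapping zones can be sketched in NumPy as follows (a minimal, translation-only version; the paper's pipeline adds SURF matching and blending on top of this):

    ```python
    import numpy as np

    def phase_correlation(img1, img2):
        """Estimate the (row, col) translation of img2 relative to img1 by
        phase correlation: the inverse FFT of the normalized cross-power
        spectrum peaks at the shift between the two images."""
        F1 = np.fft.fft2(img1)
        F2 = np.fft.fft2(img2)
        R = F1 * np.conj(F2)
        R /= np.maximum(np.abs(R), 1e-12)   # keep phase only
        corr = np.real(np.fft.ifft2(R))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map the peak position back to a signed shift.
        shift = []
        for p, n in zip(peak, corr.shape):
            s = (-p) % n
            shift.append(s - n if s > n // 2 else s)
        return tuple(shift)

    rng = np.random.default_rng(3)
    base = rng.random((128, 128))
    moved = np.roll(base, shift=(5, -3), axis=(0, 1))   # known circular shift
    est = phase_correlation(base, moved)
    ```

    For a purely circular shift the peak is exact, so the known offset (5, -3) is recovered without error.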

  4. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.
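    The Bayesian idea can be illustrated with a deliberately simple one-parameter Metropolis-Hastings sampler. A toy linear forward model stands in for the airborne EM physics; the study itself uses a far more capable trans-dimensional sampler:

    ```python
    import numpy as np

    def mh_sample(y, forward, sigma_noise, n_samples=20000, prop_std=0.1, seed=4):
        """Metropolis-Hastings sampling of one Earth-model parameter m,
        given data y = forward(m) + N(0, sigma_noise^2) noise and a flat
        prior. Returns post-burn-in samples of the posterior."""
        rng = np.random.default_rng(seed)

        def log_like(m):
            r = y - forward(m)
            return -0.5 * np.sum(r**2) / sigma_noise**2

        m, ll = 0.0, log_like(0.0)
        samples = []
        for _ in range(n_samples):
            m_new = m + rng.normal(0.0, prop_std)     # random-walk proposal
            ll_new = log_like(m_new)
            if np.log(rng.random()) < ll_new - ll:    # accept/reject
                m, ll = m_new, ll_new
            samples.append(m)
        return np.array(samples[n_samples // 2:])     # discard burn-in

    forward = lambda m: np.array([2.0 * m, -m, 3.0 * m])  # toy forward model
    rng = np.random.default_rng(5)
    m_true = 1.5
    y = forward(m_true) + rng.normal(0.0, 0.2, 3)
    post = mh_sample(y, forward, sigma_noise=0.2)
    ```

    The posterior spread (e.g., `post.std()`) is the quantitative uncertainty estimate that a single best-fit inversion cannot provide.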

  5. Correcting saturation of detectors for particle/droplet imaging methods

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires a trade-off of maximizing the signal-to-noise ratio without over-saturating the detector. Droplet and particle imaging results in a lognormal distribution of pixel intensities, and it is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturated value, it is possible to estimate a presumed probability density function (pdf) shape without the effects of saturation from the lognormal fit to the unsaturated histogram. Information about the presumed shape of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error on the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors on the determined average exceed 5% when the number of saturated samples exceeds 3% of the total; errors on the rms are 20% for a similar saturation level. This study also attempts to delineate the limits within which detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average, and offer a method to correct the derived average in the case of slight to moderate saturation of pixels.
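    The correction idea can be sketched as follows: fit the unsaturated part of a lognormal pixel distribution via robust quantiles, then recover the uncensored mean as exp(mu + sigma^2/2). This is an illustrative NumPy version that assumes the median and 25th percentile lie below saturation; the paper fits the full histogram shape:

    ```python
    import numpy as np

    def corrected_mean(pixels):
        """Estimate the true mean of lognormally distributed pixel
        intensities whose values have been clipped at a saturation level,
        using the (unclipped) median and 25th percentile to fix the
        lognormal parameters (mu, sigma)."""
        mu = np.log(np.median(pixels))
        q25 = np.quantile(pixels, 0.25)
        z25 = -0.6745                     # standard-normal 25th-percentile quantile
        sigma = (np.log(q25) - mu) / z25
        return np.exp(mu + 0.5 * sigma**2)

    rng = np.random.default_rng(6)
    true_mu, true_sigma = 4.0, 0.5
    pixels = rng.lognormal(true_mu, true_sigma, 200000)
    true_mean = np.exp(true_mu + 0.5 * true_sigma**2)
    sat = np.quantile(pixels, 0.90)       # simulate a detector clipping the top 10%
    clipped = np.minimum(pixels, sat)
    naive_mean = clipped.mean()           # biased low by saturation
    fixed_mean = corrected_mean(clipped)  # quantile-based correction
    ```

    The naive average of the clipped data underestimates the true mean, while the quantile-based correction lands close to it.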

  6. Combined fluorescence and phase contrast imaging at the Advanced Photon Source

    Hornberger, B.; Feser, M.; Jacobsen, C.; Vogt, S.; Legnini, D.; Paterson, D.; Rehak, P.; DeGeronimo, G.; Palmer, B.M.; Experimental Facilities Division; State Univ. of New York at Stony Brook Univ.; BNL; Univ. of Vermont

    2006-01-01

    X-ray fluorescence microprobes excel at detecting and quantifying trace metals in biological and environmental science samples, but typically do not detect low Z elements such as carbon and nitrogen. Therefore, it is hard to put the trace metals into context with their natural environment. We are implementing phase contrast capabilities with a segmented detector into several microprobes at the Advanced Photon Source (APS) to address this problem. Qualitative differential phase contrast images from a modified soft x-ray detector already provide very useful information for general users. We are also implementing a quantitative method to recover the absolute phase shift by Fourier filtering detector images. New detectors are under development which are optimized for the signal levels present at the APS. In this paper, we concentrate on fundamental signal-to-noise considerations comparing absorption and differential phase contrast.

  7. From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper

    Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal

    2016-02-01

    X-ray phase and dark-field imaging techniques provide complementary and inaccessible information compared to conventional X-ray absorption or visible light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging using a simple experimental arrangement. However, it still suffers from either poor resolution or the time consuming process of collecting a large number of images. To overcome these limitations, in this report we demonstrate that absorption, dark-field, phase contrast, and two orthogonal differential phase contrast images can simultaneously be generated by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to utilise a lab-based microfocus X-ray source and flat panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.

  8. Characterization of Crystallographic Structures Using Bragg-Edge Neutron Imaging at the Spallation Neutron Source

    Gian Song

    2017-12-01

    Full Text Available Over the past decade, wavelength-dependent neutron radiography, also known as Bragg-edge imaging, has been employed as a non-destructive bulk characterization method due to its sensitivity to coherent elastic neutron scattering that is associated with crystalline structures. Several analysis approaches have been developed to quantitatively determine crystalline orientation, lattice strain, and phase distribution. In this study, we report a systematic investigation of the crystal structures of metallic materials (such as selected textureless powder samples and additively manufactured (AM) Inconel 718 samples), using Bragg-edge imaging at the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS). Firstly, we have implemented a phenomenological Gaussian-based fitting in a Python-based computer code called iBeatles. Secondly, we have developed a model-based approach to analyze Bragg-edge transmission spectra, which allows quantitative determination of the crystallographic attributes. Moreover, neutron diffraction measurements were carried out to validate the Bragg-edge analytical methods. These results demonstrate that the microstructural complexity (in this case, texture) plays a key role in determining the crystallographic parameters (lattice constant or interplanar spacing), which implies that the Bragg-edge image analysis methods must be carefully selected based on the material structures.
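    The edge-fitting step can be illustrated with a least-squares grid search for the edge position. The sigmoid edge model below is a simplified stand-in for the Gaussian/erf-based edge shapes implemented in codes such as iBeatles; the wavelength range and widths are assumptions for the sketch:

    ```python
    import numpy as np

    def fit_edge(wavelengths, transmission, widths=(0.01,)):
        """Locate a Bragg edge by least-squares grid search. The
        transmission jump at the edge is modeled with a smooth step
          T(l) = a + b / (1 + exp(-(l - l0) / w)),
        where the offset a and step height b are solved linearly for each
        candidate edge position l0 and width w."""
        best = (np.inf, None)
        for w in widths:
            for l0 in wavelengths:
                s = 1.0 / (1.0 + np.exp(-(wavelengths - l0) / w))
                G = np.column_stack([np.ones_like(s), s])
                coef, *_ = np.linalg.lstsq(G, transmission, rcond=None)
                sse = np.sum((transmission - G @ coef) ** 2)
                if sse < best[0]:
                    best = (sse, l0)
        return best[1]

    rng = np.random.default_rng(7)
    lam = np.linspace(3.0, 4.0, 200)          # wavelength grid (angstrom, assumed)
    edge_true = 3.52
    signal = 0.6 + 0.3 / (1.0 + np.exp(-(lam - edge_true) / 0.01))
    data = signal + rng.normal(0.0, 0.005, lam.size)
    edge_hat = fit_edge(lam, data)
    ```

    With a high signal-to-noise edge, the grid search recovers the edge position to within the wavelength sampling step.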

  9. On two methods of statistical image analysis

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  10. Ultrasound Imaging Methods for Breast Cancer Detection

    Ozmen, N.

    2014-01-01

    The main focus of this thesis is on modeling acoustic wavefield propagation and implementing imaging algorithms for breast cancer detection using ultrasound. As a starting point, we use an integral equation formulation, which can be used to solve both the forward and inverse problems. This thesis

  11. Human body region enhancement method based on Kinect infrared imaging

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

    To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is utilized to enhance the human body region. Firstly, for the infrared images acquired by Kinect, in order to improve the overall contrast of the infrared images, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images. Secondly, to better enhance the human body region, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, to further improve the human body region in infrared images, Laplacian Pyramid decomposition is adopted to enhance the contour-improved human body region. Meanwhile, the background area without the human body region is processed by bilateral filtering to improve the overall effect. With theoretical analysis and experimental verification, the results show that the proposed method can effectively enhance the human body region of such infrared images.
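    The Laplacian Pyramid decomposition used in the final enhancement step can be sketched with simple block-average downsampling and nearest-neighbour upsampling (illustrative choices; the filters used in the paper are not specified here). The key property is that the pyramid is exactly invertible, so detail layers can be amplified before reconstruction:

    ```python
    import numpy as np

    def downsample(img):
        """2x2 block averaging."""
        return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2]
                       + img[1::2, 0::2] + img[1::2, 1::2])

    def upsample(img):
        """Nearest-neighbour 2x upsampling."""
        return np.kron(img, np.ones((2, 2)))

    def laplacian_pyramid(img, levels=3):
        """Each level stores the detail lost by down/upsampling;
        the last entry is the residual low-pass image."""
        pyr, cur = [], img
        for _ in range(levels):
            small = downsample(cur)
            pyr.append(cur - upsample(small))   # band-pass detail layer
            cur = small
        pyr.append(cur)
        return pyr

    def reconstruct(pyr):
        """Invert the pyramid exactly by adding each detail layer back."""
        cur = pyr[-1]
        for detail in reversed(pyr[:-1]):
            cur = upsample(cur) + detail
        return cur

    rng = np.random.default_rng(8)
    img = rng.random((64, 64))
    pyr = laplacian_pyramid(img)
    rec = reconstruct(pyr)
    ```

    Scaling a detail layer (e.g., `pyr[0] *= 1.5`) before reconstruction would boost fine contours, which is the mechanism an enhancement scheme like the one above exploits.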

  12. Imaging systems and methods for obtaining and using biometric information

    McMakin, Douglas L [Richland, WA; Kennedy, Mike O [Richland, WA

    2010-11-30

    Disclosed herein are exemplary embodiments of imaging systems and methods of using such systems. In one exemplary embodiment, one or more direct images of the body of a clothed subject are received, and a motion signature is determined from the one or more images. In this embodiment, the one or more images show movement of the body of the subject over time, and the motion signature is associated with the movement of the subject's body. In certain implementations, the subject can be identified based at least in part on the motion signature. Imaging systems for performing any of the disclosed methods are also disclosed herein. Furthermore, the disclosed imaging, rendering, and analysis methods can be implemented, at least in part, as one or more computer-readable media comprising computer-executable instructions for causing a computer to perform the respective methods.

  13. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2004-01-01

    A new algorithm for implementing Wielandt's method, one of the acceleration techniques used in deterministic source iteration methods, is developed for Monte Carlo criticality calculations and successfully implemented in the MCNP code. In this algorithm, some of the fission neutrons emitted during the random-walk processes are tracked within the current cycle, so the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely coupled array, where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained in fewer cycles. The computing time spent per cycle, however, increases because fission neutrons are tracked within the current cycle, which eventually increases the total computing time up to convergence. In addition, the statistical fluctuations of the fission source distribution within a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to help prevent incorrect Monte Carlo criticality calculations. (author)
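    The effect of Wielandt's method can be illustrated with its deterministic analogue: power iteration on a small fission-matrix-like operator, where shifting the operator shrinks the dominance ratio and so cuts the number of source iterations (an illustrative numpy sketch under our own simplifications, not the MCNP algorithm of the paper):

```python
import numpy as np

def power_iteration(A, shift=None, tol=1e-10, max_iter=10000):
    # Plain power iteration, or Wielandt-shifted iteration when `shift` is
    # given. The shifted operator (A - shift*I)^-1 has the same eigenvectors
    # as A but a much smaller dominance ratio when `shift` lies close to
    # (and above) the dominant eigenvalue, so the source distribution
    # converges in far fewer iterations.
    n = A.shape[0]
    x = np.ones(n) / n
    op = A if shift is None else np.linalg.inv(A - shift * np.eye(n))
    k_old, iters = 0.0, 0
    for iters in range(1, max_iter + 1):
        y = op @ x
        x = y / np.linalg.norm(y)
        k = x @ A @ x  # Rayleigh quotient: current k-effective estimate
        if abs(k - k_old) < tol:
            break
        k_old = k
    return k, iters
```

For a system with dominance ratio 0.99, the shifted iteration reaches the same eigenvalue in roughly an order of magnitude fewer iterations, mirroring the cycle-count reduction described above.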

  14. Imaging method of brain surface anatomy structures using conventional T2-weighted MR images

    Hatanaka, Masahiko; Machida, Yoshio; Yoshida, Tadatoki; Katada, Kazuhiro.

    1992-01-01

    As non-invasive techniques for visualizing the brain surface structure by MRI, surface anatomy scanning (SAS) and the multislice SAS method have been developed. Both techniques require additional MRI scanning to obtain images of the brain surface. In this paper, we report an alternative method for obtaining the brain surface image from conventional T2-weighted multislice images without any additional scanning. A power calculation of the image pixel values, incorporated into the routine processing, is applied in order to enhance the cerebrospinal fluid (CSF) contrast. We consider this method a practical approach to imaging the surface anatomy of the brain. (author)
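    The pixel-value power calculation mentioned above can be sketched as follows (an illustrative numpy version; the normalization step and the exponent value are our own assumptions, not taken from the paper):

```python
import numpy as np

def power_enhance(t2_slices, exponent=3):
    # Raise normalized T2 pixel values to a power: bright CSF (values near 1)
    # is preserved while darker tissue is suppressed, enhancing CSF contrast.
    stack = t2_slices.astype(float)
    stack = (stack - stack.min()) / (stack.max() - stack.min() + 1e-12)
    return stack ** exponent
```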

  15. Brain diagnosis with imaging methods: Psychical changes made visible

    Anon.

    1988-01-01

    The First International Symposium on Imaging Methods in Psychiatry, held in May 1988 in Wuerzburg, impressively demonstrated that imaging methods are advancing not only in medical diagnostics but also in psychiatric diagnostics, where they have already proved to be a valuable tool. (orig./MG) [de

  16. Comparison of methods for separating vibration sources in rotating machinery

    Klein, Renata

    2017-12-01

    Vibro-acoustic signatures are widely used for diagnostics of rotating machinery. Vibration-based automatic diagnostic systems need to achieve a good separation between signals generated by different sources. The separation task may be challenging, since the effects of the different vibration sources often overlap. In particular, there is a need to separate signals related to the natural frequencies of the structure from signals produced by the rotating components (signal whitening), as well as a need to separate signals generated by asynchronous components, like bearings, from signals generated by cyclo-stationary components, like gears. Several methods have been proposed to achieve these separation tasks, and the present study compares some of them. The paper also presents a new method for whitening, Adaptive Clutter Separation, as well as a new efficient algorithm for dephase, which separates asynchronous from cyclo-stationary signals. For whitening, the study compares liftering of the high quefrencies with adaptive clutter separation. For separating the asynchronous from the cyclo-stationary signals, the study compares liftering in the quefrency domain with dephase. The methods are compared using both simulated signals and real data.
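    Quefrency-domain liftering, one of the separation approaches compared above, can be sketched as follows (a generic illustration, not the paper's algorithm; the choice to zero the low quefrencies, i.e. the smooth spectral envelope carrying the structural resonances, and the cutoff value are our assumptions):

```python
import numpy as np

def whiten_by_liftering(x, cutoff):
    # Transform the log-magnitude spectrum to the quefrency (cepstrum)
    # domain, zero the low quefrencies (the smooth spectral envelope,
    # e.g. structural resonances), and go back, keeping the original phase.
    spec = np.fft.fft(x)
    log_mag = np.log(np.abs(spec) + 1e-12)
    ceps = np.fft.ifft(log_mag).real          # real cepstrum
    ceps[:cutoff] = 0.0                       # lifter: remove envelope
    if cutoff > 1:
        ceps[-(cutoff - 1):] = 0.0            # keep the cepstrum symmetric
    new_log_mag = np.fft.fft(ceps).real
    new_spec = np.exp(new_log_mag) * np.exp(1j * np.angle(spec))
    return np.fft.ifft(new_spec).real
```

With `cutoff=0` the round trip leaves the signal unchanged, which makes the editing step easy to verify in isolation.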

  17. A new method for measurement of femoral anteversion using 3D imaging technique

    Kim, S.I.; Lee, Y.H.; Park, S.-B.; Lee, K.-M.

    1996-01-01

    Conventional methods that use cross-sectional computed tomography (CT) images to estimate femoral anteversion have several problems arising from the complex 3-dimensional structure of the femur: the ambiguity of defining the femoral neck axis and condylar line, and the dependence on patient positioning. In particular, the femoral neck axis, known to be a major source of error, is hard to determine from single or multiple 2-dimensional transverse CT images. In this study, we present a new method that we have devised for the measurement of femoral anteversion utilizing a 3-dimensional imaging technique. Poster 176. (author)

  18. A method based on IHS cylindrical transform model for quality assessment of image fusion

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have become a research issue both at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on one hand, most indexes lack the theoretical support needed to compare different fusion methods; on the other hand, there is no uniform preference among the quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method unifying spatial and spectral feature assessment. So in this paper, on the basis of an approximate general model of four traditional fusion methods, namely Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features on the basis of a uniform preference, but also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and accords better with subjective evaluation.
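    The correlation-coefficient index underlying the proposed assessment can be written down directly (a minimal numpy sketch; how the index is combined across bands in the IHS cylindrical model is not reproduced here):

```python
import numpy as np

def correlation_coefficient(a, b):
    # Pearson correlation between two image bands, a common index for how
    # well a fused image preserves the information of a source band.
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```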

  19. Radiopharmaceutical chelates and method of external imaging

    1976-01-01

    The preparation of the following chemicals is described: chelates of technetium-99m, cobalt-57, gallium-67, gallium-68, indium-111 or indium-113m and a substituted iminodiacetic acid or an 8-hydroxyquinoline useful as radiopharmaceutical external imaging agents. The compounds described are suitable for intravenous injection, have excellent in vivo stability and are good organ seekers. Tin(II) chloride or other tin(II) compounds are used as chelating agents

  20. Soft Shadow Removal and Image Evaluation Methods

    Gryka, M.

    2016-01-01

    High-level image manipulation techniques are in increasing demand as they allow users to intuitively edit photographs to achieve desired effects quickly. As opposed to low-level manipulations, which provide complete freedom, but also require specialized skills and significant effort, high-level editing operations, such as removing objects (inpainting), relighting and material editing, need to respect semantic constraints. As such they shift the burden from the user to the algorithm to only al...

  1. An FPGA-based heterogeneous image fusion system design method

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. The VHDL language and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
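    The three pixel-level fusion rules compared above can be sketched for two registered single-channel frames (an illustrative numpy version; the platform in the paper implements these rules in RTL, not software):

```python
import numpy as np

def fuse(visible, infrared, mode="weighted", w=0.5):
    # Pixel-level fusion rules for two registered single-channel images:
    # gray-scale weighted averaging, maximum selection, minimum selection.
    v = visible.astype(float)
    i = infrared.astype(float)
    if mode == "weighted":
        return w * v + (1.0 - w) * i
    if mode == "max":
        return np.maximum(v, i)
    if mode == "min":
        return np.minimum(v, i)
    raise ValueError(f"unknown fusion mode: {mode}")
```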

  2. An improved method for polarimetric image restoration in interferometry

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually, as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for the complex vector nature of the emission, resulting in a process which is dependent on the axes under which the deconvolution is performed. We present an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components, with fewer spurious detections and lower computation cost due to reduced iterations, than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys, and in particular that the complex version of an SDI CLEAN should be used.
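    The core idea, treating P = Q + iU as one complex image during deconvolution, can be illustrated with a toy Högbom-style loop (a sketch under our own simplifications, not the authors' Generalized Complex CLEAN; restoring beams, clean windows and realistic stopping rules are omitted):

```python
import numpy as np

def hogbom_clean_complex(dirty, psf, gain=0.1, threshold=1e-3, max_iter=1000):
    # Högbom CLEAN applied to the complex polarized image P = Q + iU: the
    # peak is located on |P| (invariant under Q/U axis rotations), and a
    # complex-valued component is subtracted, rather than cleaning Q and U
    # as two independent scalar images.
    residual = dirty.astype(complex).copy()
    model = np.zeros_like(residual)
    half = np.array(psf.shape) // 2
    for _ in range(max_iter):
        idx = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[idx]
        if np.abs(peak) < threshold:
            break
        model[idx] += gain * peak
        # Subtract the scaled PSF centred on the peak, clipped to the image.
        r, c = idx
        r0, c0 = r - half[0], c - half[1]
        rr0, cc0 = max(r0, 0), max(c0, 0)
        rr1 = min(r0 + psf.shape[0], residual.shape[0])
        cc1 = min(c0 + psf.shape[1], residual.shape[1])
        residual[rr0:rr1, cc0:cc1] -= (
            gain * peak * psf[rr0 - r0:rr1 - r0, cc0 - c0:cc1 - c0])
    return model, residual
```

For a single point source, the recovered complex component converges to the true Q + iU flux while the residual shrinks geometrically.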

  3. Imaging C. elegans embryos using an epifluorescent microscope and open source software.

    Verbrugghe, Koen J C; Chan, Raymond C

    2011-03-24

    Cellular processes, such as chromosome assembly, segregation and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples(1,2). Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage(3), thus providing an ideal experimental model for studying questions in cell biology(4,5) and development(6-9). C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis(10,11)) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis(12-15)). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters(16,17). These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo(18-21). In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed-sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process.

  4. Standardization method for alpha and beta surface sources

    Sahagia, M; Grigorescu, E L; Razdolescu, A C; Ivan, C [Institute of Physics and Nuclear Engineering, Institute of Atomic Physics, PO Box MG-6, R-76900 Bucharest (Romania)]

    1994-01-01

    The installation and method for the standardization of large-surface alpha and beta sources are presented. A multiwire, flow-type proportional counter and the associated electronics are used, with the counter placed in a lead shield. The response of the system, in s^-1/Bq or s^-1/(particle x s^-1), was determined for 241Am, 239Pu, 147Pm, 204Tl, 90(Sr+Y) and 137Cs using standard sources of different dimensions, from a few mm^2 up to 180 x 220 mm^2. The system was legally attested for expanded uncertainties of ±7%. (Author).

  5. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  6. Level set method for image segmentation based on moment competition

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust to initial contour placement and parameter setting than traditional methods. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  7. Analysis of live cell images: Methods, tools and opportunities.

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell-culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review also includes an overview of the different available software packages and toolkits.

  8. Fresnel zone plate imaging of a 252Cf spontaneous fission source

    Stalker, K.T.; Hessel, K.R.

    1976-11-01

    The feasibility of coded aperture imaging for nuclear fuel motion monitoring is shown using a 252Cf spontaneous fission source. The theory of coded aperture imaging for Fresnel zone plate apertures is presented and design considerations for zone plate construction are discussed. Actual images are obtained which demonstrate a transverse resolution of 1.7 mm and a tomographic resolution of 1.5 mm. The capability of obtaining images through 12.7 mm of stainless steel is also shown.

  9. Quantitative Analysis of Range Image Patches by NEB Method

    Wang Wen

    2017-01-01

    In this paper we analyze sampled high-dimensional data from a range image database with the NEB method. We select a large random sample of log-valued, high-contrast, normalized 8×8 range image patches from the Brown database. We build a density estimator and establish 1-dimensional cell complexes from the range image patch data. We find topological properties of 8×8 range image patches and prove that there exist two types of subsets of 8×8 range image patches modelled as a circle.

  10. New Finger Biometric Method Using Near Infrared Imaging

    Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul

    2011-01-01

    In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
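    The local binary pattern step mentioned above can be sketched as follows (a basic 3x3 LBP in numpy; the paper's modified Gaussian high-pass filtering and the LDP variant are not reproduced here):

```python
import numpy as np

def lbp_3x3(img):
    # Basic 3x3 local binary pattern: each interior pixel is encoded by an
    # 8-bit code built from sign comparisons with its 8 neighbours.
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = img[1 + dr: img.shape[0] - 1 + dr,
                        1 + dc: img.shape[1] - 1 + dc]
        code |= (neighbour >= c).astype(np.uint8) << bit
    return code
```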

  11. An improved method to estimate reflectance parameters for high dynamic range imaging

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
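    The first stage of the second method, fitting diffuse reflectance by least squares under a diffuse-only assumption and treating the residual as specular, can be sketched as follows (an illustrative Lambertian version with a single albedo parameter; the log-transformed Torrance-Sparrow specular fit is omitted):

```python
import numpy as np

def fit_diffuse(intensities, cos_theta):
    # First step of the two-stage estimation: assume all samples are
    # diffuse-only (Lambertian, I = kd * cos(theta_i)) and solve for the
    # diffuse albedo kd by least squares.
    A = cos_theta.reshape(-1, 1)
    kd, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return float(kd[0])

def split_components(intensities, cos_theta):
    # Subtract the fitted diffuse component; the remainder is treated as
    # the specular component, to be fitted separately (e.g. with a
    # log-transformed Torrance-Sparrow model).
    kd = fit_diffuse(intensities, cos_theta)
    return kd, intensities - kd * cos_theta
```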

  12. Localization of epileptogenic zones in Lennox–Gastaut syndrome using frequency domain source imaging of intracranial electroencephalography: a preliminary investigation

    Cho, Jae-Hyun; Jung, Young-Jin; Kim, Jeong-Youn; Im, Chang-Hwan; Kang, Hoon-Chul; Kim, Heung Dong; Yoon, Dae Sung; Lee, Yong-Ho

    2013-01-01

    Although intracranial electroencephalography (iEEG) has been widely used to localize epileptogenic zones in epilepsy, visual inspection of iEEG recordings does not always result in a favorable surgical outcome, especially in secondary generalized epilepsy such as Lennox–Gastaut syndrome (LGS). Various computational iEEG analysis methods have recently been introduced to confirm the visual inspection results. Of these methods, high gamma oscillation in iEEG has attracted interest because a series of studies have reported a close relationship between epileptogenic zones and cortical areas with high gamma oscillation. Meanwhile, frequency domain source imaging of EEG and MEG oscillations has proven to be a useful auxiliary tool for identifying rough locations of epileptogenic zones. To the best of our knowledge, however, frequency domain source imaging of high gamma iEEG oscillations has not been studied. In this study, we investigated whether the iEEG-based frequency domain source imaging of high gamma oscillation (60–100 Hz) would be a useful supplementary tool for identifying epileptogenic zones in patients with secondary generalized epilepsy. The method was applied to three LGS patients who had undergone successful surgery, whose iEEG contained some ictal events with distinct high gamma oscillations before seizure onset. The resultant cortical source distributions were compared with surgical resection areas and with high gamma spectral power distributions on the intracranial sensor plane. While the results of the sensor-level analyses contained many spurious activities, the results of frequency domain source imaging coincided better with the surgical resection areas, suggesting that the frequency domain source imaging of iEEG high gamma oscillations might help enhance the accuracy of pre-surgical evaluations of patients with secondary generalized epilepsy. (paper)

  13. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2015-01-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and ...

  14. Image Classification Workflow Using Machine Learning Methods

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years of 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification using commercial software available without an expensive license.
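    A minimal K-means unsupervised classification of a multispectral raster can be sketched in plain numpy (illustrative only; the authors' software additionally uses GDAL and Spectral Python, and the farthest-point initialization here is our own choice):

```python
import numpy as np

def kmeans_classify(bands, k, iters=20, seed=0):
    # Unsupervised K-means classification of a multispectral image:
    # `bands` has shape (n_bands, rows, cols); returns a (rows, cols)
    # label raster.
    n_bands, rows, cols = bands.shape
    X = bands.reshape(n_bands, -1).T.astype(float)   # pixels x bands
    rng = np.random.default_rng(seed)
    # Farthest-point initialization: spreads the initial cluster centres.
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels.reshape(rows, cols)
```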

  15. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    O. Akcay

    2017-05-01

    A successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform very well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 x 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques were considered: histogram equalization to increase the signal-to-noise ratio, high-pass filtering to sharpen the image, and low-pass filtering to remove noise. The pre-processed images were then evaluated using current feature detection and extraction methods, the Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms. The results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
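    Histogram equalization, the first of the three enhancement techniques above, can be sketched as follows (a standard 8-bit implementation in numpy, not the paper's exact processing chain):

```python
import numpy as np

def hist_equalize(img, levels=256):
    # Global histogram equalization for an 8-bit grayscale image: map each
    # gray level through the normalized cumulative histogram, stretching
    # the occupied gray levels over the full output range.
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]      # first occupied gray level
    denom = max(cdf[-1] - cdf_min, 1)
    lut = (np.clip(cdf - cdf_min, 0, None) * (levels - 1) // denom)
    return lut.astype(np.uint8)[img]
```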

  16. Multi-band Image Registration Method Based on Fourier Transform

    庹红娅; 刘允才

    2004-01-01

    This paper presents a Fourier-transform-based registration method for multi-band images involving translation and small rotation. Although images from different bands differ greatly in intensity and features, they contain certain common information that can be exploited. A model is given in which the multi-band images have linear correlations in the least-squares sense. It is proved that the coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method are given. Experiments show that the model is reasonable and the results are satisfying.
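    The translation part of Fourier-based registration is commonly computed by phase correlation, which can be sketched as follows (an illustrative numpy version; the paper's handling of small rotations and its linear-correlation model are not reproduced here):

```python
import numpy as np

def phase_correlation(a, b):
    # Estimate the (row, col) translation between two images by phase
    # correlation: the normalized cross-power spectrum of two shifted
    # images is a pure phase ramp whose inverse FFT is a delta at the shift.
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```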

  17. Imaging spectroscopic analysis at the Advanced Light Source

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-01-01

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications

  18. Image Mosaic Method Based on SIFT Features of Line Segment

    Jun Zhu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle the scaling, rotation, lighting changes and so on that arise between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong matches in order to accomplish the image mosaic. Experimental results on four pairs of images show that our method is strongly robust to changes in resolution, lighting, rotation, and scaling.

  19. Development of motion image prediction method using principal component analysis

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the irradiated area during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue caused by lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image-guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)

  20. [A preliminary research on multi-source medical image fusion].

    Kang, Yuanyuan; Li, Bin; Tian, Lianfang; Mao, Zongyuan

    2009-04-01

    Multi-modal medical image fusion has important value in clinical diagnosis and treatment. In this paper, multi-resolution analysis with the Daubechies 9/7 biorthogonal wavelet transform is introduced for anatomical and functional image fusion, and a new fusion algorithm combining local standard deviation and energy as the texture measure is presented. Finally, a set of quantitative evaluation criteria is given. Experiments show that both anatomical and metabolic information can be obtained effectively, and both edge and texture features are successfully preserved. The presented algorithm is more effective than the traditional algorithms.
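    The texture-driven choose-max fusion rule can be sketched as follows, using local energy over a sliding window as the activity measure (the paper combines local standard deviation and energy; the wavelet decomposition itself is omitted here). Illustrative only.

    ```python
    import numpy as np

    def fuse_by_local_energy(a, b, win=3):
        """Fuse two coefficient (or image) arrays by keeping, at each
        pixel, the source with higher local energy in a win x win
        neighborhood."""
        def local_energy(x):
            p = win // 2
            xp = np.pad(x.astype(float) ** 2, p, mode="edge")
            # sum of squared values over the sliding window
            e = np.zeros(x.shape, dtype=float)
            for i in range(win):
                for j in range(win):
                    e += xp[i:i + x.shape[0], j:j + x.shape[1]]
            return e

        mask = local_energy(a) >= local_energy(b)
        return np.where(mask, a, b)
    ```

    In a full wavelet-domain fusion this rule would be applied per subband before the inverse transform.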

  1. Iterative methods for dose reduction and image enhancement in tomography

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods.

  2. Thresholding methods for PET imaging: A review

    Dewalle-Vignion, A.S.; Betrouni, N.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We present a state of the art of techniques based on fixed or adaptive thresholds. Methods found in the literature are analysed objectively with respect to their methodology, advantages, and limitations. Finally, a comparative study is presented. (authors)
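    The simplest fixed-threshold scheme surveyed by such reviews keeps voxels above a fraction of the maximum uptake. A minimal sketch follows; the 40% value is a commonly quoted choice, not one prescribed by this paper.

    ```python
    import numpy as np

    def fixed_fraction_threshold(img, frac=0.4):
        """Segment a tumor volume by keeping voxels whose intensity is at
        least frac times the maximum (e.g. 40% of SUVmax)."""
        return img >= frac * img.max()
    ```

    Adaptive variants replace the fixed fraction with a value derived from background uptake or lesion contrast.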

  3. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on rapid estimates of the P-wave magnitude, which generally contain large uncertainties and suffer from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following magnitude updates even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. By using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help to reduce false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  4. Method of judging leak sources in a reactor container

    Maeda, Katsuji.

    1984-01-01

    Purpose: To enable exact judgement, upon a leak accident in the reactor container of a BWR type power plant, of whether the leak source is in the steam system or the coolant system. Method: If the leak comes from the main steam system, the hydrogen density in the reactor container is about 170 times as high as for the same amount of leak from the reactor water. Accordingly, it can be judged whether the leak source is in the steam system or the reactor water system based on the change in the indication of the hydrogen densitometer within the reactor container, together with the indication of the drain amount from the sump in the container or of the drain flow meter in the container dehumidifier. Further, I-131, Na-24 and similar radioactive nuclides in the sump water of the container are measured to determine the density ratio R = (I-131)/(Na-24); the leak is judged to originate in the reactor water if R is equal to that of the reactor water, and in the main steam or another steam system if R is about 100 times higher than that of the reactor water. (Horiuchi, T.)
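    The density-ratio decision rule can be written out directly. The factor of roughly 100 separating a steam-system leak from a reactor-water leak follows the abstract; the function and argument names are illustrative.

    ```python
    def judge_leak_source(r_sump, r_reactor_water, factor=100.0):
        """Judge the leak source from the I-131/Na-24 density ratio R:
        R close to the reactor-water ratio -> reactor water system leak;
        R higher by a factor of ~100 -> main steam (or other steam) system.
        """
        if r_sump >= factor * r_reactor_water:
            return "steam system"
        return "reactor water system"
    ```

    In practice the hydrogen densitometer and drain-flow indications described above would be combined with this ratio test.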

  5. Quantitative phase imaging of living cells with a swept laser source

    Chen, Shichao; Zhu, Yizheng

    2016-03-01

    Digital holographic phase microscopy is a well-established quantitative phase imaging technique. However, interference artifacts from inside the system, typically induced by elements whose optical thickness is within the source coherence length, limit the imaging quality as well as the sensitivity. In this paper, a swept-laser-source based technique is presented. Spectra acquired at a number of wavelengths can, after Fourier transform, be used to identify the sources of the interference artifacts. With proper tuning of the optical path length difference between the sample and reference arms, it is possible to avoid these artifacts and achieve sensitivity below 0.3 nm. The performance of the proposed technique is examined in live cell imaging.

  6. 252Cf-source-driven neutron noise analysis method

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor k_eff has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank, which is typical of a fuel processing or reprocessing plant, the k_eff values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments oriented toward particular applications, including dynamic experiments and the development of theoretical methods to predict the experimental observables

  7. Generalized Row-Action Methods for Tomographic Imaging

    Andersen, Martin Skovgaard; Hansen, Per Christian

    2014-01-01

    Row-action methods play an important role in tomographic image reconstruction. Many such methods can be viewed as incremental gradient methods for minimizing a sum of a large number of convex functions, and despite their relatively poor global rate of convergence, these methods often exhibit fast initial convergence, which is desirable in applications where a low-accuracy solution is acceptable. In this paper, we propose relaxed variants of a class of incremental proximal gradient methods, and these variants generalize many existing row-action methods for tomographic imaging. Moreover, they allow...

  8. Method of electroplating a conversion electron emitting source on implant

    Srivastava, Suresh C [Setauket, NY; Gonzales, Gilbert R [New York, NY; Adzic, Radoslav [East Setauket, NY; Meinken, George E [Middle Island, NY

    2012-02-14

    Methods for preparing an implant coated with a conversion electron emitting source (CEES) are disclosed. The typical method includes cleaning the surface of the implant; placing the implant in an activating solution comprising hydrochloric acid to activate the surface; reducing the surface by H2 evolution in H2SO4 solution; and placing the implant in an electroplating solution that includes ions of the CEES, HCl, H2SO4, and resorcinol, gelatin, or a combination thereof. Alternatively, before tin plating, a seed layer is formed on the surface. The electroplated CEES coating can be further protected and stabilized by annealing in a heated oven, by passivation, or by being covered with a protective film. The invention also relates to a holding device for holding an implant, wherein the device selectively prevents electrodeposition on the portions of the implant contacting the device.

  9. Optical Coherence Tomography Technology and Quality Improvement Methods for Optical Coherence Tomography Images of Skin: A Short Review

    Adabi, Saba; Turani, Zahra; Fatemizadeh, Emad; Clayton, Anne; Nasiriavanaki, Mohammadreza

    2017-01-01

    Optical coherence tomography (OCT) delivers 3-dimensional images of tissue microstructures. Although OCT imaging offers a promising high-resolution method, OCT images suffer from artifacts that lead to misapprehension of tissue structures. Speckle, intensity decay, and blurring are 3 major artifacts in OCT images. Speckle is due to the low-coherence light source used in the configuration of OCT. Intensity decay is a deterioration of light with respect to depth, and blurring is the consequence...

  10. Image based method for aberration measurement of lithographic tools

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberration of lithographic tools is important, as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to their lower cost and easier implementation, image based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not provide a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
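    Once the two measured images are linearly related to the unknown Zernike coefficients, the non-iterative solution amounts to stacked least squares. The matrices A1 and A2 linking coefficients to the two images are assumed known here; this is a schematic sketch of the solve step, not the paper's derivation of those matrices.

    ```python
    import numpy as np

    def solve_zernike(A1, A2, img1, img2):
        """Recover Zernike coefficients c from two linear relations
        A1 @ c = img1 and A2 @ c = img2 by stacked least squares."""
        A = np.vstack([A1, A2])
        b = np.concatenate([img1.ravel(), img2.ravel()])
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs
    ```

    Stacking both relations makes the problem overdetermined, which is what enables a direct (non-iterative) solution.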

  11. A NDVI assisted remote sensing image adaptive scale segmentation method

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images, which cover wide areas containing complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from the remote sensing image. Numerous experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents a method for NDVI-assisted adaptive segmentation of remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for the different ground objects of remote sensing images.
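    NDVI itself, and an NDVI-similarity test of the kind used to decide whether neighboring regions should share a segmentation scale, are short computations; the tolerance value below is illustrative.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalized difference vegetation index:
        NDVI = (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + eps)

    def similar_ndvi(region_a, region_b, tol=0.1):
        """NDVI-similarity test between two regions, comparing their mean
        NDVI values against a threshold (tol is an illustrative choice)."""
        return abs(region_a.mean() - region_b.mean()) < tol
    ```

    The small eps guards against division by zero on dark pixels.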

  12. An efficient method for facial component detection in thermal images

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
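    The first steps of the pipeline (thresholding to a binary face mask, center of mass, integral projections) can be sketched as follows; morphological filtering and the automatic temperature-threshold computation are omitted for brevity.

    ```python
    import numpy as np

    def face_center_and_projections(img, thresh):
        """Threshold a thermal image to a binary face mask, compute its
        center of mass, and form row/column integral projections (sums),
        which the method scans to locate the periorbital regions."""
        mask = img > thresh
        ys, xs = np.nonzero(mask)
        center = (ys.mean(), xs.mean())
        row_proj = mask.sum(axis=1)  # projection onto the vertical axis
        col_proj = mask.sum(axis=0)  # projection onto the horizontal axis
        return mask, center, row_proj, col_proj
    ```

    Peaks in the projections within the search region around the center of mass indicate candidate facial components.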

  13. New diffusion imaging method with a single acquisition sequence

    Melki, Ph.S.; Bittoun, J.; Lefevre, J.E.

    1987-01-01

    The apparent diffusion coefficient (ADC) is related to the molecular diffusion coefficient and to physiologic information: microcirculation in the capillary network, incoherent slow flow, and restricted diffusion. The authors present a new MR imaging sequence that yields computed ADC images in only one 9-minute acquisition with a 1.5-T imager (GE Signa). Compared to the previous method, this sequence is at least two times faster and thus can be used as a routine examination to supplement T1-, T2-, and density-weighted images. The method was assessed by measurement of the molecular diffusion in liquids, and the first clinical images obtained in neurologic diseases demonstrate its efficiency for clinical investigation. The possibility of separately imaging diffusion and perfusion is supported by an algorithm
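    For reference, a computed ADC image follows from the standard mono-exponential model when two images are available, one unweighted (S0) and one diffusion-weighted with b-value b. The sketch below is the generic two-point computation, not the specific single-acquisition sequence of the paper.

    ```python
    import numpy as np

    def adc_map(s0, sb, b):
        """Per-pixel apparent diffusion coefficient from the model
        sb = s0 * exp(-b * ADC), i.e. ADC = ln(s0/sb) / b."""
        s0 = np.asarray(s0, dtype=float)
        sb = np.asarray(sb, dtype=float)
        return np.log(s0 / sb) / b
    ```

    Typical b-values are on the order of 1000 s/mm^2, giving tissue ADCs around 1e-3 mm^2/s.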

  14. Profiling pleural effusion cells by a diffraction imaging method

    Al-Qaysi, Safaa; Hong, Heng; Wen, Yuhua; Lu, Jun Q.; Feng, Yuanming; Hu, Xin-Hua

    2018-02-01

    Assay of cells in pleural effusion (PE) is an important means of disease diagnosis. Conventional cytology of effusion samples, however, has low sensitivity and depends heavily on the expertise of cytopathologists. We applied a polarization diffraction imaging flow cytometry method to effusion cells to investigate their features. Diffraction imaging of the PE cell samples was performed on 6000 to 12000 cells for each effusion cell sample from three patients. After prescreening to remove images of cellular debris and aggregated non-cellular particles, the image textures were extracted with a gray level co-occurrence matrix (GLCM) algorithm. The distribution of the imaged cells in the GLCM parameter space was analyzed with a Gaussian mixture model (GMM) to determine the number of clusters among the effusion cells. These results yield insight into the textural features of diffraction images and the related cellular morphology in effusion samples, and can be used toward the development of a label-free method for effusion cell assay.
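    A small self-contained GLCM sketch with two of the usual texture measures (contrast and energy); libraries such as scikit-image provide an optimized equivalent. The quantization to 8 gray levels and the single pixel offset are illustrative choices, not the parameters used in the paper.

    ```python
    import numpy as np

    def glcm_features(img, levels=8, dx=1, dy=0):
        """Gray level co-occurrence matrix for one offset (dx, dy), plus
        the contrast and energy texture measures derived from it."""
        q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
        glcm = np.zeros((levels, levels))
        h, w = q.shape
        for y in range(h - dy):
            for x in range(w - dx):
                glcm[q[y, x], q[y + dy, x + dx]] += 1
        glcm /= glcm.sum()                      # normalize to probabilities
        i, j = np.indices(glcm.shape)
        contrast = ((i - j) ** 2 * glcm).sum()  # local intensity variation
        energy = (glcm ** 2).sum()              # uniformity of the texture
        return glcm, contrast, energy
    ```

    Feature vectors built from such measures are what the GMM clustering in the abstract operates on.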

  15. Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources

    Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling

    2018-04-01

    The defocusing effect is inevitable and degrades the image quality in the conventional optical imaging process significantly due to the close confinement of the imaging lens. Based on classical optical coherent theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.

  16. Imaging methods for detection of infectious foci

    Couret, I.; Rossi, M.; Weinemann, P.; Moretti, J.L.

    1993-01-01

    Several tracers can be used for imaging infection. None is suitable for all infectious foci, but each one has preferential applications depending on its uptake mechanism by the infectious and/or inflammatory focus. Autologous leucocytes labeled in vitro with indium-111 (In-111) or with technetium-99m-hexamethylpropyleneamine oxime (Tc-99m HMPAO) were applied with success in the detection of peripheral bone infection, focal vascular graft infection and inflammatory bowel disease. Labeling with In-111 is of interest in chronic bone infection, while labeling with Tc-99m HMPAO has the advantage of better dosimetry and imaging. The interest of in vivo labeled leucocytes, using a Tc-99m labeled monoclonal antigranulocyte antibody anti-NCA 95 (BW 250/183), was proved in the same principal types of infectious foci as in vitro labeled leucocytes. Sites of chronic infection in the spine and the pelvis, whether active or healed, appear as photopenic defects on both in vitro labeled leucocyte and Tc-99m monoclonal antigranulocyte antibody (BW 250/183) scintigraphies. With gallium-67, results showed a high sensitivity but a low specificity. This tracer demonstrated good performance in delineating foci of infectious spondylitis. In-111 and Tc-99m labeled polyclonal human immunoglobulin (HIG) was applied with success in the assessment of various infectious foci, particularly in chronic sepsis. Like labeled leucocytes, labeled HIG showed cold defects in infectious sepsis of the spine. Research in nuclear medicine is very active in the development of more specific tracers of infection, mainly involving Tc-99m or In-111 labeled chemotactic peptides, antigranulocyte antibody fragments, antibiotic derivatives and interleukins. (authors). 70 refs

  17. A preconditioned inexact newton method for nonlinear sparse electromagnetic imaging

    Desmal, Abdulla

    2015-03-01

    A nonlinear inversion scheme for the electromagnetic microwave imaging of domains with sparse content is proposed. Scattering equations are constructed using a contrast-source (CS) formulation. The proposed method uses an inexact Newton (IN) scheme to tackle the nonlinearity of these equations. At every IN iteration, a system of equations, which involves the Frechet derivative (FD) matrix of the CS operator, is solved for the IN step. A sparsity constraint is enforced on the solution via thresholded Landweber iterations, and the convergence is significantly increased using a preconditioner that levels the FD matrix's singular values associated with contrast and equivalent currents. To increase the accuracy, the weight of the regularization's penalty term is reduced during the IN iterations consistently with the scheme's quadratic convergence. At the end of each IN iteration, an additional thresholding, which removes small 'ripples' that are produced by the IN step, is applied to maintain the solution's sparsity. Numerical results demonstrate the applicability of the proposed method in recovering sparse and discontinuous dielectric profiles with high contrast values.
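    The thresholded Landweber inner iteration has a compact generic form: a gradient (Landweber) step followed by soft thresholding, which promotes a sparse solution. The sketch below is the standard iteration on a linear model, not the paper's preconditioned contrast-source formulation.

    ```python
    import numpy as np

    def thresholded_landweber(A, y, tau, steps=200):
        """Sparsity-promoting Landweber iteration (ISTA form):
        gradient step on ||y - A x||^2, then soft thresholding with
        weight tau on the l1 penalty."""
        x = np.zeros(A.shape[1])
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from spectral norm
        for _ in range(steps):
            x = x + mu * A.T @ (y - A @ x)                          # Landweber step
            x = np.sign(x) * np.maximum(np.abs(x) - tau * mu, 0.0)  # soft threshold
        return x
    ```

    The paper's preconditioner would act on A before this loop so that the relevant singular values are leveled and the iteration converges faster.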

  18. Open Source software and social networks: Disruptive alternatives for medical imaging

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris

    2011-01-01

    In recent decades several major changes in computer and communication technology have pushed the limits of imaging informatics and PACS beyond the traditional system architecture, providing new perspectives and an innovative approach to a traditionally conservative medical community. Disruptive technologies such as the world-wide-web, wireless networking, Open Source software and the recent emergence of cyber communities and social networks have imposed an accelerated pace and major quantum leaps in the progress of the computer and technology infrastructure applicable to medical imaging applications. Methods: This paper reviews the impact and potential benefits of two major trends in consumer market software development and how they will influence the future of medical imaging informatics. Open Source software is emerging as an attractive and cost effective alternative to traditional commercial software development, and collaborative social networks provide a new model of communication that is better suited to the needs of the medical community. Observations: Evidence shows that successful Open Source software tools have penetrated the medical market and have proven to be more robust and cost effective than their commercial counterparts. These tools, developed and tested by developers who are themselves part of the user community, are usually better adapted to users' needs and more robust than traditional software programs. This context allows a much faster and more appropriate development and evolution of the software platforms. Similarly, communication technology has opened up to the general public in a way that has changed social behavior and habits, adding a new dimension to the way people communicate and interact with each other. The new paradigms have also slowly penetrated the professional market and ultimately the medical community. Secure social networks allowing groups of people to easily communicate

  20. The best printing methods to print satellite images

    G.A. Yousif; R.Sh. Mohamed

    2011-01-01

    Printing systems operate, in general, with a color system whose color scale is limited compared with the color system of satellite images. A satellite image is built from very small cells named pixels, each representing a picture element and the unit of color when the image is displayed on the screen; in printing, this unit becomes smaller in size and is called a screen point. This unit has a different size and shape from one printing method to another, depending on the output resolution, tools and material...

  1. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. But suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. The TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear-array x-ray sources mounted beside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed with a 45 cm field of view. Due to the scatter-rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  2. Mathematical methods in time series analysis and digital image processing

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, and the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  3. Optimization of in-line phase contrast particle image velocimetry using a laboratory x-ray source

    Ng, I.; Fouras, A.; Paganin, D. M.

    2012-01-01

    Phase contrast particle image velocimetry (PIV) using a laboratory x-ray microfocus source is investigated using a numerical model. Phase contrast images of 75 μm air bubbles, embedded within water exhibiting steady-state vortical flow, are generated under the paraxial approximation using a tungsten x-ray spectrum at 30 kVp. Propagation-based x-ray phase-contrast speckle images at a range of source-object and object-detector distances are generated and used as input into a simulated PIV measurement. The effects of source-size-induced penumbral blurring, together with the finite dynamic range of the detector, are accounted for in the simulation. The PIV measurement procedure uses the cross-correlation between temporally sequential speckle images to estimate the transverse displacement field of the fluid. The global error in the PIV reconstruction, for the set of simulations that was performed, suggests that geometric magnification is the key parameter for designing a laboratory-based x-ray phase-contrast PIV system. For the modeled system, the x-ray phase-contrast PIV measurement can be optimized to obtain low error: for a small pixel size (< 15 μm) of the detector, high geometric magnification (> 2.5) is desired, while for a large source size (FWHM > 30 μm), low magnification (< 1.5) would be suggested instead. The methods developed in this paper can be applied to optimizing phase-contrast velocimetry using a variety of laboratory x-ray sources.
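    The core PIV step, estimating the displacement between two interrogation windows from the peak of their cross-correlation, can be sketched with FFTs; this is the generic measurement, with window size and content illustrative.

    ```python
    import numpy as np

    def estimate_shift(win_a, win_b):
        """Estimate the integer displacement between two interrogation
        windows from the peak of their FFT-based cross-correlation."""
        fa = np.fft.fft2(win_a - win_a.mean())
        fb = np.fft.fft2(win_b - win_b.mean())
        corr = np.real(np.fft.ifft2(fa.conj() * fb))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # map circular FFT indices to signed shifts
        shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
        return tuple(shifts)
    ```

    Sub-pixel accuracy is usually obtained afterwards by fitting the correlation peak, e.g. with a Gaussian.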

  4. Image quality analysis to reduce dental artifacts in head and neck imaging with dual-source computed tomography

    Ketelsen, D.; Werner, M.K.; Thomas, C.; Tsiflikas, I.; Reimann, A.; Claussen, C.D.; Heuschmid, M. [Tuebingen Univ. (Germany). Abt. fuer Diagnostische und Interventionelle Radiologie; Koitschev, A. [Tuebingen Univ. (Germany). Abt. fuer Hals-Nasen-Ohrenheilkunde

    2009-01-15

    Purpose: Important oropharyngeal structures can be superimposed by metallic artifacts due to dental implants. The aim of this study was to compare the image quality of multiplanar reconstructions (MPR) and an angulated spiral in dual-source computed tomography (DSCT) of the neck. Materials and Methods: Sixty-two patients were included for neck imaging with DSCT. MPRs from an axial dataset and an additional short spiral parallel to the mouth floor were acquired. Leading anatomical structures were then evaluated with respect to the extent to which they were affected by dental artifacts using a visual scale ranging from 1 (fewest artifacts) to 4 (most artifacts). Results: In MPR, 87.1% of anatomical structures had significant artifacts (3.12 ± 0.86), while in angulated slices the leading anatomical structures of the oropharynx showed negligible artifacts (1.28 ± 0.46). The diagnostic gain of the primarily angulated slices with regard to artifact severity was significant (p < 0.01). Conclusion: MPRs are not capable of sufficiently reducing dental artifacts. In patients with dental artifacts overlying the anatomical structures of the oropharynx, an additional short angulated spiral parallel to the floor of the mouth is recommended and should be applied in daily routine. As a result of the static gantry design of DSCT, the use of a flexible head holder is essential. (orig.)

  5. Checklist for imaging methods. Vol. 1

    Wenz, W.

    1988-01-01

    The checklists in current medicine are intended as a source of information and reference. The first part of this issue explains examination techniques in practice and in hospitals. The second part deals with aetiology, pathogenesis and clinical symptomatology, with findings and examination techniques leading to the specific diagnosis, and, where appropriate, with differential diagnosis and conservative therapy of the various diseases. The third part gives concise information on possible surgery, surgical principles and techniques, which in this context means interventional radiology. The chapters of the second part each discuss a specific body area or disease, such as the abdominal cavity, the extraperitoneal region, the retroperitoneum, acute abdomen, acute abdominal trauma, acute affections of the abdominal wall, the stomach, the post-surgery stomach, duodenum, the small intestine, the colon, sigma and rectum, the gallbladder and bile ducts, the liver, spleen, pancreas, kidneys, and the peritoneum. (orig./MG) With 256 figs., 24 tabs [de

  6. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    Full Text Available Swipe sensors are a type of biometric authentication sensor widely applied in embedded devices. The sensor produces an overlap between consecutive pixel blocks of the image, so the image requires a reconstruction process before the feature extraction process. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing capacity. In this paper, image reconstruction using a predictive overlap method is proposed, which determines the image block shift from the previous set of shift data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 × 8 pixels, where each image has an overlap in each block. The results reveal that computation speed can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.
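
    The predictive idea can be sketched as follows: instead of an exhaustive search, each block's shift is searched only in a small window around the previous block's shift. All names, the overlap metric, and the synthetic data below are assumptions for illustration; the paper's exact procedure may differ:

    ```python
    import numpy as np

    def best_shift(prev_block, block, candidates):
        """Pick the candidate row-shift whose overlap region between the
        previous and current block matches best (smallest mean abs. diff.)."""
        scores = {}
        for s in candidates:
            if s < 1 or s >= block.shape[0]:
                continue
            diff = prev_block[s:] - block[:block.shape[0] - s]
            scores[s] = np.abs(diff).mean()
        return min(scores, key=scores.get)

    def reconstruct(blocks, search=1):
        """Stitch overlapping sensor blocks.  Each block's shift is searched
        only in a small window around the previous shift (the predictive
        step), instead of over all possible shifts."""
        rows = [blocks[0]]
        shift = 1                                  # initial shift guess
        for prev, curr in zip(blocks, blocks[1:]):
            cand = range(shift - search, shift + search + 1)
            shift = best_shift(prev, curr, cand)
            rows.append(curr[-shift:])             # keep only the new rows
        return np.vstack(rows)

    # Synthetic swipe: overlapping 8-row blocks cut from a random image
    rng = np.random.default_rng(2)
    img = rng.random((16, 8))
    blocks = [img[i:i + 8] for i in range(0, 9, 2)]
    assert np.array_equal(reconstruct(blocks), img)
    ```

    The speed-up comes from shrinking the candidate set: a swipe's speed changes slowly, so the shift between consecutive blocks rarely jumps far from its previous value.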

  7. Texture recognition of medical images with the ICM method

    Kinser, Jason M.; Wang Guisong

    2004-01-01

    The Integrated Cortical Model (ICM) is based upon several models of the mammalian visual cortex and produces pulse images over several iterations. These pulse images tend to isolate segments, edges, and textures that are inherent in the input image. To create a texture recognition engine, the pulse spectra of individual pixels are collected and used to develop a recognition library. Recognition is performed by comparing the pulse spectra of unclassified regions of images with those of known regions. Because signatures are smaller than images, signature-based computation is quite efficient and parasites can be recognized quickly. The precision of this method depends on how representative the signatures are and on the classification. Our experimental results support the theoretical findings and show prospects for practical applications of the ICM-based method. The advantage of the ICM method is the use of signatures to represent objects: ICM can extract the internal features of objects and represent them with signatures, and signature classification is critical for the precision of recognition.
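
    The signature-matching step can be sketched as a nearest-signature classifier. The function name, similarity measure (normalized correlation), and toy signatures below are assumptions for illustration, not taken from the paper:

    ```python
    import numpy as np

    def classify(signature, library):
        """Assign the label of the library signature with the highest
        normalized correlation to a pixel's pulse signature."""
        best, best_score = None, -np.inf
        for label, ref in library.items():
            score = np.dot(signature, ref) / (
                np.linalg.norm(signature) * np.linalg.norm(ref))
            if score > best_score:
                best, best_score = label, score
        return best

    # Toy library of two pulse signatures (binary pulse/no-pulse per iteration)
    library = {"texture_a": np.array([1, 0, 1, 1, 0], float),
               "texture_b": np.array([0, 1, 0, 0, 1], float)}
    print(classify(np.array([1, 0, 1, 0, 0], float), library))  # → texture_a
    ```

    Because each signature is a short vector (one value per ICM iteration) rather than an image patch, the comparison cost is independent of image size.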

  8. Matrix-based image reconstruction methods for tomography

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator (MLE) methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
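
    The inversion-free property of maximum-likelihood reconstruction is easy to see in the standard MLEM update, which uses only forward and back projections with the system matrix. A minimal sketch on a made-up toy system (all numbers are illustrative):

    ```python
    import numpy as np

    def mlem(A, y, n_iter):
        """Maximum-Likelihood Expectation-Maximization reconstruction.
        No matrix inversion is needed, only forward/back projections:
            x <- x * A^T(y / Ax) / A^T 1
        """
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                 # sensitivity image, A^T 1
        for _ in range(n_iter):
            proj = A @ x                     # forward projection
            ratio = np.where(proj > 0, y / proj, 0.0)
            x *= (A.T @ ratio) / sens        # multiplicative update
        return x

    # Tiny 3-detector / 2-pixel system matrix (made-up numbers)
    A = np.array([[0.7, 0.1],
                  [0.2, 0.3],
                  [0.1, 0.6]])
    x_true = np.array([4.0, 2.0])
    y = A @ x_true                           # noise-free measured counts
    x_hat = mlem(A, y, 2000)                 # converges toward x_true
    ```

    The update also preserves non-negativity automatically, which matters for emission tomography where voxel activities cannot be negative.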

  9. Near infrared spectral imaging of explosives using a tunable laser source

    Klunder, G L; Margalith, E; Nguyen, L K

    2010-03-26

    Diffuse reflectance near infrared hyperspectral imaging is an important analytical tool for a wide variety of industries, including agriculture, consumer products, and chemical and pharmaceutical development and production. The use of this technique as a method for the standoff detection of explosive particles is presented and discussed. The detection of the particles is based on the diffuse reflectance of light from the particle in the near infrared wavelength range, where CH, NH, OH vibrational overtones and combination bands are prominent. The imaging system is a NIR focal plane array camera with a tunable OPO/laser system as the illumination source. The OPO is programmed to scan over a wide spectral range in the NIR, and the camera is synchronized to record the light reflected from the target at each wavelength. The spectral resolution of this system is significantly higher than that of hyperspectral systems that incorporate filters or dispersive elements. The data acquisition is very fast, and the entire hyperspectral cube can be collected in seconds. A comparison of data collected with the OPO system to data obtained with a broadband light source with LCTF filters is presented.

  10. Guided wave imaging of oblique reflecting interfaces in pipes using common-source synthetic focusing

    Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng

    2018-04-01

    Cross-mode-family mode conversion and secondary reflection of guided waves in pipes complicate the processing of guided waves signals, and can cause false detection. In this paper, filters operating in the spectral domain of wavenumber, circumferential order and frequency are designed to suppress the signal components of unwanted mode-family and unwanted traveling direction. Common-source synthetic focusing is used to reconstruct defect images from the guided wave signals. Simulations of the reflections from linear oblique defects and a semicircle defect are separately implemented. Defect images, which are reconstructed from the simulation results under different excitation conditions, are comparatively studied in terms of axial resolution, reflection amplitude, detectable oblique angle and so on. Further, the proposed method is experimentally validated by detecting linear cracks with various oblique angles (10-40°). The proposed method relies on the guided wave signals that are captured during 2-D scanning of a cylindrical area on the pipe. The redundancy of the signals is analyzed to reduce the time-consumption of the scanning process and to enhance the practicability of the proposed method.

  11. An overview of methods to mitigate artifacts in optical coherence tomography imaging of the skin.

    Adabi, Saba; Fotouhi, Audrey; Xu, Qiuyun; Daveluy, Steve; Mehregan, Darius; Podoleanu, Adrian; Nasiriavanaki, Mohammadreza

    2018-05-01

    Optical coherence tomography (OCT) of skin delivers three-dimensional images of tissue microstructures. Although OCT imaging offers a promising high-resolution modality, OCT images suffer from artifacts that lead to misinterpretation of tissue structures. An overview of methods to mitigate artifacts in OCT imaging of the skin is therefore of paramount importance. Speckle, intensity decay, and blurring are three major artifacts in OCT images. Speckle is due to the low-coherence light source used in the configuration of OCT. Intensity decay is a deterioration of light with respect to depth, and blurring is the consequence of deficiencies of optical components. Two speckle reduction methods (one based on an artificial neural network and one based on spatial compounding), an attenuation compensation algorithm (based on the Beer-Lambert law) and a deblurring procedure (using deconvolution) are described. Moreover, an optical-properties extraction algorithm based on the extended Huygens-Fresnel (EHF) principle, used to obtain additional information from OCT images, is discussed. In this short overview, we summarize some of the image enhancement algorithms for OCT images that address the abovementioned artifacts. The results showed a significant improvement in the visibility of the clinically relevant features in the images. The quality improvement was evaluated using several numerical assessment measures. Clinical dermatologists benefit from using these image enhancement algorithms to improve OCT diagnosis, allowing OCT to essentially function as a noninvasive optical biopsy. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
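
    The intensity-decay compensation mentioned above can be illustrated with a minimal Beer-Lambert sketch, assuming a known, homogeneous attenuation coefficient (function name and numeric values are illustrative, not from the paper):

    ```python
    import numpy as np

    def compensate_attenuation(a_scan, mu, dz):
        """Beer-Lambert compensation for a single OCT A-scan: the detected
        signal decays roughly as exp(-2*mu*z) (double pass through tissue),
        so multiplying by exp(+2*mu*z) restores deeper structures."""
        z = np.arange(a_scan.size) * dz
        return a_scan * np.exp(2.0 * mu * z)

    # Synthetic A-scan: uniform reflectivity attenuated with depth
    mu, dz = 1.5, 0.01                       # mm^-1 and mm/pixel (assumed)
    measured = np.exp(-2.0 * mu * np.arange(256) * dz)
    restored = compensate_attenuation(measured, mu, dz)
    ```

    Real skin is heterogeneous, so practical algorithms estimate a depth-resolved attenuation coefficient rather than a single constant; the sketch only shows the exponential rescaling itself.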

  12. Method and apparatus for improving the alignment of radiographic images

    Schuller, P.D.; Hatcher, D.C.; Caelli, T.M.; Eggert, F.M.; Yuzyk, J.

    1991-01-01

    This invention relates generally to the field of radiology, and has to do particularly with a method and apparatus for improving the alignment of radiographic images taken at different times of the same tissue structure, so that the images can be sequentially shown in aligned condition, whereby changes in the structure can be noted. (author). 10 figs

  13. Method for analysis of failure of material employing imaging

    Vinegar, H.J.; Wellington, S.L.; de Waal, J.A.

    1989-12-05

    This patent describes a method for determining at least one preselected property of a sample of material employing an imaging apparatus. It comprises: imaging the sample during the application of known preselected forces to the sample, and determining density in the sample responsive to the preselected forces.

  14. An attenuation correction method for PET/CT images

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method that proceeds in four steps: first, a set of respiratory-gated PET images is reconstructed without attenuation correction; second, the motion of each phase PET image relative to the PET image in the same phase as the CT acquisition timing is estimated by the previously proposed method; third, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps; finally, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)

  15. Method and Apparatus for Computed Imaging Backscatter Radiography

    Shedlock, Daniel (Inventor); Meng, Christopher (Inventor); Sabri, Nissia (Inventor); Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor)

    2013-01-01

    Systems and methods of x-ray backscatter radiography are provided. A single-sided, non-destructive imaging technique utilizing x-ray radiation to image subsurface features is disclosed, capable of scanning a region using a fan beam aperture and gathering data using rotational motion.

  16. Beamlines of the biomedical imaging and therapy facility at the Canadian light source - part 3

    Wysokinski, Tomasz W.; Chapman, Dean; Adams, Gregg; Renier, Michel; Suortti, Pekka; Thomlinson, William

    2015-03-01

    The BioMedical Imaging and Therapy (BMIT) facility provides synchrotron-specific imaging and radiation therapy capabilities [1-4]. We describe here the Insertion Device (ID) beamline 05ID-2 with the beam terminated in the SOE-1 (Secondary Optical Enclosure) experimental hutch. This endstation is designed for imaging and therapy research primarily in animals ranging in size from mice to humans to horses, as well as tissue specimens including plants. Core research programs include human and animal reproduction, cancer imaging and therapy, spinal cord injury and repair, cardiovascular and lung imaging and disease, bone and cartilage growth and deterioration, mammography, developmental biology, gene expression research as well as the introduction of new imaging methods. The source for the ID beamline is a multi-pole superconducting 4.3 T wiggler [5]. The high field gives a critical energy over 20 keV. The high critical energy presents shielding challenges and great care must be taken to assess shielding requirements [6-9]. The optics in the POE-1 and POE-3 hutches [4,10] prepare a monochromatic beam that is 22 cm wide in the last experimental hutch SOE-1. The double crystal bent-Laue or Bragg monochromator, or the single-crystal K-edge subtraction (KES) monochromator provide an energy range appropriate for imaging studies in animals (20-100+ keV). SOE-1 (excluding the basement structure 4 m below the experimental floor) is 6 m wide, 5 m tall and 10 m long with a removable back wall to accommodate installation and removal of the Large Animal Positioning System (LAPS) capable of positioning and manipulating animals as large as a horse [11]. This end-station also includes a unique detector positioner with a vertical travel range of 4.9 m which is required for the KES imaging angle range of +12.3° to -7.3°. The detector positioner also includes moveable shielding integrated with the safety shutters. 
An update on the status of the other two end-stations at BMIT is also provided.

  19. A new method for crosstalk correction in simultaneous dual-isotope myocardial imaging with Tl-201 and I-123

    Tsuji, Akinori; Kojima, Akihiro; Oyama, Yoichi; Tomiguchi, Seiji; Kira, Tomohiro; Takagi, Yoshikazu; Shimomura, Osamu; Takahashi, Mutsumasa; Matsumoto, Masanori

    1999-01-01

    We have developed a new method of crosstalk correction in simultaneous dual-isotope imaging with Tl-201 and I-123 by using crosstalk ratios and a blurring filter. Single-isotope myocardial studies (10 for Tl-201 and 7 for I-123) were performed with a dual energy window acquisition mode and two low-energy general-purpose collimators. Then two planar images acquired with dual energy windows for a Tl-201 line source and an I-123 line source were obtained to measure line spread functions (LSFs) and crosstalk ratios for each image. The line source experiments showed that the LSFs for the Tl-201 imaging window from the single Tl-201 source were very similar to those for the I-123 imaging window from the single Tl-201 source, but the LSFs for the Tl-201 imaging window from the single I-123 source had broad shapes which differed from those for the I-123 imaging window from the single I-123 source. To obtain accurate I-123 crosstalk images in the Tl-201 imaging window from the I-123 images in the I-123 imaging window, we designed a low-pass blurring filter. In 7 clinical I-123 MIBG studies, the I-123 window images processed with this filter became very similar to the Tl-201 window images from the single I-123 source. The method proposed in this study can accurately correct the crosstalk in dual-isotope studies with Tl-201 and I-123 and is easily applicable to conventional gamma camera systems with any dual energy window acquisition mode. (author)

  20. Effect of the lead screen in the radiographic image using iridium 192 as a source

    Garate Rojas, M.

    1983-01-01

    The effect of the lead screen on the image obtained on radiographic film in industrial gammagraphy is presented. The source used was Iridium 192, and the tests simulated a real inspection. (E.G.) [pt

  1. Development of Realistic Head Models for Electromagnetic Source Imaging of the Human Brain

    Akalin, Z

    2001-01-01

    In this work, a methodology is developed to solve the forward problem of electromagnetic source imaging using realistic head models. For this purpose, first, segmentation of the 3 dimensional MR head...

  2. A Fieldable-Prototype Large-Area Gamma-ray Imager for Orphan Source Search

    Ziock, Klaus-Peter [ORNL; Fabris, Lorenzo [ORNL; Carr, Dennis [Lawrence Livermore National Laboratory (LLNL); Collins, Jeff [Lawrence Livermore National Laboratory (LLNL); Cunningham, Mark F [Lawrence Livermore National Laboratory (LLNL); Habte Ghebretatios, Frezghi [ORNL; Karnowski, Thomas Paul [ORNL; Marchant, William [University of California, Berkeley

    2008-01-01

    We have constructed a unique instrument for use in the search for orphan sources. The system uses gamma-ray imaging to "see through" the natural background variations that effectively limit the search range of normal devices to ~10 m. The imager is mounted in a 4.9-m-long trailer and can be towed by a large personal vehicle. Source locations are determined both in range and along the direction of travel as the vehicle moves. A fully inertial platform coupled to a Global Positioning System receiver is used to map the gamma-ray images onto overhead geospatial imagery. The resulting images provide precise source locations, allowing rapid follow-up work. The instrument simultaneously searches both sides of the street to a distance of 50 m (100-m swath) for milliCurie-class sources with near-perfect performance.

  3. Image segmentation with a finite element method

    Bourdin, Blaise

    1999-01-01

    regularization results make it possible to envisage a finite element resolution method. First, the Mumford-Shah functional is introduced and some existing results are quoted. Then, a discrete formulation for the Mumford-Shah problem is proposed and its $\Gamma$-convergence is proved. Finally, some...

  4. Method for imaging pulmonary arterial hypoplasia

    Triantafillou, M.

    2000-01-01

    Full text: Pulmonary hypoplasia represents an incomplete development of the lung, resulting in a reduction of distended lung volume. This is associated with a small or absent number of airway divisions, alveoli, arteries and veins. Unilateral pulmonary hypoplasia is often asymptomatic and may be demonstrated as a hypodense lung on a chest X-ray. Computed tomography (CT) scanning shows anatomical detail and the proximal vessels. Magnetic resonance imaging (MRI) shows no more detail than the CT scan has already demonstrated. It is also difficult to visualise collateral vessels from systemic and/or bronchial vessels on both these modalities. Pulmonary angiography would give the definitive answer, but it is time-consuming and has significant risks associated with the procedure. There are high costs associated with these modalities. A nuclear medicine ventilation/perfusion (V/Q) scan performed on these patients demonstrates diminished ventilation due to reduced lung volume and absence of perfusion to the hypoplastic lung. To date, we have performed V/Q lung scans on two children in our department. Both cases demonstrate diminished ventilation with no perfusion to the hypoplastic lung. Though the gold standard is pulmonary angiography, V/Q scanning is cost-effective, less time-consuming and a non-invasive procedure that can be performed on an outpatient basis. It is accurate as it demonstrates absent lung perfusion, confirming that the patient has pulmonary arterial hypoplasia. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  5. A New Wave Equation Based Source Location Method with Full-waveform Inversion

    Wu, Zedong

    2017-05-26

    Locating the source of a passively recorded seismic event is still a challenging problem, especially when the velocity is unknown. Many imaging approaches that focus the source image do not address the velocity issue and result in images plagued with illumination artifacts. We develop a waveform inversion approach with an additional penalty term in the objective function to reward the focusing of the source image. This penalty term is relaxed early on to allow for data fitting, and to avoid cycle skipping, using an extended source. At the later stages the focusing of the image dominates the inversion, allowing for high-resolution source and velocity inversion. We also compute the source location explicitly, and numerical tests show that we obtain good estimates of the source locations with this approach.

  6. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in remote sensing and security areas.
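
    For context, the baseline against which the paper compares can be sketched as a minimal correlation ghost-imaging reconstruction: the image is the covariance between the bucket-detector signal and the illumination patterns. The compressive-sensing and noise-thresholding refinements the paper adds are omitted, and all sizes and values below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    shape = (16, 16)
    n_meas = 6000

    obj = np.zeros(shape)
    obj[4:12, 6:10] = 1.0                      # simple binary test object
    patterns = rng.random((n_meas, obj.size))  # random speckle patterns
    bucket = patterns @ obj.ravel()            # single-pixel (bucket) signal

    # Traditional correlation ghost imaging: image = cov(bucket, pattern)
    recon = ((bucket[:, None] * patterns).mean(axis=0)
             - bucket.mean() * patterns.mean(axis=0))
    recon = recon.reshape(shape)
    ```

    With additive detector noise on `bucket`, the same estimator acquires a noisy background, which is exactly what the thresholding in the proposed method targets.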

  7. A method of fast mosaic for massive UAV images

    Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong

    2014-11-01

    With the development of UAV technology, UAVs are used widely in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use. Users can therefore obtain massive image data with UAVs, but processing that data requires a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. However, disaster management and many other fields require quick response, which is hard to achieve with massive image data. Aiming to overcome the disadvantages of high time consumption and manual interaction, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original images from the UAV; flight belts, and the relations between belts and images, are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, which provides a fast and easy way to present the result data. In order to verify the feasibility of this method, a fast mosaic system for massive UAV images was developed, which is fully automated; no manual interaction is needed once the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system can reduce time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.

  8. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  9. Hiding a Covert Digital Image by Assembling the RSA Encryption Method and the Binary Encoding Method

    Kuang Tsan Lin; Sheng Lih Yeh

    2014-01-01

    The Rivest-Shamir-Adleman (RSA) encryption method and the binary encoding method are assembled to form a hybrid hiding method to hide a covert digital image in a dot-matrix holographic image. First, the RSA encryption method is used to transform the covert image into an RSA-encrypted data string. Then, all the elements of the RSA-encrypted data string are transferred into binary data. Finally, the binary data are encoded into the dot-matrix holographic image. The pixels of the dot-matri...
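
    The first two stages (RSA encryption of pixel values, then binary serialization) can be sketched with a deliberately tiny toy key; the final stage, embedding the bits into a dot-matrix hologram, is omitted. The key size and 12-bit packing are illustrative assumptions, not the paper's parameters:

    ```python
    # Toy RSA key (tiny primes for illustration only; real keys are far larger)
    p, q, e = 61, 53, 17
    n = p * q                               # modulus, 3233
    d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)

    def hide(pixels):
        """Encrypt each pixel value with RSA, then serialize the ciphertext
        as a binary string (12 bits per value, since n < 4096)."""
        return "".join(f"{pow(px, e, n):012b}" for px in pixels)

    def recover(bits):
        """Invert the binary encoding, then decrypt with the private key."""
        cipher = (int(bits[i:i + 12], 2) for i in range(0, len(bits), 12))
        return [pow(c, d, n) for c in cipher]

    covert = [12, 200, 255, 0]
    bits = hide(covert)                     # 48-bit binary string
    assert recover(bits) == covert
    ```

    Encrypting pixel values one at a time like this is textbook RSA without padding, which is fine for illustrating the encoding chain but not secure practice.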

  10. Method and algorithm for image processing

    He, George G.; Moon, Brain D.

    2003-12-16

    The present invention is a modified Radon transform. It is similar to the traditional Radon transform for the extraction of line parameters, and similar to the traditional slant stack for the intensity summation of pixels away from a given pixel, for example along ray paths that span 360 degrees at a given grid point in the time and offset domain. However, the present invention differs from these methods in that the intensity and direction of a composite intensity for each pixel are maintained separately instead of being combined after the transformation. An advantage of this approach is the elimination of the work required to extract the line parameters in the transformed domain. The advantage of the modified Radon transform method is amplified when many lines are present in the imagery or when the lines are just short segments, both of which occur in actual imagery.

  11. A Method for Improving the Progressive Image Coding Algorithms

    Ovidiu COSMA

    2014-12-01

    Full Text Available This article presents a method for increasing the performance of the progressive coding algorithms for the subbands of images, by representing the coefficients with a code that reduces the truncation error.

  12. Development of digital image correlation method to analyse crack ...

    samples were performed to verify the performance of the digital image correlation method. … development cannot be measured accurately. … Mendelson A 1983 Plasticity: Theory and Application (Malabar, FL: Krieger Publishing Company)

  13. Quantitative Methods for Molecular Diagnostic and Therapeutic Imaging

    Li, Quanzheng

    2013-01-01

    This theme issue provides an overview on the basic quantitative methods, an in-depth discussion on the cutting-edge quantitative analysis approaches as well as their applications for both static and dynamic molecular diagnostic and therapeutic imaging.

  14. Discrete gradient methods for solving variational image regularisation models

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is, where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated with the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)
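The monotonic energy decrease can be demonstrated on a toy problem. The sketch below uses the Gonzalez (midpoint) discrete gradient on a 1-D quadratic "denoising" energy E(x) = 0.5||x - y||^2 + 0.5*lam*||Dx||^2; for quadratic energies this scheme reduces to the implicit midpoint rule and decreases E for any step size tau > 0. All parameter values are illustrative.

```python
import numpy as np

# Gonzalez discrete gradient step for quadratic E(x) = 0.5 x'Ax - b'x:
# (x1 - x0)/tau = -(A(x0 + x1)/2 - b), solvable as one linear system.
# The discrete gradient property guarantees E(x1) - E(x0) = -||x1-x0||^2/tau.

rng = np.random.default_rng(0)
n, lam, tau = 50, 5.0, 2.0
y = np.sin(np.linspace(0, 3, n)) + 0.3 * rng.standard_normal(n)

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward differences
A = np.eye(n) + lam * D.T @ D              # Hessian of the energy
b = y                                      # so that grad E(x) = A x - b

E = lambda x: 0.5 * x @ A @ x - b @ x
x = np.zeros(n)
energies = [E(x)]
for _ in range(30):
    x = np.linalg.solve(np.eye(n) + 0.5 * tau * A,
                        (np.eye(n) - 0.5 * tau * A) @ x + tau * b)
    energies.append(E(x))

# monotone decrease holds for ANY tau, unlike explicit gradient descent
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
```

Note the large step tau = 2.0: an explicit gradient step of that size would diverge here, while the discrete gradient scheme still dissipates energy at every iteration, which is exactly the robustness the paper argues for.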

  15. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical; ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations, and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods
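The penalized-regression baseline the paper improves on can be sketched with a small Elastic Net solved by ISTA (proximal gradient) on a toy "EEG" inverse problem v = Lj + noise. The lead field, source configuration, and hyperparameters l1/l2 are all made up; fixing l1/l2 by hand is exactly the heuristic choice that the paper's Bayesian hyperparameter learning replaces.

```python
import numpy as np

# Elastic Net by ISTA: minimize 0.5||Lj - v||^2 + l1*||j||_1 + 0.5*l2*||j||^2
# over source amplitudes j, given a toy lead field L and sensor data v.

rng = np.random.default_rng(1)
n_sensors, n_sources = 20, 60
L = rng.standard_normal((n_sensors, n_sources))   # toy lead field
j_true = np.zeros(n_sources)
j_true[[5, 30]] = [2.0, -1.5]                     # two active sources
v = L @ j_true + 0.01 * rng.standard_normal(n_sensors)

l1, l2 = 0.5, 0.1                                 # hand-picked (heuristic!)
step = 1.0 / (np.linalg.norm(L, 2) ** 2 + l2)     # 1/Lipschitz step size
j = np.zeros(n_sources)
for _ in range(500):
    grad = L.T @ (L @ j - v) + l2 * j             # gradient of smooth part
    z = j - step * grad
    j = np.sign(z) * np.maximum(np.abs(z) - step * l1, 0.0)  # soft threshold

# the data misfit is reduced relative to the zero solution
assert np.linalg.norm(L @ j - v) < np.linalg.norm(v)
```

The L1 term promotes few nonzero sources and the L2 term smooths among correlated columns of L; the paper's contribution is estimating l1 and l2 from the data instead of fixing them as above.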

  16. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfactory due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to...
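The non-adaptive DAS baseline mentioned above is simple enough to sketch: each image point is reconstructed by summing the sensor signals at the acoustic time-of-flight from that point to each sensor. The array geometry, sound speed, and sampling rate below are illustrative values, and the synthetic data are idealized one-sample spikes.

```python
import numpy as np

# Minimal Delay-and-Sum (DAS) for a linear array at z=0 and a single
# photoacoustic point source; no apodization, no interpolation.

c, fs = 1500.0, 40e6                    # sound speed (m/s), sampling rate (Hz)
xs = (np.arange(16) - 7.5) * 3e-4       # 16 sensors, 0.3 mm pitch
src = np.array([0.0, 0.01])             # point source 10 mm deep

# synthesize one-sample spikes arriving at each sensor's time of flight
n_t = 2048
sig = np.zeros((16, n_t))
for i, x in enumerate(xs):
    t = np.hypot(x - src[0], src[1]) / c
    sig[i, int(round(t * fs))] = 1.0

def das(px, pz):
    """DAS amplitude at image point (px, pz)."""
    out = 0.0
    for i, x in enumerate(xs):
        idx = int(round(np.hypot(x - px, pz) / c * fs))
        if idx < n_t:
            out += sig[i, idx]
    return out

# at the true source location all 16 channels add coherently
assert das(*src) == 16.0
assert das(0.0, 0.005) < 16.0
```

Adaptive beamformers such as MV replace the uniform summation weights with data-dependent weights, which is what narrows the mainlobe and suppresses the sidelobes the abstract criticizes in DAS.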

  17. Beam imaging sensor and method for using same

    McAninch, Michael D.; Root, Jeffrey J.

    2017-01-03

    The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.

  18. Quantitative methods for the analysis of electron microscope images

    Skands, Peter Ulrik Vallø

    1996-01-01

    The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from Scanning Electron Microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples...... foundation of the thesis fall in the areas of: 1) Mathematical Morphology; 2) Distance transforms and applications; and 3) Fractal geometry. Image analysis opens in general the possibility of a quantitative and statistically well-founded measurement of digital microscope images. Herein lies also the conditions...

  19. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms.

    Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling

    2015-09-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods due to the fact that interference is considered.
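The specular half of such a combined model can be illustrated with the classical image source method in its simplest setting: a 1-D "room" between two parallel walls with a real, frequency-independent reflection coefficient (PARISM's phased, angle-dependent treatment is far more elaborate). Each image source adds a delayed spike attenuated by 1/r spreading and by the reflection coefficient once per wall bounce. All geometry and material values are invented for the sketch.

```python
import numpy as np

# 1-D image source method between walls at x=0 and x=L: the images of a
# source at x=s sit at 2nL + s (2|n| reflections) and 2nL - s (|2n-1|
# reflections), n integer. beta is a real reflection coefficient.

L, s, r = 6.0, 1.0, 4.0          # wall spacing, source x, receiver x (m)
beta, c, fs = 0.7, 343.0, 8000   # reflection coeff, sound speed, sample rate
h = np.zeros(fs // 2)            # 0.5 s impulse response

for n in range(-20, 21):
    for img, n_refl in ((2*n*L + s, 2*abs(n)), (2*n*L - s, abs(2*n - 1))):
        d = abs(img - r)
        idx = int(round(d / c * fs))
        if idx < h.size:
            h[idx] += beta**n_refl / max(d, 1e-9)   # 1/r decay per image

# the direct sound (distance 3 m, zero reflections) is the first arrival
first = np.flatnonzero(h)[0]
assert first == round(3.0 / c * fs)
```

A full 3-D implementation mirrors the source across all six walls (Allen-Berkley style), and PARISM additionally replaces the real beta with complex, angle-dependent reflection factors and adds the radiosity part for diffuse reflections.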

  20. Dual-source spiral CT with pitch up to 3.2 and 75 ms temporal resolution: Image reconstruction and assessment of image quality

    Flohr, Thomas G.; Leng Shuai; Yu Lifeng; Allmendinger, Thomas; Bruder, Herbert; Petersilka, Martin; Eusemann, Christian D.; Stierstorfer, Karl; Schmidt, Bernhard; McCollough, Cynthia H.

    2009-01-01

    Purpose: To present the theory for image reconstruction of a high-pitch, high-temporal-resolution spiral scan mode for dual-source CT (DSCT) and evaluate its image quality and dose. Methods: With the use of two x-ray sources and two data acquisition systems, spiral CT exams having a nominal temporal resolution per image of up to one-quarter of the gantry rotation time can be acquired using pitch values up to 3.2. The scan field of view (SFOV) for this mode, however, is limited to the SFOV of the second detector as a maximum, depending on the pitch. Spatial and low contrast resolution, image uniformity and noise, CT number accuracy and linearity, and radiation dose were assessed using the ACR CT accreditation phantom, a 30 cm diameter cylindrical water phantom or a 32 cm diameter cylindrical PMMA CTDI phantom. Slice sensitivity profiles (SSPs) were measured for different nominal slice thicknesses, and an anthropomorphic phantom was used to assess image artifacts. Results were compared between single-source scans at pitch=1.0 and dual-source scans at pitch=3.2. In addition, image quality and temporal resolution of an ECG-triggered version of the DSCT high-pitch spiral scan mode were evaluated with a moving coronary artery phantom, and radiation dose was assessed in comparison with other existing cardiac scan techniques. Results: No significant differences in quantitative measures of image quality were found between single-source scans at pitch=1.0 and dual-source scans at pitch=3.2 for spatial and low contrast resolution, CT number accuracy and linearity, SSPs, image uniformity, and noise. The pitch value (1.6≤pitch≤3.2) had only a minor impact on radiation dose and image noise when the effective tube current time product (mA s/pitch) was kept constant. However, while not severe, artifacts were found to be more prevalent for the dual-source pitch=3.2 scan mode when structures varied markedly along the z axis, particularly for head scans. Images of the moving

  2. Statistical image reconstruction methods for simultaneous emission/transmission PET scans

    Erdogan, H.; Fessler, J.A.

    1996-01-01

    Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods

  3. A paper sheet phantom for scintigraphic planar imaging. Usefulness of pouch-laminated paper source

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Teraoka, Satomi; Murakami, Tomonori; Kojima, Akihiro; Matsumoto, Masanori

    2007-01-01

    In order to perform experimental measurements for the evaluation of imaging device performance, data acquisition techniques, and clinical images in scintigraphic imaging, many kinds of phantoms are employed. However, since these phantoms are made of acrylic or plastic, their thickness and material cause attenuation and scatter within the phantom itself. We developed a paper sheet phantom sealed with a pouch laminator, which can act as a true radioactive source in air. In this study, the paper sheet phantom was compared to the commercially available acrylic liver phantom with a thickness of 2 cm. The results showed that although some scatter counts were contained within the image of the acrylic liver phantom, there were few scattered photons in the paper sheet phantom image. Furthermore, this laminated paper sheet phantom made handling of the source and its waste easier. If the paper sheet phantom is designed more elaborately, it can become a useful tool for planar imaging experiments. (author)

  4. Computation of rectangular source integral by rational parameter polynomial method

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method, the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerical integration of the integral. Accuracy can be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results for H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
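The rational-parameter device can be illustrated on a toy function rather than the plaque integral itself: f(x) = 1/(1+x) has a pole just outside a wide range of interest and is awkward for a polynomial in x, but it is exactly linear in the parameter t = x/(1+x). The range [0, 20] below mirrors the a-range quoted in the abstract; the function choice is purely illustrative.

```python
import numpy as np

# Compare a degree-6 least-squares polynomial fit in x against one in
# the rational parameter t = x/(1+x), over the wide range x in [0, 20].

x = np.linspace(0.0, 20.0, 400)
f = 1.0 / (1.0 + x)           # toy stand-in for the integrand behaviour
t = x / (1.0 + x)             # rational parameter maps [0, 20] -> [0, ~0.95]

err_x = np.max(np.abs(np.polyval(np.polyfit(x, f, 6), x) - f))
err_t = np.max(np.abs(np.polyval(np.polyfit(t, f, 6), t) - f))

# the fit in t is accurate over the whole range; the fit in x is not
assert err_t < err_x
```

The same idea underlies the paper: expressing I(a,b) as a polynomial in a rational parameter keeps a fixed-degree polynomial accurate over a much wider argument range than a direct power series.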

  5. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    A. G. Chibunichev

    2012-07-01

    This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find correspondences, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by an operator in interactive mode using a single image; spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edges are detected both on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency for building facade modeling.

  6. Analysis and Comparison of Objective Methods for Image Quality Assessment

    P. S. Babkin

    2014-01-01

    The purpose of this work is research into, and modification of, the reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of the formal assessments that corresponds more closely to the subjective expert estimates (MOS). In considering the formal reference objective methods for image quality assessment we used the results of other authors, who offer results and comparative analyses of the most effective algorithms. Based on these investigations we chose the two most successful algorithms, PQS and MSSSIM, for which a further analysis was made in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms which are of great importance in practical implementation but are insufficiently covered in publications by other authors. In the implemented modification of the PQS algorithm, the Kirsch boundary detector was replaced by the Canny boundary detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective image quality assessment is applicable only to monochrome images). Images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. This type of modification has not been mentioned in the specialized literature on formal image quality evaluation methods. The method described in the publication can be applied to various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article, and this will be used in solving problems of identification and automatic control.
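The Kirsch detector that the modification replaces inside PQS is itself a small, well-defined operator: eight 3x3 "compass" masks, each a rotation of one base mask, with the per-pixel edge strength taken as the maximum response. The sketch below implements only this operator (not PQS or Canny); the test image is a synthetic step edge.

```python
import numpy as np

# Kirsch compass operator: rotate the border ring of the base mask to
# obtain 8 directional masks, correlate, and keep the maximum response.

def kirsch(img):
    base = np.array([[5, 5, 5],
                     [-3, 0, -3],
                     [-3, -3, -3]], float)
    # border-ring positions in clockwise order; rotating the ring values
    # generates the 8 compass masks
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [base[r, c] for r, c in ring]
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (r, c), v in zip(ring, vals[-k:] + vals[:-k]):
            m[r, c] = v
        masks.append(m)
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for m in masks:                       # correlation with each mask
        resp = sum(m[i, j] * img[i:i + H - 2, j:j + W - 2]
                   for i in range(3) for j in range(3))
        out = np.maximum(out, resp)       # max over the 8 directions
    return out

# a vertical step edge gives a strong ridge response near the edge
img = np.zeros((10, 10)); img[:, 5:] = 1.0
edges = kirsch(img)
assert edges[:, 4].max() > edges[:, 0].max()
```

Canny's advantage over this operator, and presumably the reason for the swap, is its noise suppression (Gaussian smoothing), non-maximum suppression, and hysteresis thresholding, which yield thinner and more stable boundaries.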

  7. Optical Coherence Tomography Technology and Quality Improvement Methods for Optical Coherence Tomography Images of Skin: A Short Review

    Adabi, Saba; Turani, Zahra; Fatemizadeh, Emad; Clayton, Anne; Nasiriavanaki, Mohammadreza

    2017-01-01

    Optical coherence tomography (OCT) delivers 3-dimensional images of tissue microstructures. Although OCT imaging offers a promising high-resolution method, OCT images suffer from some artifacts that lead to misapprehension of tissue structures. Speckle, intensity decay, and blurring are 3 major artifacts in OCT images. Speckle is due to the low-coherence light source used in the configuration of OCT. Intensity decay is a deterioration of light with respect to depth, and blurring is the consequence of deficiencies of optical components. In this short review, we summarize some of the image enhancement algorithms for OCT images which address the abovementioned artifacts. PMID:28638245

  9. Detection of Point Sources on Two-Dimensional Images Based on Peaks

    R. B. Barreiro

    2005-09-01

    This paper considers the detection of point sources in two-dimensional astronomical images. The detection scheme we propose is based on peak statistics. We discuss the example of the detection of far galaxies in cosmic microwave background experiments throughout the paper, although the method we present is totally general and can be used in many other fields of data analysis. We consider sources with a Gaussian profile—that is, a fair approximation of the profile of a point source convolved with the detector beam in microwave experiments—on a background modeled by a homogeneous and isotropic Gaussian random field characterized by a scale-free power spectrum. Point sources are enhanced with respect to the background by means of linear filters. After filtering, we identify local maxima and apply our detection scheme, a Neyman-Pearson detector that defines our region of acceptance based on the a priori pdf of the sources and the ratio of number densities. We study the different performances of some linear filters that have been used in this context in the literature: the Mexican hat wavelet, the matched filter, and the scale-adaptive filter. We consider as well an extension to two dimensions of the biparametric scale-adaptive filter (BSAF). The BSAF depends on two parameters which are determined by maximizing the number density of real detections while fixing the number density of spurious detections. For our detection criterion the BSAF outperforms the other filters in the interesting case of white noise.
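The filtering-then-peak-finding stage can be sketched in the simplest case the abstract mentions, white noise, where the optimal linear filter is the source profile itself (the matched filter). The scale-free 1/f-type background of the paper would require a different filter; image size, source position, and noise level below are invented.

```python
import numpy as np

# Matched filtering of a Gaussian-profile point source in white noise,
# implemented as circular cross-correlation via the FFT, followed by
# peak (local maximum) detection.

rng = np.random.default_rng(2)
N, sigma = 64, 2.0
yy, xx = np.mgrid[:N, :N]
profile = lambda cy, cx: np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))

img = 1.0 * profile(40, 22) + 0.1 * rng.standard_normal((N, N))

# for white noise the matched filter IS the source profile
kernel = profile(N // 2, N // 2)
corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(kernel))))
corr = np.fft.fftshift(corr)              # put zero lag at the centre
peak = np.unravel_index(np.argmax(corr), corr.shape)

# the correlation peak recovers the source position (40, 22)
assert max(abs(peak[0] - 40), abs(peak[1] - 22)) <= 1
```

The paper's contribution starts after this step: instead of a plain threshold on the filtered peak amplitude, the Neyman-Pearson detector also uses the curvature of the peaks and the prior source distribution to define the acceptance region.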

  10. Modelling the Impact of Ground Planes on Antenna Radiation Using the Method of Auxiliary Sources

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    The Method of Auxiliary Sources is employed to model the impact of finite ground planes on the radiation from antennas. In many cases the computational cost of available commercial tools restricts the simulations to include only a small ground plane or, by use of the image principle, the infinitely...... large ground plane. The method proposed here makes use of results from such simulations to model large and moderate-sized finite ground planes. The method is applied to 3 different antenna test cases and a total of 5 different ground planes. Firstly it is validated through comparison with reference...... and measured reference solutions and the method is thus found to be a useful tool in determining the impact of finite ground planes....

  11. Ictal and interictal electric source imaging in presurgical evaluation

    Sharma, Praveen; Scherg, Michael; Pinborg, Lars H

    2018-01-01

    … comparing feasibility and accuracy of interictal (II) and ictal (IC) ESI are lacking. METHODS: We prospectively analysed long-term video EEG recordings (LTM) of patients admitted for presurgical evaluation. We performed ESI of II and IC signals, using two methods: equivalent current dipole (ECD…

  12. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative MI research platform for the rapid

  13. Method and apparatus to image biological interactions in plants

    Weisenberger, Andrew; Bonito, Gregory M.; Reid, Chantal D.; Smith, Mark Frederick

    2015-12-22

    A method to dynamically image the actual translocation of molecular compounds of interest in a plant root, root system, and rhizosphere without disturbing the root or the soil. The technique makes use of radioactive isotopes as tracers to label molecules of interest and to image their distribution in the plant and/or soil. The method allows for the study and imaging of various biological and biochemical interactions in the rhizosphere of a plant, including, but not limited to, mycorrhizal associations in such regions.

  14. A new optimal seam method for seamless image stitching

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method is proposed, which aims to stitch images with an overlapping area more seamlessly. Considering that the traditional gradient-domain optimal seam method yields a poor color-difference measurement and that the fusion algorithm takes a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied individually. The proposed method eliminates the stitching seam better than the traditional gradient-domain optimal seam approach, and is more efficient than the multi-band blending algorithm.
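The dynamic-programming seam search that methods of this kind build on can be sketched as follows. The energy here is the plain squared pixel difference of the two overlapping inputs (the paper's HSV conversion and pixel correction steps are omitted), and the synthetic images are constructed so the seam's correct location is known.

```python
import numpy as np

# Optimal seam by dynamic programming: find the top-to-bottom path
# through the overlap-region energy with minimal accumulated cost,
# allowing moves to the 3 neighbouring columns in the next row.

rng = np.random.default_rng(3)
a = rng.random((20, 12)); b = a.copy()
b[:, 6:] += 0.5                      # images agree only in columns 0..5
energy = (a - b) ** 2

H, W = energy.shape
cost = energy.copy()
for r in range(1, H):                # accumulate best predecessor cost
    for c in range(W):
        lo, hi = max(c - 1, 0), min(c + 2, W)
        cost[r, c] += cost[r - 1, lo:hi].min()

# backtrack the cheapest seam from the bottom row
seam = [int(np.argmin(cost[-1]))]
for r in range(H - 2, -1, -1):
    c = seam[-1]
    lo = max(c - 1, 0)
    seam.append(lo + int(np.argmin(cost[r, lo:min(c + 2, W)])))
seam.reverse()

# the seam stays in the zero-energy region where the images agree
assert all(c < 6 for c in seam)
```

The paper's refinements slot in around this core: a color-aware energy computed in HSV space to choose where the seam runs, and pixel correction plus weighted averaging to hide it afterwards.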

  15. Energy source perceptions and policy support: Image associations, emotional evaluations, and cognitive beliefs

    Barnes Truelove, Heather

    2012-01-01

    This paper represents the most in-depth effort conducted to date to assess affective, emotional and cognitive perceptions of coal, natural gas, nuclear, and wind energy and the relationship between these perceptions and support for the energy sources. U.S. residents, recruited from a consumer panel, completed surveys assessing image associations, emotional reactions, and cognitive beliefs about energy sources and support for increased reliance on energy sources and local siting of energy facilities. The content of images produced by participants when evaluating energy sources revealed several interesting findings. Additionally, analysis of the image evaluations, emotions, and beliefs about each energy source showed that coal and nuclear energy were viewed most negatively, with natural gas in the middle, and wind viewed most positively. Importantly, these affective, emotional, and cognitive perceptions explained significant amounts of variance in support for each of the energy sources. Implications for future researchers and policy makers are discussed. - Highlights: ► Image associations, emotions, and beliefs about energy sources were measured. ► A dual-process model of energy support was proposed and tested. ► Coal and nuclear were viewed most negatively and wind was viewed most positively. ► The cognitive-affective model predicted support for each energy source.

  16. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals; intrinsic performance of the methods; image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, digitized atlas); methodology of cerebral image superposition (normalization, retiming); image networks [fr]

  17. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation therapy departments, a simulator and an 'on-line' verification system for the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving etc.) of the image information is a necessity in image processing procedures in order to evaluate verification and simulation recordings on the computer screen. Distortion correction is, on the other hand, a prerequisite for quantitative comparison of both image modalities. Another limiting factor for quantitative assertions is the fact that the irradiation fields in radiotherapy are usually bigger than the field of view of an image intensifier; several segments of the irradiation field must therefore be acquired. Using pattern recognition techniques, these segments can be composed into a single image. In this paper a distortion correction method is presented. The method is based upon a well-defined grid which is embedded in the image during the registration process. The video signal from the image intensifier is acquired and processed, and the grid is recognised using image processing techniques. Ideally, if all grid points are recognised, various methods can be applied to correct the distortion. In practice, however, this is not the case: overlapping structures (bones etc.) mean that not all of the grid points can be recognised. Mathematical models from graph theory are applied to reconstruct the whole grid. The deviation of the grid point positions from their nominal values is then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, making a quantitative comparison in radiation treatment possible. The distortion correction method and its application to simulator images are presented. (authors)
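The final correction step described above, turning detected-versus-nominal grid point deviations into correction coefficients, can be sketched as a polynomial least-squares fit. The synthetic radial distortion, its strength, and the cubic term set below are hypothetical stand-ins for the intensifier's actual distortion and the paper's coefficient model.

```python
import numpy as np

# Fit a 3rd-order polynomial mapping from distorted (observed) grid
# coordinates back to the ideal grid, then apply it as the correction.

ideal = np.array([(x, y) for x in range(-3, 4) for y in range(-3, 4)], float)

def distort(p, k=0.001):             # synthetic radial (pincushion) distortion
    r2 = (p ** 2).sum(axis=1, keepdims=True)
    return p * (1 + k * r2)

obs = distort(ideal)                 # "recognised" grid point positions

def terms(p):                        # cubic polynomial design matrix
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                            x**3, x**2 * y, x * y**2, y**3])

# least-squares correction coefficients, one column per output coordinate
coef, *_ = np.linalg.lstsq(terms(obs), ideal, rcond=None)
corrected = terms(obs) @ coef

# residual deviation after correction is small across the whole grid
assert np.abs(corrected - ideal).max() < 0.05
```

In the real system the `ideal` positions come from the known grid geometry and `obs` from the grid recognised in the video frame (with graph-theoretic reconstruction filling the points hidden by bone shadows); the fitted mapping is then evaluated at every pixel to resample the image.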

  18. Implementation of an Imaging Spectrometer for Localization and Identification of Radioactive Sources

    Hermine, Lemaire; Carrel, Frederick; Gmar, Mehdi; Menesguen Yves; Normand, Stephane; Schoepff, Vincent; Abou-Khalil, Roger; Amgarou, Khalil; Menaa, Nabil; Tebug, Timi; Angelique, Jean-Claude; Bonnet, Florent; De-Toro, Daniel; Giarmana, Olivier; Patoz, Audrey; Talent, Philippe

    2013-06-01

    Spatial localization of radioactive sources is currently a key issue for the nuclear industry as well as for homeland security applications, and can be achieved using gamma cameras. For several years, CEA LIST has been designing a new system, called GAMPIX, with improved sensitivity, portability and ease of use. The main remaining limitation is the lack of spectrometric information, preventing the identification of radioactive materials. This article describes the development of an imaging spectrometer based on the GAMPIX technology. Experimental tests have been carried out for both spectrometric methods enabled by the pixelated Timepix readout chip used in the GAMPIX gamma camera. The first method is based on the size of the impacts produced by a gamma-ray energy deposition in the detection matrix. The second uses the Time over Threshold (ToT) mode of the Timepix chip and measures the time spent by the pulses generated by the charge preamplifiers above a user-specified threshold. Both energy resolution and sensitivity studies proved the superiority of the ToT approach, which will consequently be explored further. Energy calibration, tests of several pixel sizes and use of the Medipix3 readout chip are avenues for improving the performance of the newly implemented imaging spectrometer. (authors)
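
    As a sketch of the energy-calibration step mentioned above: in its simplest form, a ToT-to-energy calibration is a fit of measured ToT photopeak positions against known source energies. Real Timepix calibrations typically add a non-linear term near threshold; the purely linear form below is an assumption for illustration:

```python
import numpy as np

def fit_tot_calibration(energies_kev, tot_means):
    """Linear fit ToT ~ a*E + b from photopeak positions of known
    calibration sources (simplified; ignores the near-threshold
    non-linearity of the real response)."""
    a, b = np.polyfit(energies_kev, tot_means, 1)
    return a, b

def tot_to_energy(tot, a, b):
    """Invert the linear calibration to map a ToT value to keV."""
    return (tot - b) / a
```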

  19. Method of synthesis of abstract images with high self-similarity

    Matveev, Nikolay V.; Shcheglov, Sergey A.; Romanova, Galina E.; Koneva, Tatiana A.

    2017-06-01

    Abstract images with high self-similarity could be used for drug-free stress therapy. This is based on the fact that a complex visual environment has a high affective appraisal. To create such an image, we can use a setup based on three laser sources of small power and different colors (red, green, blue); the image is the pattern resulting from reflection and refraction by a complicated-form object placed in the laser ray paths. Images obtained experimentally in this way showed a good therapeutic effect. However, finding and choosing an object that gives the needed image structure is very difficult and requires many trials. The goal of this work is to develop a method and procedure for finding the object form which, if placed in the ray paths, provides the necessary structure of the image. In effect, the task means obtaining a required irradiance distribution on a given surface. Traditionally such problems are solved using non-imaging optics methods. In the given case this task is complicated by the intricate structure of the illuminance distribution and its high non-linearity. An alternative way is to use the projected image of a mask with a given structure. We consider both ways and discuss how they can help to speed up the synthesis procedure for a given abstract image of high self-similarity for drug-free therapy setups.

  20. HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS

    Thorat, K.; Subrahmanyan, R.; Saripalli, L.; Ekers, R. D., E-mail: kshitij@rri.res.in [Raman Research Institute, C. V. Raman Avenue, Sadashivanagar, Bangalore 560080 (India)

    2013-01-01

    The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6″ angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50″. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists (as opposed to component lists) and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.

  1. TH-EF-207A-05: Feasibility of Applying SMEIR Method On Small Animal 4D Cone Beam CT Imaging

    Zhong, Y; Zhang, Y; Shao, Y; Wang, J

    2016-01-01

    Purpose: Small animal cone beam CT imaging has been widely used in preclinical research. Due to the higher respiratory and heart rates of small animals, motion blurring is inevitable and needs to be corrected in the reconstruction. The simultaneous motion estimation and image reconstruction (SMEIR) method, which uses projection images of all phases, has proved effective in motion model estimation and able to reconstruct motion-compensated images. We demonstrate the application of SMEIR to small animal 4D cone beam CT imaging by computer simulations on a digital rat model. Methods: The small animal CBCT imaging system was simulated with a source-to-detector distance of 300 mm and a source-to-object distance of 200 mm. A sequence of rat phantoms was generated with 0.4 mm³ voxel size. The respiratory cycle was taken as 1.0 second, and the motion was simulated with a diaphragm motion of 2.4 mm and an anterior-posterior expansion of 1.6 mm. The projection images were calculated using a ray-tracing method, and 4D-CBCT images were reconstructed using the SMEIR and FDK methods. The SMEIR method iterates over two alternating steps: 1) motion-compensated iterative image reconstruction using projections from all respiration phases and 2) motion model estimation directly from projections through a 2D-3D deformable registration of the image obtained in the first step to the projection images of the other phases. Results: The images reconstructed using the SMEIR method reproduced the features of the original phantom. Projections from the same phase were also reconstructed using the FDK method. Compared with the FDK results, the images from the SMEIR method substantially improve image quality, with minimal artifacts. Conclusion: We demonstrate that it is viable to apply the SMEIR method to reconstruct small animal 4D-CBCT images.

  2. Computed tomography of x-ray index of refraction using the diffraction enhanced imaging method

    Dilmanian, F.A.; Ren, B.; Wu, X.Y.; Orion, I.; Zhong, Z.; Thomlinson, W.C.; Chapman, L.D.

    2000-01-01

    Diffraction enhanced imaging (DEI) is a new, synchrotron-based x-ray radiography method that uses monochromatic, fan-shaped beams, with an analyser crystal positioned between the subject and the detector. The analyser allows the detection of only those x-rays transmitted by the subject that fall into the acceptance angle (central part of the rocking curve) of the monochromator/analyser system. As shown by Chapman et al, in addition to the x-ray attenuation, the method provides information on the out-of-plane angular deviation of x-rays. Two new image types result, in which the contrast depends on the x-ray index of refraction and on the yield of small-angle scattering, respectively. We implemented DEI in tomography mode at the National Synchrotron Light Source using 22 keV x-rays, and imaged a cylindrical acrylic phantom that included oil-filled, slanted channels. The resulting 'refraction CT image' shows the pure image of the out-of-plane gradient of the x-ray index of refraction. No image artefacts were present, indicating that the CT projection data formed a consistent set. The 'refraction CT image' signal is linear in the gradient of the refractive index, and its value is equal to that expected. The method, at the energy used or higher, has potential for use in clinical radiography and in industry. (author)
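
    The separation of attenuation and refraction described by Chapman et al uses two images taken on opposite slopes of the rocking curve and solves a linearised two-equation model per pixel. A minimal sketch of that decomposition (function and variable names are ours, not from the paper):

```python
import numpy as np

def dei_decompose(i_low, i_high, r_low, r_high, dr_low, dr_high):
    """Chapman-style DEI decomposition.  Solves the linearised pair
       I_L = I_R * (R(th_L) + R'(th_L) * dth)
       I_H = I_R * (R(th_H) + R'(th_H) * dth)
    for the apparent absorption image I_R and the refraction-angle
    image dth, given the rocking curve R and its slope R' at the low
    and high angular settings of the analyser."""
    dth = (i_high * r_low - i_low * r_high) / (i_low * dr_high - i_high * dr_low)
    i_r = (i_low * dr_high - i_high * dr_low) / (r_low * dr_high - r_high * dr_low)
    return i_r, dth
```

    Applied pixel-wise to the two analyser images, this yields the absorption and refraction images; filtered back-projection of the latter gives the 'refraction CT image'.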

  3. Free and open source software for the manipulation of digital images.

    Solomon, Robert W

    2009-06-01

    Free and open source software is nearly as powerful as commercial software but is freely downloadable, and can do almost everything that the expensive programs can. GIMP (GNU Image Manipulation Program) is a free program comparable to Photoshop, and versions are available for Windows, Macintosh, and Linux platforms. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality software to achieve the goals of image processing and document creation, because free and open source software is available for the user to download at will.

  4. New adaptive sampling method in particle image velocimetry

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
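
    The spring-force redistribution idea can be illustrated in one dimension: neighbouring sampling points are joined by springs whose stiffness follows the local seeding density, so dense regions pull points together. The 2-D scheme and the actual force law are not specified here, so this is only an illustrative sketch:

```python
import numpy as np

def relax_points(x, density, n_iter=200, dt=0.3):
    """1-D sketch of spring-based redistribution of sampling points.
    x: sorted positions (endpoints held fixed); density: callable
    giving the local seeding density, used as spring stiffness."""
    x = x.copy()
    for _ in range(n_iter):
        k = density(0.5 * (x[:-1] + x[1:]))   # stiffness per gap
        f = k * np.diff(x)                    # spring force in each gap
        x[1:-1] += dt * (f[1:] - f[:-1])      # net force moves interior points
        x.sort()                              # keep ordering
    return x
```

    With uniform density the scheme relaxes to uniform spacing; a density peaked in a region of interest concentrates sampling points there instead.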

  5. Research on interpolation methods in medical image processing.

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are presented, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations show a certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
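
    Among the figures of merit used in the comparison above, the peak signal-to-noise ratio is the easiest to state precisely; a minimal sketch:

```python
import numpy as np

def psnr(reference, test_img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a processed (e.g. interpolated) image; higher means closer."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test_img, float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(peak ** 2 / mse)
```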

  6. 99Tc in the environment. Sources, distribution and methods

    Garcia-Leon, Manuel

    2005-01-01

    ⁹⁹Tc is a β-emitter (E_max = 294 keV) with a very long half-life (T_1/2 = 2.11 × 10⁵ y). It is mainly produced in the fission of ²³⁵U and ²³⁹Pu, at a yield of about 6%. This yield, together with its long half-life, makes it a significant nuclide in the whole nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimation shows that, adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000, and that up to the middle of the nineties of the last century some 64000 TBq had been produced worldwide. Nuclear explosions have liberated some 160 TBq into the environment. In this work, the environmental distribution of ⁹⁹Tc as well as the methods for its determination are discussed. Emphasis is put on the environmental relevance of ⁹⁹Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task. For that reason, special mention is made of the mass spectrometric methods for its measurement. (author)

  7. A compact hard X-ray source for medical imaging and biomolecular studies

    Cline, D.B.; Green, M.A.; Kolonko, J.

    1995-01-01

    There are a large number of synchrotron light sources in the world. However, these sources are designed for physics, chemistry, and engineering studies. To our knowledge, none have been optimized for either medical imaging or biomolecular studies. There are special needs for these applications. We present here a preliminary design of a very compact source, small enough for a hospital or a biomolecular laboratory, that is suitable for these applications. (orig.)

  8. Advanced methods for image registration applied to JET videos

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.

  9. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    Kim, Joshua; Zhang, Tiezhi; Lu, Weiguo

    2014-01-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. But suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source–dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear-array x-ray sources mounted beside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10–15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed with a 45 cm field of view. Due to the scatter-rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source–dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented. (paper)

  10. Method for estimating modulation transfer function from sample images.

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
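
    The described procedure (Fourier transform, logarithm of the squared norm plotted against squared frequency, Gaussian PSF recovered from a linear fit) can be sketched in numpy as follows. The fitting band is an assumption, and the sketch assumes a roughly flat scene spectrum, as the paper's linear plots suggest:

```python
import numpy as np

def estimate_psf_sigma(image):
    """Estimate the std-dev (in pixels) of a Gaussian-approximated PSF
    from a single blurred image.  For a Gaussian PSF the OTF satisfies
    |H(f)|^2 = exp(-4 pi^2 sigma^2 f^2), so log-power is linear in f^2
    and the slope gives sigma; the MTF is then exp(-2 pi^2 sigma^2 f^2)."""
    n = image.shape[0]
    power = np.abs(np.fft.fft2(image)) ** 2
    fx = np.fft.fftfreq(n)
    f2 = fx[None, :] ** 2 + fx[:, None] ** 2          # squared frequency radius
    band = (f2 > 0) & (f2 < 0.05)                     # low-frequency band, DC excluded
    slope, _ = np.polyfit(f2[band], np.log(power[band]), 1)
    return np.sqrt(max(-slope, 0.0) / (4 * np.pi ** 2))
```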

  11. Technical Note: PLASTIMATCH MABS, an open source tool for automatic image segmentation

    Zaffino, Paolo; Spadea, Maria Francesca; Raudaschl, Patrik; Fritscher, Karl; Sharp, Gregory C.

    2016-01-01

    Purpose: Multiatlas based segmentation is widely used in many clinical and research applications. Due to its good performance, it has recently been included in some commercial platforms for radiotherapy planning and surgery guidance. However, to date, software without restrictions on anatomical district and image modality has been missing. In this paper we introduce PLASTIMATCH MABS, an open source software that can be used with any image modality for automatic segmentation. Methods: The PLASTIMATCH MABS workflow consists of two main parts: (1) an offline phase, where optimal registration and voting parameters are tuned, and (2) an online phase, where a new patient is labeled from scratch using the same parameters identified in the former phase. Several registration strategies, as well as different voting criteria, can be selected. A flexible atlas selection scheme is also available. To prove the effectiveness of the proposed software across anatomical districts and image modalities, it was tested on two very different scenarios: head and neck (H&N) CT segmentation for radiotherapy, and magnetic resonance image brain labeling for neuroscience investigation. Results: For the neurological study, the minimum Dice coefficient was 0.76 (investigated structures: left and right caudate, putamen, thalamus, and hippocampus). For the head and neck case, the minimum Dice was 0.42 for the most challenging structures (optic nerves and submandibular glands) and 0.62 for the others (mandible, brainstem, and parotid glands). The time required to obtain the labels was compatible with a real clinical workflow (35 and 120 min). Conclusions: The proposed software fills a gap in the multiatlas based segmentation field, since all currently available tools (both commercial and research) are restricted to a well specified application. Furthermore, it can be adopted as a platform for exploring MABS parameters and as a reference implementation to compare against.
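
    As an illustration of the voting step in a multiatlas pipeline, a (weighted) majority-voting fusion of registered atlas label maps can be sketched as follows. This is a generic formulation for illustration, not the specific voting criteria implemented in PLASTIMATCH MABS:

```python
import numpy as np

def vote_labels(atlas_labels, weights=None):
    """Fuse label maps from several registered atlases by (weighted)
    majority voting; ties go to the smallest label value.
    atlas_labels: integer array of shape (n_atlases, ...)."""
    atlas_labels = np.asarray(atlas_labels)
    n = atlas_labels.shape[0]
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    w = w.reshape((n,) + (1,) * (atlas_labels.ndim - 1))
    labels = np.unique(atlas_labels)
    # Per-voxel weighted vote count for each candidate label
    scores = np.stack([((atlas_labels == lab) * w).sum(axis=0) for lab in labels])
    return labels[np.argmax(scores, axis=0)]
```

    Weights can come from, e.g., registration similarity, which is one common way atlas selection and voting interact.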

  12. Technical Note: PLASTIMATCH MABS, an open source tool for automatic image segmentation

    Zaffino, Paolo; Spadea, Maria Francesca [Department of Experimental and Clinical Medicine, Magna Graecia University of Catanzaro, Catanzaro 88100 (Italy); Raudaschl, Patrik; Fritscher, Karl [Institute for Biomedical Image Analysis, Private University of Health Sciences, Medical Informatics and Technology, Hall in Tirol 6060 (Austria); Sharp, Gregory C. [Department for Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States)

    2016-09-15

    Purpose: Multiatlas based segmentation is widely used in many clinical and research applications. Due to its good performance, it has recently been included in some commercial platforms for radiotherapy planning and surgery guidance. However, to date, software without restrictions on anatomical district and image modality has been missing. In this paper we introduce PLASTIMATCH MABS, an open source software that can be used with any image modality for automatic segmentation. Methods: The PLASTIMATCH MABS workflow consists of two main parts: (1) an offline phase, where optimal registration and voting parameters are tuned, and (2) an online phase, where a new patient is labeled from scratch using the same parameters identified in the former phase. Several registration strategies, as well as different voting criteria, can be selected. A flexible atlas selection scheme is also available. To prove the effectiveness of the proposed software across anatomical districts and image modalities, it was tested on two very different scenarios: head and neck (H&N) CT segmentation for radiotherapy, and magnetic resonance image brain labeling for neuroscience investigation. Results: For the neurological study, the minimum Dice coefficient was 0.76 (investigated structures: left and right caudate, putamen, thalamus, and hippocampus). For the head and neck case, the minimum Dice was 0.42 for the most challenging structures (optic nerves and submandibular glands) and 0.62 for the others (mandible, brainstem, and parotid glands). The time required to obtain the labels was compatible with a real clinical workflow (35 and 120 min). Conclusions: The proposed software fills a gap in the multiatlas based segmentation field, since all currently available tools (both commercial and research) are restricted to a well specified application. Furthermore, it can be adopted as a platform for exploring MABS parameters and as a reference implementation to compare against.

  13. Research of x-ray automatic image mosaic method

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in the field of medical image analysis; it is a technique that spatially matches a series of overlapping images and finally builds a seamless, high quality image with high resolution and a large field of view. In this paper, grayscale-cutting pseudo-color enhancement was first used to complete the mapping transformation from gray to pseudo-color and to extract SIFT features from the images. Then, using normalized cross-correlation (NCC) as the similarity measure, the RANSAC (Random Sample Consensus) method was used to reject false feature points in order to complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using wavelet multi-decomposition. The experiments show that this method can effectively improve the precision and automation of medical image mosaicking, and provides an effective technical approach for automatic medical image mosaicking.
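
    The NCC similarity measure used above to screen candidate feature correspondences can be sketched as follows; values near 1 indicate a likely correct match, and NCC is invariant to linear brightness changes between the two patches:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches,
    in [-1, 1]; mean subtraction makes it insensitive to brightness
    offset, normalization to contrast scaling."""
    a = np.asarray(patch_a, float) - np.mean(patch_a)
    b = np.asarray(patch_b, float) - np.mean(patch_b)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```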

  14. Methods to identify and locate spent radiation sources


    1995-07-01

    The objective of this manual is to provide essential guidance to Member States with nuclear applications involving the use of a wide range of sealed radiation sources on the practical task of physically locating spent radiation sources not properly accounted for. Advice is also provided to render the located source safe on location. Refs, figs and tabs.

  15. Methods to identify and locate spent radiation sources

    1997-06-01

    The objective of this manual is to provide essential guidance to Member States with nuclear applications involving the use of a wide range of sealed radiation sources on the practical task of physically locating spent radiation sources not properly accounted for. Advice is also provided to render the located source safe on location. Refs, figs, tabs

  16. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...

  18. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology, and diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensor. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decompose the diffusion tensors, with the direction of the tensors represented by quaternions. Then we revise the size and direction of the tensor separately according to the situation. Finally, we acquire the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method not only keeps the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserves tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
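
    As a point of reference for the comparison above, the Log-Euclidean baseline the authors compare against interpolates two symmetric positive-definite tensors as exp((1-t) log(D1) + t log(D2)). A minimal sketch (generic formulation, not the authors' improved method):

```python
import numpy as np

def log_euclidean_interp(d1, d2, t):
    """Log-Euclidean interpolation between two SPD diffusion tensors:
    matrix logarithm, linear blend, matrix exponential."""
    def logm(d):
        w, v = np.linalg.eigh(d)          # eigendecomposition of SPD matrix
        return (v * np.log(w)) @ v.T      # V diag(log w) V^T
    def expm(s):
        w, v = np.linalg.eigh(s)
        return (v * np.exp(w)) @ v.T
    return expm((1 - t) * logm(d1) + t * logm(d2))
```

    Because the blend is done in log space, determinants interpolate geometrically, which avoids the tensor "swelling" of plain linear averaging.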

  19. EXPRESS METHOD OF BARCODE GENERATION FROM FACIAL IMAGES

    G. A. Kukharev

    2014-03-01

    In this paper, a method for generating standard linear barcodes from facial images is proposed. The method is based on the brightness histogram of the facial image: the histogram is averaged over a limited number of intervals, the results are quantized to decimal digits from 0 to 9, and a table conversion produces the final barcode. The proposed solution is computationally low-cost and does not require specialized image processing software, which allows facial barcodes to be generated on mobile systems; the proposed method can therefore be regarded as an express method. Tests on the Face94 and CUHK Face Sketch FERET databases showed that the proposed method is a practical new solution and ensures the stability of the generated barcodes under changes of scale, pose and mirroring of the facial image, as well as changes of facial expression and shadows on the face from local lighting. Because the method generates a standard barcode directly from the facial image, the barcode carries information specific to the person's face.
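
    A minimal sketch of the histogram-average-quantize pipeline described above. The interval count and the normalization to the 0-9 range are assumptions, and the paper's final table conversion to a barcode symbology is omitted:

```python
import numpy as np

def face_to_digits(gray_image, n_intervals=12):
    """Brightness histogram -> averages over a limited number of
    intervals -> quantized digits 0..9.  The digit string would then
    be mapped to a standard linear barcode by a table lookup."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    chunks = np.array_split(hist, n_intervals)       # limited number of intervals
    means = np.array([c.mean() for c in chunks], dtype=float)
    top = means.max()
    scaled = 9.0 * means / top if top > 0 else means # quantize to 0..9
    return "".join(str(int(np.rint(v))) for v in scaled)
```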

  20. A rotating modulation imager for locating mid-range point sources

    Kowash, B.R.; Wehe, D.K.; Fessler, J.A.

    2009-01-01

    Rotating modulation collimators (RMCs) are relatively simple indirect imaging devices that have proven useful in gamma ray astronomy (far field) and have more recently been studied for medical imaging (very near field). At the University of Michigan an RMC has been built to study its performance for homeland security applications. This research highlights the imaging performance of the system and focuses on three distinct regions in the RMC field of view that can impact the search for hidden sources. These regions are a blind zone around the axis of rotation, a two-mask image zone that extends from the blind zone to the edge of the field of view, and a single-mask image zone that occurs when sources fall outside the field of view of both masks. By considering the extent and impact of these zones, the size of the two-mask region can be optimized for the best system performance.