WorldWideScience

Sample records for driven imaging methods

  1. Imaging characteristics of distance-driven method in a prototype cone-beam computed tomography (CBCT)

    Science.gov (United States)

    Choi, Sunghoon; Kim, Ye-seul; Lee, Haenghwa; Lee, Donghoon; Seo, Chang-Woo; Kim, Hee-Joung

    2016-03-01

    Cone-beam computed tomography (CBCT) has been widely used and studied in both medical imaging and radiation therapy. The aim of this study was to evaluate our newly developed CBCT system, which implements a distance-driven system modeling technique to produce accurate cross-sectional images. To compare the performance of the distance-driven method, we also implemented pixel-driven and ray-driven techniques for the forward- and back-projection steps. We used the Feldkamp-Davis-Kress (FDK) algorithm and the simultaneous algebraic reconstruction technique (SART) to retrieve volumetric information from a scanned chest phantom. The contrast-to-noise ratio (CNR) of the images reconstructed with FDK and SART was 8.02 and 15.78, respectively, for the distance-driven scheme, compared with 4.02 and 5.16 for the pixel-driven scheme and 7.81 and 13.01 for the ray-driven scheme. This demonstrates that the distance-driven method described the chest phantom more faithfully than the pixel- and ray-driven methods. However, both modeling the system matrix and reconstruction took longer with the distance-driven scheme. Future work will therefore be directed toward reducing computational time to limits acceptable for real applications.
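
    The contrast-to-noise ratio used as the figure of merit above is simple to compute; here is a minimal numpy sketch, where the toy image, the masks, and the exact CNR definition |mean(ROI) - mean(background)| / std(background) are my assumptions (several CNR variants exist in the literature):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a region of interest and background."""
    roi = image[roi_mask]
    bg = image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

# toy "reconstruction": a bright disc (ROI) on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
img[roi] += 40.0                                  # contrast of 40 on noise std 5
bg = (yy - 32) ** 2 + (xx - 32) ** 2 > 20 ** 2
val = cnr(img, roi, bg)                           # expect roughly 40 / 5 = 8
```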

  2. A Scale-Driven Change Detection Method Incorporating Uncertainty Analysis for Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Ming Hao

    2016-09-01

    Change detection (CD) based on remote sensing images plays an important role in Earth observation. However, CD accuracy is usually affected by sunlight, atmospheric conditions, and sensor calibration. In this study, a scale-driven CD method incorporating uncertainty analysis is proposed to increase CD accuracy. First, two temporal images are stacked and segmented into multiscale segmentation maps. Then, a pixel-based change map, with memberships belonging to the changed and unchanged parts, is obtained by fuzzy c-means clustering. Finally, based on Dempster-Shafer evidence theory, the proposed scale-driven CD method incorporating uncertainty analysis is performed on the multiscale segmentation maps and the pixel-based change map. Two experiments were carried out on Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and SPOT 5 data sets. The ratio of total errors was reduced to 4.0% and 7.5% for the ETM+ and SPOT 5 data sets, respectively. Moreover, the proposed approach outperforms several state-of-the-art CD methods and provides an effective solution for CD.
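
    The Dempster-Shafer fusion step can be illustrated with Dempster's combination rule over the two-hypothesis frame {changed, unchanged}; the mass values below are invented for illustration and the helper is hypothetical, not the authors' implementation:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'c','u'} ('c' = changed,
    'u' = unchanged, 'T' = whole frame, i.e. uncertainty) by Dempster's rule."""
    K = m1['c'] * m2['u'] + m1['u'] * m2['c']          # conflicting mass
    norm = 1.0 - K                                     # renormalisation factor
    return {
        'c': (m1['c'] * m2['c'] + m1['c'] * m2['T'] + m1['T'] * m2['c']) / norm,
        'u': (m1['u'] * m2['u'] + m1['u'] * m2['T'] + m1['T'] * m2['u']) / norm,
        'T': (m1['T'] * m2['T']) / norm,
    }

# pixel-level evidence (e.g. from fuzzy memberships) vs. object-scale evidence
pixel_mass  = {'c': 0.6, 'u': 0.2, 'T': 0.2}
object_mass = {'c': 0.5, 'u': 0.3, 'T': 0.2}
fused = dempster_combine(pixel_mass, object_mass)      # agreement strengthens 'c'
```

    When both sources lean toward "changed", the combined belief in "changed" exceeds either input, which is the behaviour the fusion step relies on.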

  3. Multigrid Method for a Modified Curvature Driven Diffusion Model for Image Inpainting

    Institute of Scientific and Technical Information of China (English)

    Carlos Brito-Loeza; Ke Chen

    2008-01-01

    Digital inpainting is a fundamental problem in image processing, and many variational models for this problem have appeared recently in the literature. Among them are the very successful Total Variation (TV) model [11], designed for local inpainting, and its improved version for large-scale inpainting: the Curvature-Driven Diffusion (CDD) model [10]. For these two models, the associated Euler-Lagrange equations are highly nonlinear partial differential equations. For the TV model there exists a relatively fast and easy-to-implement fixed point method, so adapting the multigrid method of [24] is immediate. For the CDD model, however, only the well-known but usually very slow explicit time-marching method has been reported so far, and we explain why the implementation of a fixed point method for the CDD model is not straightforward. Consequently, the multigrid method as in [Savage and Chen, Int. J. Comput. Math., 82 (2005), pp. 1001-1015] will not work here. This fact strongly limits the range of applications of this model, since fast solutions are usually expected. In this paper, we introduce a modification designed to enable a fixed point method to work while preserving the features of the original CDD model. As a result, a fast and efficient multigrid method is developed for the modified model. Numerical experiments are presented to show the very good performance of the fast algorithm.
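
    The slow explicit time-marching baseline mentioned above can be sketched for the simpler epsilon-regularised TV flow (the CDD model additionally weights the diffusivity by a curvature factor, which is omitted here). This is an illustrative numpy sketch under assumed step sizes, not the authors' scheme:

```python
import numpy as np

def tv_inpaint(u, mask, n_iter=500, dt=0.2, eps=1.0):
    """Explicit time-marching for eps-regularised TV inpainting:
    u_t = div( grad(u) / sqrt(|grad(u)|^2 + eps) ), updated only on `mask`.
    Large eps keeps the explicit scheme stable at this dt."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        p, q = ux / mag, uy / mag
        # backward-difference divergence of the normalised gradient
        div = (p - np.roll(p, 1, axis=1)) + (q - np.roll(q, 1, axis=0))
        u[mask] += dt * div[mask]                        # evolve inside the hole only
    return u

# fill an 8x8 hole cut from a constant image
img = np.full((32, 32), 5.0)
hole = np.zeros(img.shape, dtype=bool)
hole[12:20, 12:20] = True
img[hole] = 0.0
restored = tv_inpaint(img, hole)                         # hole diffuses toward 5
```

    Even on this tiny example hundreds of iterations are needed, which is the slowness the paper's multigrid method is designed to overcome.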

  4. A tunable fluorescent timer method for imaging spatial-temporal protein dynamics using light-driven photoconvertible protein.

    Science.gov (United States)

    Zhu, Xinxin; Zhang, Luyuan; Kao, Ya-Ting; Xu, Fang; Min, Wei

    2015-03-01

    Cellular function is largely determined by protein behaviors occurring in both space and time. While regular fluorescent proteins can only report spatial locations of the target inside cells, fluorescent timers have emerged as an invaluable tool for revealing coupled spatial-temporal protein dynamics. Existing fluorescent timers are all based on chemical maturation. Herein we propose a light-driven timer concept that could report relative protein ages at specific sub-cellular locations, by weakly but chronically illuminating photoconvertible fluorescent proteins inside cells. This new method exploits light, instead of oxygen, as the driving force. Therefore its timing speed is optically tunable by adjusting the photoconverting laser intensity. We characterized this light-driven timer method both in vitro and in vivo and applied it to image spatiotemporal distributions of several proteins with different lifetimes. This novel timer method thus offers a flexible "ruler" for studying temporal hierarchy of spatially ordered processes with exquisite spatial-temporal resolution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
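
    The tunable-clock idea can be captured by a toy single-rate model: under weak constant illumination, the unconverted (green) fraction of a protein cohort decays as exp(-k*t), with the rate k scaling with laser intensity, so the red:total ratio encodes protein age. All numbers below are assumed for illustration:

```python
import numpy as np

def red_fraction(age, k):
    """Fraction of a cohort photoconverted after `age` time units at rate k."""
    return 1.0 - np.exp(-k * age)

def age_from_ratio(f_red, k):
    """Invert the conversion model to read protein age off the colour ratio."""
    return -np.log(1.0 - f_red) / k

k = 0.05                        # conversion rate per minute (assumed, set by laser power)
true_age = 30.0                 # minutes
f = red_fraction(true_age, k)   # observed red fraction
est = age_from_ratio(f, k)      # recovered age, 30.0
```

    Doubling the illumination intensity doubles k and halves the time needed to reach a given ratio, which is the "optically tunable timing speed" described above.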

  5. Image Filtering Driven by Level Curves

    Science.gov (United States)

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    This paper presents an approach to image filtering that is driven by the properties of the iso-valued level curves of the image and their relationship with one another. We explore the relationship of our algorithm to existing probabilistically driven filtering methods such as those based on kernel density estimation, local-mode finding and mean-shift. Extensive experimental results on filtering gray-scale images, color images, gray-scale video and chromaticity fields are presented. In contrast to existing probabilistic methods, in our approach, the selection of the parameter that prevents diffusion across the edge is robustly decoupled from the smoothing of the density itself. Furthermore, our method is observed to produce better filtering results for the same settings of parameters for the filter window size and the edge definition.
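
    For context, the mean-shift-style probabilistic filtering that the paper compares against can be sketched as per-sample intensity mode seeking; the 1D setting, window size, and bandwidth below are arbitrary choices, and this is not the authors' level-curve algorithm:

```python
import numpy as np

def mean_shift_filter(signal, half_window=5, bandwidth=10.0, n_iter=5):
    """Per-sample mean-shift toward the local intensity mode: each value moves
    to the mean of its neighbours, weighted by a Gaussian in intensity."""
    out = signal.astype(float).copy()
    n = len(out)
    for _ in range(n_iter):
        new = out.copy()
        for i in range(n):
            lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
            nbr = signal[lo:hi]
            w = np.exp(-0.5 * ((nbr - out[i]) / bandwidth) ** 2)
            new[i] = (w * nbr).sum() / w.sum()
        out = new
    return out

# noisy step edge: each side flattens while the 100-level jump is preserved,
# because samples across the edge get negligible intensity weight
rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(50), np.full(50, 100.0)]) + rng.normal(0, 3, 100)
f = mean_shift_filter(x)
```

    Note how the bandwidth parameter here couples edge preservation to the density smoothing — exactly the coupling the paper's approach decouples.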

  6. A distance-driven deconvolution method for CT image-resolution improvement

    Science.gov (United States)

    Han, Seokmin; Choi, Kihwan; Yoo, Sang Wook; Yi, Jonghyon

    2016-12-01

    The purpose of this research is to achieve high spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to use a geometric-optics model, which provides an approximate blurring PSF (point spread function) kernel that varies with the distance from the X-ray tube to each point. The FOV (field of view) is divided into several band regions based on the distance from the X-ray source, and each region is deconvolved with a different deconvolution kernel. As the number of subbands increases, the overshoot of the MTF (modulation transfer function) curve first increases and then begins to decrease, while still showing a larger MTF than normal FBP (filtered backprojection). The case of five subbands appears to balance MTF boost against overshoot minimization. As the number of subbands increases, the noise (standard deviation) tends to decrease. The results show that spatial resolution in CT images can be improved without using high-resolution detectors or focal-spot wobbling. The proposed algorithm shows promising results in improving spatial resolution while avoiding excessive noise boost.
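
    The band-wise deconvolution idea can be illustrated in 1D: split the signal into regions, assume each is blurred by a different (here Gaussian) PSF standing in for the distance-dependent kernel, and deconvolve each region with a matching Wiener filter. Kernel shapes, band boundaries, and the regularization constant are all assumptions:

```python
import numpy as np

def gaussian_kernel(n, sigma):
    x = np.arange(n)
    g = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
    return g / g.sum()

def wiener_deconv(y, kernel, lam=1e-2):
    """Frequency-domain Wiener deconvolution with regularisation lam."""
    H = np.fft.fft(np.fft.ifftshift(kernel))
    Y = np.fft.fft(y)
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

n = 128
sharp = np.zeros(n)
sharp[40] = 1.0                                   # impulse in the near band
sharp[90] = 1.0                                   # impulse in the far band
bands = [(slice(0, 64), 1.5), (slice(64, 128), 3.0)]   # (region, PSF sigma)

# blur each band with its own (circular) Gaussian PSF
blurred = np.concatenate([
    np.real(np.fft.ifft(np.fft.fft(sharp[s]) *
            np.fft.fft(np.fft.ifftshift(gaussian_kernel(64, sig)))))
    for s, sig in bands])

# deconvolve each band with the kernel matched to its distance
restored = np.concatenate([wiener_deconv(blurred[s], gaussian_kernel(64, sig))
                           for s, sig in bands])
```

    Each impulse is sharpened by the filter matched to its band, mirroring how the paper applies a different kernel per distance band; the regularisation constant plays the role of the noise-versus-overshoot tradeoff discussed above.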

  7. Image-driven mesh optimization

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Turk, G

    2001-01-05

    We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.

  8. Application-driven computational imaging

    Science.gov (United States)

    McCloskey, Scott

    2016-05-01

    This paper addresses how the image processing steps involved in computational imaging can be adapted to specific image-based recognition tasks, and how significant reductions in computational complexity can be achieved by leveraging the recognition algorithm's robustness to defocus, poor exposure, and the like. Unlike aesthetic applications of computational imaging, recognition systems need not produce the best possible image quality, but instead need only satisfy certain quality thresholds that allow for reliable recognition. The paper specifically addresses light field processing for barcode scanning, and presents three optimizations which bring light field processing within the complexity limits of low-powered embedded processors.

  9. Undersampled MR Image Reconstruction with Data-Driven Tight Frame

    Directory of Open Access Journals (Sweden)

    Jianbo Liu

    2015-01-01

    Undersampled magnetic resonance image reconstruction employing sparsity regularization has fascinated many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack the adaptability to capture structure information or suffer from high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of the data-driven tight frame, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared with two state-of-the-art methods on four datasets, and encouraging performance has been achieved by DDTF-MRI.
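
    DDTF-MRI learns its sparsifying frame from the data and solves the model with Bregman iterations; as a much simpler stand-in, the sketch below shows the generic sparsity-plus-data-consistency loop (plain ISTA with a fixed identity frame) recovering a spike signal from undersampled Fourier measurements. All sizes and thresholds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(0, 1, 5) * 5   # sparse signal

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, 48, replace=False)] = True     # keep 48 of 128 k-space samples
F = lambda v: np.fft.fft(v) / np.sqrt(n)          # unitary FFT pair
Fi = lambda v: np.fft.ifft(v) * np.sqrt(n)
y = F(x_true)[mask]                               # undersampled measurements

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 sparsity penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: gradient step on ||A x - y||^2, then shrink toward sparsity
x = np.zeros(n)
for _ in range(200):
    r = np.zeros(n, dtype=complex)
    r[mask] = F(x)[mask] - y                      # residual in sampled k-space
    x = soft(np.real(x - Fi(r)), 0.05)
```

    The data-driven twist in DDTF-MRI is that the identity frame here is replaced by a tight frame trained on the image being reconstructed.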

  10. Undersampled MR Image Reconstruction with Data-Driven Tight Frame.

    Science.gov (United States)

    Liu, Jianbo; Wang, Shanshan; Peng, Xi; Liang, Dong

    2015-01-01

    Undersampled magnetic resonance image reconstruction employing sparsity regularization has fascinated many researchers in recent years under the support of compressed sensing theory. Nevertheless, most existing sparsity-regularized reconstruction methods either lack the adaptability to capture structure information or suffer from high computational load. With the aim of further improving image reconstruction accuracy without introducing too much computation, this paper proposes a data-driven tight frame magnetic resonance image reconstruction (DDTF-MRI) method. By taking advantage of the efficiency and effectiveness of the data-driven tight frame, DDTF-MRI trains an adaptive tight frame to sparsify the to-be-reconstructed MR image. Furthermore, a two-level Bregman iteration algorithm has been developed to solve the proposed model. The proposed method has been compared with two state-of-the-art methods on four datasets, and encouraging performance has been achieved by DDTF-MRI.

  12. A Table-Driven Control Method to Meet Continuous, Near-Real-Time Observation Requirements for the Solar X-Ray Imager

    Science.gov (United States)

    Wallace, Shawn; Brown, Terry; Freestone, Kathleen

    1998-01-01

    The design of the Solar X-Ray Imager (SXI) for the Geostationary Operational Environmental Satellite (GOES) presents an unusual scenario for controlling the observing sequences. The SXI is an operational instrument, designed not primarily for scientific research, but for providing "operational" data used by the National Oceanic and Atmospheric Administration (NOAA) to forecast the near-term space weather. To this end, a sequence of images selected to cover the full dynamic range of the sun will be executed routinely. As the dynamics of the sun have differing temporal cadences, the frequency of various images will differ. These images must be routinely received at the forecast center in near real-time, 24 hours a day, with a minimum of interruptions. While these requirements clearly lead to a 'routine patrol' of images, the parameters for each do not form a static set. The dynamics of the sun will change with the 11-year solar cycle. The performance of the imaging will vary with on-orbit conditions and time. And while the SXI is not intended as a research instrument, forecasting techniques may change with time, which in turn will further alter the imaging sequences. An additional complication is the highly restricted commanding window, and a very slow commanding rate. To fulfill these requirements, the SXI was designed to utilize a table-driven approach. Sequences are defined using structured loops, with nested repetitions and delays. These sequences reference combinations of imaging parameters which in turn reference tables of parameters that can be loaded by ground commands. Multiple sequences can be built and stored in preparation for execution when determined appropriate by the NOAA forecasters. The result is an approach that can be used to provide a flexible, yet autonomous SXI capable of meeting both arbitrary forecasting requirements, and operating within the commanding constraints.
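
    The table-driven idea — sequences of structured loops referencing separately loadable parameter tables — can be miniaturized as follows. All table contents, field names, and timings are invented for illustration; the point is that retuning cadence or exposure only requires reloading a small table, not new code:

```python
# imaging-parameter tables, uploadable by ground command independently of the sequences
exposure_table = {0: {"filter": "thin", "exposure_ms": 10},
                  1: {"filter": "thick", "exposure_ms": 100}}

# each sequence entry: (table_key, repeat_count, delay_seconds_after_each_image)
patrol_sequence = [(0, 2, 60),     # two quick thin-filter images a minute apart
                   (1, 1, 300)]    # one long thick-filter image

def run_sequence(sequence, tables):
    """Interpret a sequence of structured loops against the parameter tables,
    returning a (time, filter, exposure) schedule instead of taking images."""
    log = []
    t = 0
    for key, repeats, delay in sequence:
        for _ in range(repeats):
            params = tables[key]
            log.append((t, params["filter"], params["exposure_ms"]))
            t += delay
    return log

schedule = run_sequence(patrol_sequence, exposure_table)
```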

  14. On the data-driven COS method

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); C.W. Oosterlee (Cornelis); L. Ortiz Gracia (Luis); S.M. Bohte (Sander)

    2018-01-01

    In this paper, we present the data-driven COS method, ddCOS. It is a Fourier-based financial option valuation method which assumes the availability of asset data samples: a characteristic function of the underlying asset probability density function is not required. As such, the ...
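
    The core data-driven twist can be sketched for density recovery: the cosine-series coefficients of the COS expansion, normally obtained from the characteristic function, are instead estimated as sample averages. The truncation interval, series length, and Gaussian test case below are my choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 100_000)       # stand-in for asset-return samples
a, b, N = -8.0, 8.0, 64                       # truncation interval and series length

k = np.arange(N)
# data-driven cosine coefficients: sample means replace the (unknown)
# characteristic function, F_k = (2/(b-a)) * E[cos(k*pi*(X-a)/(b-a))]
Fk = 2.0 / (b - a) * np.mean(
    np.cos(np.outer(k, samples - a) * np.pi / (b - a)), axis=1)
Fk[0] *= 0.5                                  # first term of the series is halved

def density(x):
    """COS reconstruction of the density from the estimated coefficients."""
    return Fk @ np.cos(np.outer(k, x - a) * np.pi / (b - a))

est = density(np.array([0.0]))[0]             # should approach N(0,1) peak
true = 1.0 / np.sqrt(2 * np.pi)
```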

  15. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
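
    A kernel generated from a density mixture can be sketched in the spirit of the paper: two points are similar when a mixture model assigns them similar component posteriors, giving K = R Rᵀ with R[i, c] = P(component c | x_i), which is symmetric positive semi-definite by construction. The fixed two-component model below is my simplification (the paper estimates the mixture from data):

```python
import numpy as np

rng = np.random.default_rng(0)
# two well-separated 1D clusters standing in for two spectral classes
X = np.concatenate([rng.normal(-2, 0.5, 15), rng.normal(2, 0.5, 15)])

means, sigma = np.array([-2.0, 2.0]), 0.7     # fixed mixture components (assumed)
dens = np.exp(-0.5 * ((X[:, None] - means[None, :]) / sigma) ** 2)
R = dens / dens.sum(axis=1, keepdims=True)    # posterior responsibilities P(c|x)
K = R @ R.T                                   # mixture-density Gram matrix

eigvals = np.linalg.eigvalsh(K)               # PSD: no negative eigenvalues
```

    Points from the same cluster get kernel values near 1 and points from different clusters near 0, so linear algorithms applied through K recover the nonlinear cluster structure.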

  17. 3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.

    Science.gov (United States)

    Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel

    2008-01-01

    A 3D weighting scheme has been proposed previously to reconstruct images from both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computational resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide superior image quality for diagnostic imaging in clinical applications. With the availability of image reconstruction engines of increasing computational power, it is believed that the pixel-driven 3D weighting will become dominant in state-of-the-art volumetric CT scanners across clinical applications.

  18. Improved Covariance Driven Blind Subspace Identification Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhiyi; FAN Jiangling; HUA Hongxing

    2006-01-01

    An improved covariance-driven subspace identification method is presented to identify weakly excited modes. In this method, the traditional Hankel matrix is replaced by a reformed one to enhance the identifiability of weak characteristics. The improved Hankel matrix reinforces the robustness of eigenparameter estimation to noise contamination. In combination with the component energy index (CEI), which indicates the vibration intensity of signal components, an alternative stabilization diagram is adopted to effectively separate spurious and physical modes. Simulation of a multiple-degree-of-freedom vibration system and an experiment on a frame structure subject to wind excitation are presented to demonstrate the improvement of the proposed blind method. The performance of this blind method is assessed in terms of its capability in extracting the weak modes as well as the accuracy of estimated parameters. The results have shown that the proposed blind method gives a better estimation of the weak modes from response signals of small signal-to-noise ratio (SNR) and gives a reliable separation of spurious and physical estimates.
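
    The covariance-driven starting point — a Hankel matrix of output covariances factorized by SVD, with the number of significant singular values indicating model order — can be sketched for a single noisy mode. The paper's reformed Hankel matrix and CEI are not reproduced here; this is the traditional construction it improves on:

```python
import numpy as np

def cov_hankel(y, n_cols, n_rows):
    """Hankel matrix of output covariances R_i = E[y(k+i) y(k)] — the
    starting point of covariance-driven subspace identification."""
    N = len(y)
    R = [np.mean(y[i:] * y[:N - i]) for i in range(1, n_cols + n_rows)]
    return np.array([[R[i + j] for j in range(n_cols)] for i in range(n_rows)])

rng = np.random.default_rng(0)
k = np.arange(4000)
y = np.sin(0.3 * k) + 0.1 * rng.normal(size=k.size)   # one lightly noisy mode

H = cov_hankel(y, n_cols=10, n_rows=10)
s = np.linalg.svd(H, compute_uv=False)
# a single real mode contributes a rank-2 block, so s[2] is far below s[1];
# weakly excited modes shrink this gap, motivating the paper's reformed matrix
```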

  19. Heart imaging method

    Science.gov (United States)

    Collins, H. Dale; Gribble, R. Parks; Busse, Lawrence J.

    1991-01-01

    A method for providing an image of the human heart's electrical system derives time-of-flight data from an array of EKG electrodes and this data is transformed into phase information. The phase information, treated as a hologram, is reconstructed to provide an image in one or two dimensions of the electrical system of the functioning heart.

  20. Image-driven cardiac left ventricle segmentation for the evaluation of multiview fused real-time 3-dimensional echocardiography images.

    Science.gov (United States)

    Rajpoot, Kashif; Noble, J Alison; Grau, Vicente; Szmigielski, Cezary; Becher, Harald

    2009-01-01

    Real-time 3-dimensional echocardiography (RT3DE) permits the acquisition and visualization of the beating heart in 3D. Despite a number of efforts to automate left ventricle (LV) delineation from RT3DE images, this remains a challenging problem due to the poor quality of the acquired images, which usually contain missing anatomical information and high speckle noise. Recently, there have been efforts to improve image quality and anatomical definition by acquiring multiple single-view RT3DE images with small probe movements and fusing them together after alignment. In this work, we evaluate the quality of the multiview fused images using an image-driven semiautomatic LV segmentation method. The segmentation method is based on an edge-driven level set framework, where the edges are extracted using a local-phase inspired feature detector for low-contrast echocardiography boundaries. This fully image-driven segmentation method is applied to the evaluation of end-diastolic (ED) and end-systolic (ES) single-view and multiview fused images. Experiments were conducted on 17 cases, and the results show that multiview fused images have better image segmentation quality, but large failures were observed on ED (88.2%) and ES (58.8%) single-view images.

  1. Surface driven biomechanical breast image registration

    Science.gov (United States)

    Eiben, Björn; Vavourakis, Vasileios; Hipwell, John H.; Kabus, Sven; Lorenz, Cristian; Buelow, Thomas; Williams, Norman R.; Keshtgar, M.; Hawkes, David J.

    2016-03-01

    Biomechanical modelling enables large deformation simulations of breast tissues under different loading conditions to be performed. Such simulations can be utilised to transform prone Magnetic Resonance (MR) images into a different patient position, such as upright or supine. We present a novel integration of biomechanical modelling with a surface registration algorithm which optimises the unknown material parameters of a biomechanical model and performs a subsequent regularised surface alignment. This allows deformations induced by effects other than gravity, such as those due to contact of the breast and MR coil, to be reversed. Correction displacements are applied to the biomechanical model enabling transformation of the original pre-surgical images to the corresponding target position. The algorithm is evaluated for the prone-to-supine case using prone MR images and the skin outline of supine Computed Tomography (CT) scans for three patients. A mean target registration error (TRE) of 10.9 mm for internal structures is achieved. For the prone-to-upright scenario, an optical 3D surface scan of one patient is used as a registration target and the nipple distances after alignment between the transformed MRI and the surface are 10.1 mm and 6.3 mm respectively.
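
    The target registration error (TRE) reported above is simply the mean distance between corresponding landmarks after registration; a minimal sketch with invented landmark coordinates:

```python
import numpy as np

def target_registration_error(transformed_pts, target_pts):
    """Mean Euclidean distance (e.g. in mm) between corresponding landmarks
    after registration -- the TRE figure quoted in the abstract."""
    return np.mean(np.linalg.norm(transformed_pts - target_pts, axis=1))

# toy example: three landmarks, each off by 3 mm along one axis
moved  = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
target = moved + np.array([3.0, 0.0, 0.0])
tre = target_registration_error(moved, target)   # 3.0 mm
```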

  2. Educational Accountability: A Qualitatively Driven Mixed-Methods Approach

    Science.gov (United States)

    Hall, Jori N.; Ryan, Katherine E.

    2011-01-01

    This article discusses the importance of mixed-methods research, in particular the value of qualitatively driven mixed-methods research for quantitatively driven domains like educational accountability. The article demonstrates the merits of qualitative thinking by describing a mixed-methods study that focuses on a middle school's system of…

  3. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
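
    The CCD reduction steps listed in the contents (bias subtraction, dark subtraction, flat fielding) combine into one standard formula; a hypothetical numpy sketch with noiseless synthetic frames:

```python
import numpy as np

def calibrate_ccd(raw, bias, dark, flat):
    """Standard CCD reduction: subtract bias and dark current, then divide
    by the normalised flat field to remove pixel-to-pixel sensitivity."""
    flat_norm = flat / np.median(flat)
    return (raw - bias - dark) / flat_norm

shape = (16, 16)
bias = np.full(shape, 300.0)        # electronic offset (counts)
dark = np.full(shape, 20.0)         # thermal signal (counts)
flat = np.ones(shape)
flat[:, :8] = 0.8                   # left half of the chip is less sensitive
sky = 1000.0                        # uniform illumination
raw = bias + dark + sky * flat      # noiseless toy science frame

cal = calibrate_ccd(raw, bias, dark, flat)   # sensitivity pattern removed
```

    After calibration the frame is uniform again, since the flat division cancels the sensitivity pattern imprinted on the sky signal.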

  4. C-arm technique using distance driven method for nephrolithiasis and kidney stones detection

    Science.gov (United States)

    Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun

    2016-04-01

    Distance-driven is a state-of-the-art method used for reconstruction in x-ray techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and short examination time. This paper is a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum-likelihood expectation maximization (MLEM). Distance-driven is an efficient method that has low computational cost and is free of the artifacts associated with other methods such as ray-driven and pixel-driven. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
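
    Of the two reconstruction methods named, MLEM has a particularly compact multiplicative update. The sketch below uses a random dense matrix as the system model — not a distance-driven projector — purely to illustrate the iteration on a toy, noiseless problem:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood expectation maximization:
    x <- x * A^T(y / Ax) / A^T 1.  The multiplicative form keeps x >= 0."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)      # measured / predicted projections
        x *= (A.T @ ratio) / sens
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 20))                          # toy non-negative system matrix
x_true = rng.random(20) * 10                      # unknown non-negative object
y = A @ x_true                                    # noiseless projection data
x_rec = mlem(A, y)                                # converges toward x_true
```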

  5. Image-driven constitutive modeling of myocardial fibrosis

    Science.gov (United States)

    Wang, Vicky Y.; Niestrawska, Justyna A.; Wilson, Alexander J.; Sands, Gregory B.; Young, Alistair A.; LeGrice, Ian J.; Nash, Martyn P.

    2016-05-01

    Myocardial fibrosis is a pathological process that occurs during heart failure (HF). It involves microstructural remodeling of normal myocardial tissue, and consequent changes in both cardiac geometry and function. The role of myocardial structural remodeling in the progression of HF remains poorly understood. We propose a constitutive modeling framework, informed by high-resolution images of cardiac tissue structure, to model the mechanical response of normal and fibrotic myocardium. This image-driven constitutive modeling approach allows us to better reproduce and understand the relationship between structural and functional remodeling of ventricular myocardium during HF.

  6. Universal Image Steganalytic Method

    Directory of Open Access Journals (Sweden)

    V. Banoci

    2014-12-01

    In the paper we introduce a new universal steganalytic method for the JPEG file format that detects well-known as well as newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature-Based Steganalysis (FBS) was employed in order to identify statistical changes caused by embedding secret data into the original image. The steganalyzer uses Support Vector Machine (SVM) classification to train a model that is later used to distinguish between a clean (cover) and a steganographic image. The aim of the paper was to analyze the variation in detection accuracy (ACR) when detecting steganographic algorithms such as F5, Outguess, Model-Based Steganography without deblocking, and JP Hide and Seek, which represent generally used steganographic tools. A comparison of four feature vectors of different lengths, FBS(22), FBS(66), FBS(274) and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.

  7. Photoacoustic imaging driven by an interstitial irradiation source

    Directory of Open Access Journals (Sweden)

    Trevor Mitcham

    2015-06-01

    Full Text Available Photoacoustic (PA imaging has shown tremendous promise in providing valuable diagnostic and therapy-monitoring information in select clinical procedures. Many of these pursued applications, however, have been relatively superficial due to difficulties with delivering light deep into tissue. To address this limitation, this work investigates generating a PA image using an interstitial irradiation source with a clinical ultrasound (US system, which was shown to yield improved PA signal quality at distances beyond 13 mm and to provide improved spectral fidelity. Additionally, interstitially driven multi-wavelength PA imaging was able to provide accurate spectra of gold nanoshells and deoxyhemoglobin in excised prostate and liver tissue, respectively, and allowed for clear visualization of a wire at 7 cm in excised liver. This work demonstrates the potential of using a local irradiation source to extend the depth capabilities of future PA imaging techniques for minimally invasive interventional radiology procedures.

  8. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  9. An assessment of user-driven innovation methods

    DEFF Research Database (Denmark)

    Jacobsen, Alexia; Lassen, Astrid Heidemann; Gorm Hansen, Katrine

    This publication serves to test a number of methods that have been presented in the ‘method-graph’, which was created in connection with Project InnoDoors. The primary function of the test will be to further improve the ‘method-graph’ by refining and advancing some of the user-driven methods that...

  10. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  11. Image registration method for medical image sequences

    Science.gov (United States)

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  12. Dynamic Data Driven Methods for Self-aware Aerospace Vehicles

    Science.gov (United States)

    2015-04-08

    AFRL-OSR-VA-TR-2015-0127. Dynamic Data Driven Methods for Self-aware Aerospace Vehicles: final report, Karen E. Willcox, Massachusetts Institute of Technology, grant FA9550-11-1-0339.

  13. A risk management approach for imaging biomarker-driven clinical trials in oncology.

    Science.gov (United States)

    Liu, Yan; deSouza, Nandita M; Shankar, Lalitha K; Kauczor, Hans-Ulrich; Trattnig, Siegfried; Collette, Sandra; Chiti, Arturo

    2015-12-01

    Imaging has steadily evolved in clinical cancer research as a result of improved conventional imaging methods and the innovation of new functional and molecular imaging techniques. Despite this evolution, the design and data quality derived from imaging within clinical trials are not ideal, and gaps remain owing to a paucity of optimised methods, constraints on trial operational support, and scarce resources. Difficulties associated with integrating imaging biomarkers into trials have been neglected compared with inclusion of tissue and blood biomarkers, largely because of inherent challenges in the complexity of imaging technologies, safety issues related to new imaging contrast media, standardisation of image acquisition across multivendor platforms, and various postprocessing options available with advanced software. Ignorance of these pitfalls directly affects the quality of the imaging read-out, leading to trial failure, particularly when imaging is a primary endpoint. Therefore, we propose a practical risk-based framework and recommendations for trials driven by imaging biomarkers, which allow identification of risks at trial initiation to better allocate resources and prioritise key tasks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. CT segmentation of dental shapes by anatomy-driven reformation imaging and B-spline modelling.

    Science.gov (United States)

    Barone, S; Paoli, A; Razionale, A V

    2016-06-01

    Dedicated imaging methods are among the most important tools of modern computer-aided medical applications. In the last few years, cone beam computed tomography (CBCT) has gained popularity in digital dentistry for 3D imaging of jawbones and teeth. However, the anatomy of a maxillofacial region complicates the assessment of tooth geometry and anatomical location when using standard orthogonal views of the CT data set. In particular, a tooth is defined by a sub-region, which cannot be easily separated from surrounding tissues by only considering pixel grey-intensity values. For this reason, an image enhancement is usually necessary in order to properly segment tooth geometries. In this paper, an anatomy-driven methodology to reconstruct individual 3D tooth anatomies by processing CBCT data is presented. The main concept is to generate a small set of multi-planar reformation images along significant views for each target tooth, driven by the individual anatomical geometry of a specific patient. The reformation images greatly enhance the clearness of the target tooth contours. A set of meaningful 2D tooth contours is extracted and used to automatically model the overall 3D tooth shape through a B-spline representation. The effectiveness of the methodology has been verified by comparing some anatomy-driven reconstructions of anterior and premolar teeth with those obtained by using standard tooth segmentation tools. Copyright © 2015 John Wiley & Sons, Ltd.
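The final modelling step above, turning extracted 2D tooth contours into a smooth B-spline representation, can be illustrated with a uniform cubic B-spline evaluator. This is a generic sketch, not the paper's fitting procedure: the knot placement and contour-extraction steps are omitted, and the control polygon below is a hypothetical stand-in for extracted contour points.

```python
import numpy as np

def cubic_bspline_curve(ctrl, samples_per_seg=20):
    """Evaluate a uniform cubic B-spline curve from 2D control points.

    Each span blends four consecutive control points through the
    standard uniform cubic basis matrix; the result is a smooth curve
    lying inside the convex hull of the control polygon.
    """
    M = np.array([[-1, 3, -3, 1],
                  [3, -6, 3, 0],
                  [-3, 0, 3, 0],
                  [1, 4, 1, 0]]) / 6.0
    t = np.linspace(0, 1, samples_per_seg, endpoint=False)
    T = np.stack([t**3, t**2, t, np.ones_like(t)], axis=1)
    pts = []
    for i in range(len(ctrl) - 3):
        pts.append(T @ M @ ctrl[i:i + 4])
    return np.vstack(pts)

# Hypothetical control polygon standing in for extracted contour points.
ctrl = np.array([[0.0, 0.0], [1, 2], [3, 3], [5, 2], [6, 0], [4, -1]])
curve = cubic_bspline_curve(ctrl)
```

The convex-hull property is what makes B-splines forgiving of noisy contour points: outliers pull the curve only locally.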

  15. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to:
    • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities
    • Molecular Imaging Enhancement
    • Data Analysis of Clinical & Pre-clinical Molecular Imaging
    • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.)
    • Machine Learning and Data Mining in Molecular Imaging
    Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  16. Multimodal Task-Driven Dictionary Learning for Image Classification.

    Science.gov (United States)

    Bahrampour, Soheil; Nasrabadi, Nasser M; Ray, Asok; Jenkins, William Kenneth

    2016-01-01

    Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are mostly developed for single-modality scenarios, recent studies have demonstrated the advantages of feature-level fusion based on the joint sparse representation of the multimodal inputs. In this paper, we propose a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information. In this task-driven formulation, the multimodal dictionaries are learned simultaneously with their corresponding classifiers. The resulting multimodal dictionaries can generate discriminative latent features (sparse codes) from the data that are optimized for a given task such as binary or multiclass classification. Moreover, we present an extension of the proposed formulation using a mixed joint and independent sparsity prior, which facilitates more flexible fusion of the modalities at feature level. The efficacy of the proposed algorithms for multimodal classification is illustrated on four different applications--multimodal face recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared with the counterpart reconstructive-based dictionary learning algorithms, the task-driven formulations are more computationally efficient in the sense that they can be equipped with more compact dictionaries and still achieve superior performance.

  17. The edge-driven dual-bootstrap iterative closest point algorithm for multimodal retinal image registration

    Science.gov (United States)

    Tsai, Chia-Ling; Li, Chun-Yi; Yang, Gehua

    2008-03-01

    Red-free (RF) fundus retinal images and fluorescein angiogram (FA) sequence are often captured from an eye for diagnosis and treatment of abnormalities of the retina. With the aid of multimodal image registration, physicians can combine information to make accurate surgical planning and quantitative judgment of the progression of a disease. The goal of our work is to jointly align the RF images with the FA sequence of the same eye in a common reference space. Our work is inspired by Generalized Dual-Bootstrap Iterative Closest Point (GDB-ICP), which is a fully-automatic, feature-based method using structural similarity. GDB-ICP rank-orders Lowe keypoint matches and refines the transformation computed from each keypoint match in succession. Although GDB-ICP has been shown to be robust to image pairs with illumination difference, the performance is not satisfactory for multimodal and some FA pairs which exhibit substantial non-linear illumination changes. Our algorithm, named Edge-Driven DBICP, modifies the generation of keypoint matches for initialization by extracting the Lowe keypoints from the gradient magnitude image, and enriching the keypoint descriptor with global-shape context using the edge points. Our dataset consists of 61 randomly selected pathological sequences, each on average having two RF and 13 FA images. There are a total of 4985 image pairs, out of which 1323 are multimodal pairs. Edge-Driven DBICP successfully registered 93% of all pairs, and 82% of multimodal pairs, whereas GDB-ICP registered 80% and 40%, respectively. Regarding registration of the whole image sequence in a common reference space, Edge-Driven DBICP succeeded in 60 sequences, which is a 26% improvement over GDB-ICP.

  18. Data-driven forward model inference for EEG brain imaging

    DEFF Research Database (Denmark)

    Hansen, Sofie Therese; Hauberg, Søren; Hansen, Lars Kai

    2016-01-01

    Electroencephalography (EEG) is a flexible and accessible tool with excellent temporal resolution but with a spatial resolution hampered by volume conduction. Reconstruction of the cortical sources of measured EEG activity partly alleviates this problem and effectively turns EEG into a brain......-of-concept study, we show that, even when anatomical knowledge is unavailable, a suitable forward model can be estimated directly from the EEG. We propose a data-driven approach that provides a low-dimensional parametrization of head geometry and compartment conductivities, built using a corpus of forward models....... Combined with only a recorded EEG signal, we are able to estimate both the brain sources and a person-specific forward model by optimizing this parametrization. We thus not only solve an inverse problem, but also optimize over its specification. Our work demonstrates that personalized EEG brain imaging...

  20. EEG/fMRI fusion based on independent component analysis: integration of data-driven and model-driven methods.

    Science.gov (United States)

    Lei, Xu; Valdes-Sosa, Pedro A; Yao, Dezhong

    2012-09-01

    Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) provide complementary noninvasive information of brain activity, and EEG/fMRI fusion can achieve higher spatiotemporal resolution than each modality separately. This paper focuses on independent component analysis (ICA)-based EEG/fMRI fusion. In order to appreciate the issues, we first describe the potential and limitations of the developed fusion approaches: fMRI-constrained EEG imaging, EEG-informed fMRI analysis, and symmetric fusion. We then outline some newly developed hybrid fusion techniques using ICA and the combination of data-/model-driven methods, with special mention of the spatiotemporal EEG/fMRI fusion (STEFF). Finally, we discuss the current trend in methodological development and the existing limitations for extrapolating neural dynamics.
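The ICA step at the heart of these fusion methods can be illustrated on a toy two-source problem. The sketch below whitens the mixed signals and then grid-searches the rotation angle that maximizes non-Gaussianity (squared excess kurtosis); real EEG/fMRI pipelines use many more sources and FastICA-style fixed-point updates, so this shows only the core idea.

```python
import numpy as np

def separate_two_sources(mixed):
    """Toy ICA: whiten two mixed signals, then find the rotation of the
    whitened data that maximizes non-Gaussianity (sum of squared excess
    kurtosis). After whitening, the sources differ from the data only by
    a rotation (up to sign/permutation), so a 1D angle search suffices."""
    x = mixed - mixed.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(x))
    white = (vecs / np.sqrt(vals)).T @ x          # identity covariance
    best_score, best_y = -1.0, white
    for theta in np.linspace(0.0, np.pi / 2, 901):
        c, s = np.cos(theta), np.sin(theta)
        y = np.array([[c, s], [-s, c]]) @ white
        m2 = (y**2).mean(axis=1)
        kurt = (y**4).mean(axis=1) / m2**2 - 3.0  # excess kurtosis per row
        score = float((kurt**2).sum())
        if score > best_score:
            best_score, best_y = score, y
    return best_y

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 2000)
s1 = np.sign(np.sin(3 * t))                       # square wave (sub-Gaussian)
s2 = rng.laplace(size=t.size)                     # spiky (super-Gaussian)
mixed = np.array([[0.7, 0.3], [0.4, 0.6]]) @ np.vstack([s1, s2])
recovered = separate_two_sources(mixed)

def max_abs_corr(sig, sources):
    return max(abs(np.corrcoef(sig, src)[0, 1]) for src in sources)

corr1 = max_abs_corr(s1, recovered)
corr2 = max_abs_corr(s2, recovered)
```

ICA recovers sources only up to sign, scale and permutation, which is why the check uses the maximum absolute correlation against both recovered rows.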

  1. Data-driven forward model inference for EEG brain imaging.

    Science.gov (United States)

    Hansen, Sofie Therese; Hauberg, Søren; Hansen, Lars Kai

    2016-06-13

    Electroencephalography (EEG) is a flexible and accessible tool with excellent temporal resolution but with a spatial resolution hampered by volume conduction. Reconstruction of the cortical sources of measured EEG activity partly alleviates this problem and effectively turns EEG into a brain imaging device. The quality of the source reconstruction depends on the forward model which details head geometry and conductivities of different head compartments. These person-specific factors are complex to determine, requiring detailed knowledge of the subject's anatomy and physiology. In this proof-of-concept study, we show that, even when anatomical knowledge is unavailable, a suitable forward model can be estimated directly from the EEG. We propose a data-driven approach that provides a low-dimensional parametrization of head geometry and compartment conductivities, built using a corpus of forward models. Combined with only a recorded EEG signal, we are able to estimate both the brain sources and a person-specific forward model by optimizing this parametrization. We thus not only solve an inverse problem, but also optimize over its specification. Our work demonstrates that personalized EEG brain imaging is possible, even when the head geometry and conductivities are unknown.
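The corpus-based parametrization idea can be sketched with plain PCA: stack vectorized forward models, keep the top principal components, and represent a new head's forward model by a short coefficient vector. The "forward models" below are random low-rank surrogates, not real BEM/FEM lead fields, and the joint optimization with source estimation from the paper is omitted.

```python
import numpy as np

# Build a synthetic corpus of vectorized forward models that share a
# 5-dimensional latent structure (a stand-in for anatomical variation).
rng = np.random.default_rng(0)
n_models, n_sensors, n_sources = 40, 16, 25
basis = rng.normal(size=(5, n_sensors * n_sources))
coeffs = rng.normal(size=(n_models, 5))
corpus = coeffs @ basis + 0.01 * rng.normal(size=(n_models, n_sensors * n_sources))

# Low-dimensional parametrization: mean + top-5 principal directions.
mean = corpus.mean(axis=0)
U, S, Vt = np.linalg.svd(corpus - mean, full_matrices=False)
components = Vt[:5]                     # orthonormal rows

# A specific head's forward model is now just 5 numbers.
target = corpus[7]
params = components @ (target - mean)
reconstruction = mean + params @ components
rel_err = np.linalg.norm(reconstruction - target) / np.linalg.norm(target)
```

In the paper these few parameters are what get optimized against the recorded EEG, instead of the full (and unavailable) head geometry and conductivities.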

  2. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    Directory of Open Access Journals (Sweden)

    Kaiming Nie

    2016-01-01

    Full Text Available This paper presents a full parallel event driven readout method which is implemented in an area array single-photon avalanche diode (SPAD image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM. The sensor only records and reads out effective time and position information by adopting the full parallel event driven readout method, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs are used to quantize the time of photons’ arrival, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to assess the pile-up effect induced by the readout method. The sensor’s resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip’s output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5–20 ns, and the average error of estimated fluorescence lifetimes is below 1% by employing the center-of-mass method (CMM) to estimate lifetimes.
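The pile-up effect studied in those Monte Carlo simulations can be reproduced with a short simulation of our own: when each excitation cycle can record only the first arriving photon, high photon rates bias the center-of-mass lifetime estimate low. The rates, lifetime and cycle counts below are illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_flim_pileup(rate, lifetime, n_cycles, rng):
    """Monte Carlo of single-photon pile-up in time-correlated counting.

    Per excitation cycle, the number of emitted photons is Poisson(rate)
    and each photon's arrival time is exponential with the true lifetime.
    The detector keeps only the FIRST photon of each cycle, so at high
    rates the recorded times (and the center-of-mass lifetime estimate)
    are biased toward shorter values.
    """
    counts = rng.poisson(rate, size=n_cycles)
    first_times = []
    for n in counts:
        if n > 0:
            first_times.append(rng.exponential(lifetime, size=n).min())
    return float(np.mean(first_times))   # center-of-mass (mean-time) estimate

rng = np.random.default_rng(0)
tau = 10.0                                         # true lifetime, ns
est_low = simulate_flim_pileup(0.05, tau, 20000, rng)   # sparse photons
est_high = simulate_flim_pileup(2.0, tau, 20000, rng)   # severe pile-up
```

Keeping the mean detected rate well below one photon per cycle is the classic mitigation, which is also why event-driven readout that handles high frame rates matters.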

  3. An Image Registration Method for Colposcopic Images

    Directory of Open Access Journals (Sweden)

    Efrén Mezura-Montes

    2013-01-01

    sequence and a division of such image into small windows. A search process is then carried out to find the window with the highest affinity in each image of the sequence and replace it with the window in the reference image. The affinity value is based on polynomial approximation of the time series computed and the search is bounded by a search radius which defines the neighborhood of each window. The proposed approach is tested in ten 310-frame real cases in two experiments: the first one to determine the best values for the window size and the search radius and the second one to compare the best obtained results with respect to four registration methods found in the specialized literature. The obtained results show a robust and competitive performance of the proposed approach, with significantly lower runtime than the compared methods.

  4. A sparsity-driven approach for joint SAR imaging and phase error correction.

    Science.gov (United States)

    Önhon, N Özben; Cetin, Müjdat

    2012-04-01

    Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this paper is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in SAR data, which cause defocusing of the reconstructed images. Particularly focusing on imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. Phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm, each iteration of which consists of consecutive steps of image formation and model error correction. Experimental results show the effectiveness of the approach for various types of phase errors, as well as the improvements that it provides over existing techniques for model error compensation in SAR.
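The nonquadratic (l1) regularization that drives the sparsity can be illustrated with the image-formation half of such an alternation: an iterative soft-thresholding (ISTA) solver recovering a sparse scene from underdetermined linear measurements. The operator and scene below are synthetic, and the paper's phase-error update of the observation model is omitted.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for the sparsity-driven formation step:
    minimize 0.5 * ||y - A x||^2 + lam * ||x||_1.

    Each iteration takes a gradient step on the quadratic data term and
    then soft-thresholds, which shrinks small coefficients to exactly zero.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Sparse scene, underdetermined Gaussian measurement operator.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120)) / np.sqrt(60)
x_true = np.zeros(120)
x_true[[10, 50, 90]] = [3.0, -2.0, 4.0]
y = A @ x_true
x_hat = ista(A, y)
support = np.argsort(np.abs(x_hat))[-3:]
```

In the paper this solve alternates with a phase-error (model) update of A; the sketch shows why sparsity makes the reconstruction well posed despite having fewer measurements than unknowns.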

  5. Direct Imaging of Laser-driven Ultrafast Molecular Rotation.

    Science.gov (United States)

    Mizuse, Kenta; Fujimoto, Romu; Mizutani, Nobuo; Ohshima, Yasuhiro

    2017-02-04

    We present a method for visualizing laser-induced, ultrafast molecular rotational wave packet dynamics. We have developed a new 2-dimensional Coulomb explosion imaging setup in which a hitherto-impractical camera angle is realized. In our imaging technique, diatomic molecules are irradiated with a circularly polarized strong laser pulse. The ejected atomic ions are accelerated perpendicularly to the laser propagation. The ions lying in the laser polarization plane are selected through the use of a mechanical slit and imaged with a high-throughput, 2-dimensional detector installed parallel to the polarization plane. Because a circularly polarized (isotropic) Coulomb exploding pulse is used, the observed angular distribution of the ejected ions directly corresponds to the squared rotational wave function at the time of the pulse irradiation. To create a real-time movie of molecular rotation, the present imaging technique is combined with a femtosecond pump-probe optical setup in which the pump pulses create unidirectionally rotating molecular ensembles. Due to the high image throughput of our detection system, the pump-probe experimental condition can be easily optimized by monitoring a real-time snapshot. As a result, the quality of the observed movie is sufficiently high for visualizing the detailed wave nature of motion. We also note that the present technique can be implemented in existing standard ion imaging setups, offering a new camera angle or viewpoint for the molecular systems without the need for extensive modification.

  6. An exact management method for demand driven, industrial operations

    OpenAIRE

    Puikko, J. (Janne)

    2010-01-01

    Abstract The framing in terms of demand-driven operations follows from the operations research modelling approach. The modelling approach requires continuous regressors and an independent response factor. The demand, as an operating factor, is treated as the independent response factor in relation to the continuous regressors. The method is validated through several longitudinal case studies covering local, global and international industrial operations. The examined operational scope is from c...

  7. A purely data driven method for European option valuation

    Institute of Scientific and Technical Information of China (English)

    HUANG Guang-hui; WAN Jian-ping

    2006-01-01

    An alternative option pricing method is proposed based on a random walk market model. The minimal entropy martingale measure, which admits no arbitrage opportunity in the market, is deduced for this market model and is used as the pricing measure to evaluate European call options by a Monte Carlo simulation method. The proposed method is a purely data-driven valuation method without any distributional assumption about the price process of the underlying asset. The performance of the proposed method is compared with the canonical valuation method and the historical volatility-based Black-Scholes method in an artificial Black-Scholes world. The simulation results show that the proposed method has merits, and is valuable to financial engineering.
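The data-driven flavour of the method can be sketched with a bootstrap Monte Carlo pricer: historical log-returns are resampled directly, with no distributional assumption about the underlying. The minimal entropy martingale measure itself is not reproduced here; a simple recentering stands in for the martingale adjustment, and the "historical" returns are synthetic.

```python
import numpy as np

def empirical_mc_call_price(returns, s0, strike, n_days, r=0.0,
                            n_paths=20000, seed=0):
    """Price a European call by bootstrapping historical daily log-returns.

    Purely data-driven: price paths are built by resampling observed
    returns rather than simulating a parametric model. A crude recentering
    makes the one-day expected gross return equal exp(r) (a toy stand-in
    for the paper's minimal-entropy martingale correction).
    """
    rng = np.random.default_rng(seed)
    sampled = rng.choice(returns, size=(n_paths, n_days), replace=True)
    sampled = sampled - np.log(np.mean(np.exp(sampled))) + r
    terminal = s0 * np.exp(sampled.sum(axis=1))
    payoff = np.maximum(terminal - strike, 0.0)
    return float(np.exp(-r * n_days) * payoff.mean())

# Synthetic stand-in for a historical return series (~3 years of days).
rng = np.random.default_rng(1)
hist_returns = rng.normal(0.0005, 0.01, size=750)
price = empirical_mc_call_price(hist_returns, s0=100.0, strike=100.0, n_days=30)
```

With ~1% daily volatility, a 30-day at-the-money call should price at roughly 2% of spot, which the bootstrap recovers without ever fitting a distribution.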

  8. Numerical methods for image registration

    CERN Document Server

    Modersitzki, Jan

    2003-01-01

    Based on the author's lecture notes and research, this well-illustrated and comprehensive text is one of the first to provide an introduction to image registration with particular emphasis on numerical methods in medical imaging. Ideal for researchers in industry and academia, it is also a suitable study guide for graduate mathematicians, computer scientists, engineers, medical physicists, and radiologists.Image registration is utilised whenever information obtained from different viewpoints needs to be combined or compared and unwanted distortion needs to be eliminated. For example, CCTV imag

  9. Modern methods of image reconstruction.

    Science.gov (United States)

    Puetter, R. C.

    The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. Then, the author discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and is shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages and image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
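Of the maximum-likelihood methods surveyed, the Richardson-Lucy iteration is the simplest to sketch: the classical EM update for deconvolution under Poisson noise. This is the textbook baseline, not the author's pixon method; the 1D signal and Gaussian point-spread function below are synthetic.

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=100):
    """Richardson-Lucy deconvolution (1D).

    The EM/maximum-likelihood iteration for Poisson data:
        u <- u * correlate(data / convolve(u, psf), psf)
    Correlation with the PSF is done by convolving with its flip.
    Positivity of the estimate is preserved automatically.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    u = np.full_like(data, data.mean())
    for _ in range(n_iter):
        conv = np.convolve(u, psf, mode="same")
        ratio = data / np.maximum(conv, 1e-12)
        u = u * np.convolve(ratio, psf_flip, mode="same")
    return u

# Blur a sparse "image" with a Gaussian PSF, then restore it.
truth = np.zeros(64)
truth[20], truth[40] = 5.0, 3.0
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf)
err_blurred = np.linalg.norm(blurred - truth)
err_restored = np.linalg.norm(restored - truth)
```

On noiseless data the iteration sharpens the blurred spikes back toward the truth; on noisy data it must be stopped early or regularized, which is exactly the gap the Bayesian-prior and pixon machinery in the review addresses.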

  10. Twin-Foucault imaging method

    Science.gov (United States)

    Harada, Ken

    2012-02-01

    A method of Lorentz electron microscopy, which enables simultaneous observation of two Foucault images by using an electron biprism instead of an objective aperture, was developed. The electron biprism is installed between the two electron beams deflected by 180° magnetic domains. A potential applied to the biprism deflects the two electron beams further, and two Foucault images with reversed contrast are then obtained in one visual field. The twin Foucault images make it possible to extract the magnetic domain structures and to reconstruct an ordinary electron micrograph. The developed Foucault method was demonstrated on a 180° domain structure of the manganite La0.825Sr0.175MnO3.
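The claim that the twin images both map the domains and recover an ordinary micrograph can be checked with a toy intensity model: assume each Foucault image passes the ordinary intensity modulated by the local domain sign. The 0.8 contrast factor and the domain pattern are arbitrary illustrative choices.

```python
import numpy as np

# Toy model of twin Foucault imaging: two simultaneously recorded images
# with reversed magnetic contrast. Their sum recovers the ordinary
# (domain-free) micrograph; the sign of their difference maps the domains.
rng = np.random.default_rng(0)
ordinary = rng.random((32, 32)) + 1.0            # ordinary micrograph (positive)
domains = np.ones((32, 32))
domains[:, 16:] = -1.0                           # two 180-degree domains

contrast = 0.8                                   # assumed modulation depth
foucault_a = 0.5 * ordinary * (1 + contrast * domains)  # bright where d = +1
foucault_b = 0.5 * ordinary * (1 - contrast * domains)  # reversed contrast

reconstructed = foucault_a + foucault_b          # ordinary image again
domain_map = np.sign(foucault_a - foucault_b)    # +/-1 domain pattern
```

The point of the biprism arrangement is that both images exist in one exposure, so this sum/difference needs no registration between separately acquired frames.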

  11. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    Science.gov (United States)

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  12. A defect-driven diagnostic method for machine tool spindles.

    Science.gov (United States)

    Vogl, Gregory W; Donmez, M Alkan

    2015-01-01

    Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition.

  13. Simulation of electrically driven jet using Chebyshev collocation method

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The model of electrically driven jet is governed by a series of quasi 1D dimensionless partial differential equations (PDEs). Following the method of lines, the Chebyshev collocation method is employed to discretize the PDEs and obtain a system of differential-algebraic equations (DAEs). By differentiating constraints in the DAEs twice, the system is transformed into a set of ordinary differential equations (ODEs) with invariants. Then the implicit differential equation solver "ddaskr" is used to solve the ODEs and ...
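The collocation discretization itself is compact enough to sketch: the standard Chebyshev differentiation matrix (after Trefethen's Spectral Methods in MATLAB) maps nodal values to nodal derivatives and is exact for polynomials up to the node count, which is what makes it attractive for the jet equations. The jet model and DAE solve are not reproduced here.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix.

    Returns (D, x) with x the n+1 points cos(pi*i/n) on [-1, 1] and D the
    (n+1)x(n+1) matrix such that D @ f(x) approximates f'(x) at the
    points, exactly for polynomials of degree <= n.
    """
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c = c * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D = D - np.diag(D.sum(axis=1))                    # diagonal by row sums
    return D, x

D, x = cheb(8)
deriv = D @ x**3                  # spectral derivative of f(x) = x^3
max_err = np.abs(deriv - 3 * x**2).max()
```

Applying D along the jet's axial coordinate turns each spatial derivative in the PDEs into a matrix-vector product, which is what reduces the system to DAEs in time.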

  14. Improved Digital Image Correlation method

    Science.gov (United States)

    Mudassar, Asloob Ahmad; Butt, Saira

    2016-12-01

    Digital Image Correlation (DIC) is a powerful technique which is used to correlate two image segments to determine the similarity between them. A correlation image is formed which gives a peak known as the correlation peak. If the two image segments are identical the peak is known as the auto-correlation peak, otherwise it is known as the cross-correlation peak. The location of the peak in a correlation image gives the relative displacement between the two image segments. Use of DIC for in-plane displacement and deformation measurements in Electronic Speckle Photography (ESP) is well known. In ESP two speckle images are correlated using DIC and the relative displacement is measured. We present a background review of ESP and describe a technique based on DIC for improved relative measurements, which we regard as the improved DIC method. Simulation and experimental results reveal that the proposed improved-DIC method is superior to the conventional DIC method in two aspects: in resolution and in the availability of a reference position in displacement measurements.
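The correlation-peak search described above is easy to make concrete. Below is a sketch of the integer-pixel step only, using zero-normalized cross-correlation on a synthetic speckle pattern; the paper's improved-DIC refinements (and sub-pixel interpolation) are not reproduced.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size image segments."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def find_displacement(ref, img, top, left, size, search=5):
    """Locate a reference subset inside a deformed image by exhaustive
    ZNCC search over integer shifts; the shift of the correlation peak
    is the measured displacement."""
    subset = ref[top:top + size, left:left + size]
    best_score, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[top + dy:top + dy + size, left + dx:left + dx + size]
            score = zncc(subset, cand)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score

# Synthetic speckle pattern shifted by a known whole-pixel amount.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
deformed = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)
(dy, dx), peak = find_displacement(ref, deformed, top=20, left=20, size=16)
```

In ESP practice the subset search is repeated over a grid of windows to build a full displacement field, with sub-pixel refinement around each integer peak.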

  15. Fast regularized image interpolation method

    Institute of Scientific and Technical Information of China (English)

    Hongchen Liu; Yong Feng; Linjing Li

    2007-01-01

    The regularized image interpolation method is widely used based on the vector interpolation model, in which the down-sampling matrix has very large dimensions, requiring large storage and high computational complexity. In this paper, a fast algorithm for image interpolation based on the tensor product of matrices is presented, which transforms the vector interpolation model into matrix form. The proposed algorithm can greatly reduce the storage requirement and time consumption. The simulation results verify its validity.
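The storage saving behind the tensor-product formulation is easy to demonstrate: for a separable operator, the huge vector-model matrix is the Kronecker product of two small 1D matrices, so one can apply the small matrices to the image directly and never form the big one. The 1D linear-interpolation matrices below are illustrative, not the paper's regularized operator.

```python
import numpy as np

def upsample_matrix(n_in, n_out):
    """1D linear-interpolation matrix mapping n_in samples to n_out samples."""
    A = np.zeros((n_out, n_in))
    pos = np.linspace(0, n_in - 1, n_out)
    for i, p in enumerate(pos):
        j = min(int(p), n_in - 2)
        t = p - j
        A[i, j], A[i, j + 1] = 1 - t, t
    return A

# Separable 2D interpolation. With row-major vectorization,
# (Ar ⊗ Ac) vec(X) equals vec(Ar X Ac^T).
X = np.arange(16.0).reshape(4, 4)           # low-resolution image
Ar, Ac = upsample_matrix(4, 7), upsample_matrix(4, 9)

big = np.kron(Ar, Ac)                       # (63, 16) vector-model operator
y_vector_model = (big @ X.ravel()).reshape(7, 9)
y_tensor_model = Ar @ X @ Ac.T              # same result, small matrices only
diff = np.abs(y_vector_model - y_tensor_model).max()
```

For realistic image sizes the gap is dramatic: interpolating a 512x512 image to 1024x1024 needs two matrices of about 1024x512 each in the tensor form, versus a single (1024*1024)x(512*512) matrix in the vector form.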

  16. A Data-Driven Point Cloud Simplification Framework for City-Scale Image-Based Localization.

    Science.gov (United States)

    Cheng, Wentao; Lin, Weisi; Zhang, Xinfeng; Goesele, Michael; Sun, Ming-Ting

    2017-01-01

    City-scale 3D point clouds reconstructed via structure-from-motion from a large collection of Internet images are widely used in the image-based localization task to estimate the 6-DOF camera pose of a query image. Due to the prohibitive memory footprint of city-scale point clouds, image-based localization is difficult to implement on devices with limited memory resources. Point cloud simplification aims to select a subset of points that achieves localization performance comparable to that of the original point cloud. In this paper, we propose a data-driven point cloud simplification framework that casts the problem as a weighted K-Cover problem and mainly includes two complementary parts. First, a utility-based parameter determination method is proposed to select a reasonable parameter K for K-Cover-based approaches by evaluating the potential of a point cloud for establishing sufficient 2D-3D feature correspondences. Second, we formulate the 3D point cloud simplification problem as a weighted K-Cover problem and propose an adaptive exponential weight function based on the visibility probability of 3D points. Experimental results on three popular datasets demonstrate that the proposed point cloud simplification framework outperforms state-of-the-art methods for the image-based localization application, given a well-predicted parameter in the K-Cover problem.
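The weighted K-Cover selection can be sketched with a plain greedy heuristic; the visibility sets, uniform weights, and the greedy rule here are illustrative simplifications of the paper's adaptive exponential weighting:

```python
def greedy_k_cover(visibility, weights, K):
    """Greedy weighted K-cover: pick points until every image is seen
    by at least K chosen points (or no candidate point remains)."""
    images = set().union(*visibility)
    need = {i: K for i in images}
    chosen, remaining = [], set(range(len(visibility)))
    while any(v > 0 for v in need.values()) and remaining:
        # gain: weighted count of still-needed coverings the point provides
        best = max(remaining,
                   key=lambda p: weights[p] * sum(need[i] > 0 for i in visibility[p]))
        chosen.append(best)
        remaining.discard(best)
        for i in visibility[best]:
            need[i] = max(0, need[i] - 1)
    return chosen

visibility = [{0, 1}, {1, 2}, {0, 2}, {0}]   # toy: point -> images seeing it
chosen = greedy_k_cover(visibility, [1.0, 1.0, 1.0, 1.0], K=1)
```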

  17. Quantitative imaging methods in osteoporosis.

    Science.gov (United States)

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by decreased bone mass and quality, resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD), as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA), are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Furthermore, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis, including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS), information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  18. Dynamic and data-driven classification for polarimetric SAR images

    Science.gov (United States)

    Uhlmann, S.; Kiranyaz, S.; Ince, T.; Gabbouj, M.

    2011-11-01

    In this paper, we introduce dynamic and scalable Synthetic Aperture Radar (SAR) terrain classification based on the Collective Network of Binary Classifiers (CNBC). The CNBC framework is primarily adapted to maximize SAR classification accuracy on dynamically varying databases, where variations may occur at any time in terms of (new) images, classes, features and users' relevance feedback. Whenever a "change" occurs, the CNBC dynamically and "optimally" adapts itself to the change by means of its topology and the underlying evolutionary method, MD PSO. Thanks to its "Divide and Conquer" approach, the CNBC can also support a varying and large set of (PolSAR) features, among which it optimally selects, weighs and fuses the most discriminative ones for a particular class. Each SAR terrain class is discriminated by a dedicated Network of Binary Classifiers (NBC), which encapsulates a set of evolutionary Binary Classifiers (BCs) discriminating the class with a distinctive feature set. Moreover, with each incremental evolution session, new classes/features can be introduced, which signals the CNBC to create new corresponding NBCs and BCs to adapt and scale dynamically to the change. This can in turn be a significant advantage when the current CNBC is used to classify multiple SAR images with similar terrain classes, since no or only minimal (incremental) evolution sessions are needed to adapt it to a new classification problem while using the previously acquired knowledge. We demonstrate our proposed classification approach over several medium- and high-resolution NASA/JPL AIRSAR images applying various polarimetric decompositions. We evaluate and compare the computational complexity and classification accuracy against static Neural Network classifiers. While CNBC classification accuracy can compete with and even surpass them, the computational complexity of the CNBC is significantly lower, as the CNBC body supports high parallelization, making it applicable to grid

  19. Three-dimensional brain magnetic resonance imaging segmentation via knowledge-driven decision theory.

    Science.gov (United States)

    Verma, Nishant; Muralidhar, Gautam S; Bovik, Alan C; Cowperthwaite, Matthew C; Burnett, Mark G; Markey, Mia K

    2014-10-01

    Brain tissue segmentation on magnetic resonance (MR) imaging is a difficult task because of significant intensity overlap between the tissue classes. We present a new knowledge-driven decision theory (KDT) approach that incorporates prior information on the relative extents of intensity overlap between tissue class pairs for volumetric MR tissue segmentation. The proposed approach better handles intensity overlap between tissues without explicitly employing methods for removal of MR image corruptions (such as bias field). Adaptive tissue class priors are employed that combine probabilistic atlas maps with spatial contextual information obtained from Markov random fields to guide tissue segmentation. The energy function is minimized using a variational level-set-based framework, which has shown great promise for MR image analysis. We evaluate the proposed method on two well-established real MR datasets with expert ground-truth segmentations and compare our approach against existing segmentation methods. KDT has low computational complexity and shows better segmentation performance than the other segmentation methods evaluated on these MR datasets.

  20. Data-driven execution of fast multipole methods

    KAUST Repository

    Ltaief, Hatem

    2013-09-17

    Fast multipole methods (FMMs) have O(N) complexity, are compute bound, and require very little synchronization, which makes them a favorable algorithm on next-generation supercomputers. Their most common application is to accelerate N-body problems, but they can also be used to solve boundary integral equations. When the particle distribution is irregular and the tree structure is adaptive, load balancing becomes a non-trivial question. A common strategy for load balancing FMMs is to use the work load from the previous step as weights to statically repartition the next step. The authors discuss in the paper another approach based on data-driven execution to efficiently tackle this challenging load balancing problem. The core idea consists of breaking the most time-consuming stages of the FMM into smaller tasks. The algorithm can then be represented as a directed acyclic graph, where nodes represent tasks and edges represent dependencies among them. The execution of the algorithm is performed by asynchronously scheduling the tasks using the QUARK (queueing and runtime for kernels) runtime environment, such that data dependencies are not violated for numerical correctness purposes. This asynchronous scheduling results in an out-of-order execution. The data-driven FMM execution outperforms the previous strategy and shows linear speedup on a quad-socket quad-core Intel Xeon system. Copyright © 2013 John Wiley & Sons, Ltd.
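The task-graph idea can be sketched with Python's standard-library `graphlib`. The stage names (P2M, M2M, M2L, L2L, L2P) are the usual FMM operators, but this toy graph and its serial "scheduler" loop are ours; a real runtime such as QUARK dispatches the ready tasks to worker threads asynchronously.

```python
from graphlib import TopologicalSorter

# node -> set of prerequisite tasks (toy FMM-style stages)
deps = {
    "P2M_a": set(), "P2M_b": set(),
    "M2M": {"P2M_a", "P2M_b"},   # upward pass needs all particle-to-multipole tasks
    "M2L": {"M2M"},
    "L2L": {"M2L"},
    "L2P": {"L2L"},
}

ts = TopologicalSorter(deps)
ts.prepare()
order = []
while ts.is_active():
    ready = list(ts.get_ready())   # tasks whose dependencies are all satisfied
    order.extend(ready)            # a real runtime would run these concurrently
    ts.done(*ready)
```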

  1. Image Resolution Enhancement via Data-Driven Parametric Models in the Wavelet Space

    OpenAIRE

    2007-01-01

    We present a data-driven, projection-based algorithm which enhances image resolution by extrapolating high-band wavelet coefficients. High-resolution images are reconstructed by alternating projections onto two constraint sets: the observation constraint defined by the given low-resolution image and the prior constraint derived from the training data at the high resolution (HR). Two types of prior constraints are considered: a spatially homogeneous constraint suitable for texture images and p...
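The alternating-projection (POCS) iteration the abstract describes can be illustrated with two toy affine constraint sets in 2D, standing in for the observation and prior constraints:

```python
import numpy as np

# C1 = {(x, y): x = 1}  -- stands in for the observation constraint
# C2 = {(x, y): y = x}  -- stands in for the prior (training-data) constraint
proj_c1 = lambda v: np.array([1.0, v[1]])    # orthogonal projection onto C1
proj_c2 = lambda v: np.full(2, v.mean())     # orthogonal projection onto C2

v = np.array([5.0, -3.0])
for _ in range(50):
    v = proj_c2(proj_c1(v))                  # alternate the two projections
# the iterates converge to the intersection C1 ∩ C2 = {(1, 1)}
```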

  2. A time-driven transmission method for well logging networks

    Institute of Scientific and Technical Information of China (English)

    Wu Ruiqing; Chen Wei; Chen Tianqi; Li Qun

    2009-01-01

    Long delays and poor real-time transmission are disadvantageous to well logging networks consisting of multiple subnets. In this paper, we propose a time-driven transmission method (TDTM) to improve the efficiency and precision of logging networks. Using the TDTM, we obtained well logging curves by fusing the depth acquired at the surface with the data acquired by downhole instruments, based on the synchronization timestamp. For the TDTM, the precision of time synchronization and the data fusion algorithm are the two main factors influencing system errors. A piecewise fractal interpolation was proposed to rapidly fuse data in each interval of the logging curves. Intervals with similar characteristics were extracted from the curves based on changes in the interval histogram. The TDTM was evaluated using a sonic curve as an example. Experimental results showed that the fused data had little error, and that the TDTM is effective and suitable for logging networks.

  3. The Sensitivity of Respondent-driven Sampling Method

    CERN Document Server

    Lu, Xin; Britton, Tom; Camitz, Martin; Kim, Beom Jun; Thorson, Anna; Liljeros, Fredrik

    2012-01-01

    Researchers in many scientific fields make inferences from individuals to larger groups. For many groups, however, there is no list of members from which to take a random sample. Respondent-driven sampling (RDS) is a relatively new sampling methodology that circumvents this difficulty by using the social networks of the groups under study. The RDS method has been shown to provide unbiased estimates of population proportions given certain conditions. The method is now widely used in the study of HIV-related high-risk populations globally. In this paper, we test the RDS methodology by simulating RDS studies on the social networks of a large LGBT web community. The robustness of the RDS method is tested by violating, one by one, the conditions under which the method provides unbiased estimates. Results reveal that the risk of bias is large if networks are directed, or if respondents choose to invite persons based on characteristics that are correlated with the study outcomes. If these two problems are absent, the RD...
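A toy referral-chain simulation of the kind used to test RDS on a known network; the ring network, coupon count, and stopping rule below are all illustrative:

```python
import random

def rds_sample(network, seeds, coupons=3, size=10, rng=None):
    """Toy respondent-driven sampling: each recruit hands `coupons`
    invitations to randomly chosen, not-yet-sampled neighbours."""
    rng = rng or random.Random(0)
    sampled, queue = set(seeds), list(seeds)
    while queue and len(sampled) < size:
        person = queue.pop(0)
        peers = [p for p in network[person] if p not in sampled]
        for p in rng.sample(peers, min(coupons, len(peers))):
            if len(sampled) < size:
                sampled.add(p)
                queue.append(p)
    return sampled

ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}  # toy social network
sample = rds_sample(ring, seeds=[0], size=8)
```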

  4. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations of the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.

  5. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  6. User-Driven Planning for Digital-Image Delivery

    Science.gov (United States)

    Pisciotta, Henry; Halm, Michael J.; Dooris, Michael J.

    2006-01-01

    This article draws on two projects funded by the Andrew W. Mellon Foundation concerning the ways colleges and universities can support the legitimate sharing of digital learning resources for scholarly use. The 2001-03 Visual Image User Study (VIUS) assessed the scholarly needs of digital image users-faculty, staff, and students. That study led to…

  7. A wavelet-based quadtree driven stereo image coding

    Science.gov (United States)

    Bensalma, Rafik; Larabi, Mohamed-Chaker

    2009-02-01

    In this work, a new stereo image coding technique is proposed. The new approach integrates the coding of the residual image with the disparity map, the latter computed in the wavelet transform domain. The motivation behind using this transform is that it imitates some properties of the human visual system (HVS), particularly its decomposition into perceptual channels. Therefore, using the wavelet transform allows for better preservation of perceptual image quality. In order to estimate the disparity map, we use a quadtree segmentation in each wavelet frequency band. This segmentation has the advantage of minimizing the entropy. Dyadic squares in the subbands of the target image that are not matched with others in the reference image constitute the residual, which is coded using an arithmetic codec. The obtained results are evaluated using the SSIM and PSNR criteria.

  8. Segmentation in dermatological hyperspectral images: dedicated methods

    OpenAIRE

    Koprowski, Robert; Olczyk, Paweł

    2016-01-01

    Background Segmentation of hyperspectral medical images is one of many image segmentation tasks that require profiling. This profiling involves either the adjustment of existing, known image segmentation methods or a proposal of new, dedicated methods of hyperspectral image segmentation. Taking into consideration the size of the analysed data, the time of analysis is of major importance. Therefore, the authors proposed three new dedicated methods of hyperspectral image segmentation with special...

  9. CT Image Reconstruction by Spatial-Radon Domain Data-Driven Tight Frame Regularization

    CERN Document Server

    Zhan, Ruohan

    2016-01-01

    This paper proposes a spatial-Radon domain CT image reconstruction model based on data-driven tight frames (SRD-DDTF). The proposed SRD-DDTF model combines the idea of the joint image and Radon domain inpainting model of [Dong2013X] with that of data-driven tight frames for image denoising [cai2014data]. It differs from existing models in that both the CT image and its corresponding high-quality projection image are reconstructed simultaneously using sparsity priors by tight frames that are adaptively learned from the data to provide optimal sparse approximations. An alternating minimization algorithm is designed to solve the proposed model, which is nonsmooth and nonconvex. Convergence analysis of the algorithm is provided. Numerical experiments showed that the SRD-DDTF model is superior to the model of [Dong2013X], especially in recovering subtle structures in the images.

  10. Spatial context driven manifold learning for hyperspectral image classification

    CSIR Research Space (South Africa)

    Zhang, Y

    2014-06-01

    Full Text Available Manifold learning techniques have demonstrated various levels of success in their ability to represent spectral signature characteristics in hyperspectral imagery. Such images consist of spectral features with very subtle differences and at times...

  11. Method of assessing heterogeneity in images

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, Richard E.; Carson, James P.

    2016-08-23

    A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram or correlogram analysis, is performed on the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
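The variogram analysis mentioned above can be sketched as an empirical semivariogram over 1D coordinates; the lag binning and tolerance below are illustrative choices:

```python
import numpy as np

def empirical_variogram(values, coords, lags, tol=0.5):
    """Empirical semivariogram gamma(h): half the mean squared difference
    over point pairs whose 1D separation lies within `tol` of lag h."""
    d = np.abs(coords[:, None] - coords[None, :])        # pairwise distances
    sq = (values[:, None] - values[None, :]) ** 2        # pairwise squared diffs
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol
        np.fill_diagonal(mask, False)
        gamma.append(sq[mask].mean() / 2.0)
    return np.array(gamma)

coords = np.arange(20, dtype=float)
values = coords.copy()               # a pure linear trend: gamma(h) = h^2 / 2
g = empirical_variogram(values, coords, lags=[1, 2, 4])
```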

  12. Research on method of pressure grouting piling of driven tube

    Institute of Scientific and Technical Information of China (English)

    Dianqi PAN; Zupei ZHANG; Diancai PAN; Yong CHEN; Maosen TAN

    2006-01-01

    Pressure-grouted driven-tube piles can improve the load-bearing capacity of a single pile, as follows from the mechanism of pressure grouting in driven-tube piles. On the basis of analyzing this mechanism, the authors designed the machines and tools for pressure grouting and then determined the operating procedure and technological parameters of the pressure grouting. The results show that the pressure-grouted driven-tube pile not only changes the pile type but also reduces the length of the pile and its engineering cost, while at the same time enhancing the load-bearing capacity of the single pile.

  13. Mapping landslide susceptibility using data-driven methods.

    Science.gov (United States)

    Zêzere, J L; Pereira, S; Melo, R; Oliveira, S C; Garcia, R A C

    2017-07-01

    Most epistemic uncertainty within data-driven landslide susceptibility assessment results from errors in landslide inventories, difficulty in identifying and mapping landslide causes, and decisions related to the modelling procedure. In this work we evaluate and discuss differences observed in landslide susceptibility maps resulting from: (i) the selection of the statistical method; (ii) the selection of the terrain mapping unit; and (iii) the selection of the feature type used to represent landslides in the model (polygon versus point). The work is performed in a single study area (Silveira Basin, 18.2 km², Lisbon Region, Portugal) using a unique database of geo-environmental landslide predisposing factors and an inventory of 82 shallow translational slides. Logistic regression, discriminant analysis and two versions of the information value method were used, and we conclude that multivariate statistical methods perform better when computed over heterogeneous terrain units and should be selected to assess landslide susceptibility based on slope terrain units, geo-hydrological terrain units or census terrain units. However, evidence was found that the chosen terrain mapping unit can produce greater differences in the final susceptibility results than the chosen statistical method. Landslide susceptibility should be assessed over grid cell terrain units whenever the spatial accuracy of the landslide inventory is good. In addition, a single point per landslide proved efficient for generating accurate landslide susceptibility maps, provided the landslides are small, thus minimizing the possible existence of heterogeneities of predisposing factors within the landslide boundary.
Although in recent years ROC curves have been preferred for evaluating susceptibility model performance, evidence was found that the model with the highest AUC ROC is not necessarily the best landslide susceptibility model, namely when terrain
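One of the bivariate methods named above, the information value method, is commonly defined as IV_i = ln((S_i/N_i)/(S/N)); a minimal sketch under that common definition (the paper's exact variants are not reproduced here):

```python
import numpy as np

def information_value(landslide, factor_class):
    """IV_i = ln((S_i / N_i) / (S / N)): S_i landslide cells in class i,
    N_i cells in class i, S and N the study-area totals."""
    S, N = landslide.sum(), landslide.size
    iv = {}
    for c in np.unique(factor_class):
        in_c = factor_class == c
        Si, Ni = landslide[in_c].sum(), in_c.sum()
        iv[int(c)] = np.log((Si / Ni) / (S / N)) if Si > 0 else -np.inf
    return iv

# toy raster: two landslide cells, both in factor class 0
iv = information_value(np.array([1, 1, 0, 0, 0, 0]), np.array([0, 0, 0, 1, 1, 1]))
```

Positive IV marks classes that favour landslides; classes with no observed landslides get -inf (often truncated in practice).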

  14. Multilabel image classification via high-order label correlation driven active learning.

    Science.gov (United States)

    Zhang, Bang; Wang, Yang; Chen, Fang

    2014-03-01

    Supervised machine learning techniques have been applied to multilabel image classification problems with tremendous success. Despite disparate learning mechanisms, their performance heavily relies on the quality of training images. However, the acquisition of training images requires significant effort from human annotators. This hinders the application of supervised learning techniques to large-scale problems. In this paper, we propose a high-order label correlation driven active learning (HoAL) approach that allows the iterative learning algorithm itself to select the informative example-label pairs from which it learns, so as to learn an accurate classifier with less annotation effort. Four crucial issues are considered by the proposed HoAL: 1) unlike binary cases, the selection granularity for multilabel active learning needs to be refined from example to example-label pair; 2) different labels are seldom independent, and label correlations provide critical information for efficient learning; 3) in addition to pair-wise label correlations, high-order label correlations are also informative for multilabel active learning; and 4) since the number of label combinations increases exponentially with the number of labels, an efficient mining method is required to discover informative label correlations. The proposed approach is tested on public data sets, and the empirical results demonstrate its effectiveness.

  15. An adaptive knowledge-driven medical image search engine for interactive diffuse parenchymal lung disease quantification

    Science.gov (United States)

    Tao, Yimo; Zhou, Xiang Sean; Bi, Jinbo; Jerebko, Anna; Wolf, Matthias; Salganicoff, Marcos; Krishnan, Arun

    2009-02-01

    Characterization and quantification of the severity of diffuse parenchymal lung diseases (DPLDs) using Computed Tomography (CT) is an important issue in clinical research. Recently, several classification-based computer-aided diagnosis (CAD) systems [1-3] for DPLD have been proposed. For some of those systems, a degradation of performance [2] was reported on unseen data because of considerable inter-patient variances of parenchymal tissue patterns. We believe that a CAD system of real clinical value should be robust to inter-patient variances and be able to classify unseen cases online more effectively. In this work, we have developed a novel adaptive knowledge-driven CT image search engine that combines offline learning aspects of classification-based CAD systems with online learning aspects of content-based image retrieval (CBIR) systems. Our system can seamlessly and adaptively fuse offline accumulated knowledge with online feedback, leading to an improved online performance in detecting DPLD in both accuracy and speed aspects. Our contribution lies in: (1) newly developed 3D texture-based and morphology-based features; (2) a multi-class offline feature selection method; and, (3) a novel image search engine framework for detecting DPLD. Very promising results have been obtained on a small test set.

  16. Imaging FTIR emissivity measurement method

    Science.gov (United States)

    Burdette, Edward M.; Nichols, C. Spencer; Lane, Sarah E.; Prussing, Keith F.; Cathcart, J. Michael

    2013-09-01

    Though many materials behave approximately as greybodies across the long-wave infrared (LWIR) waveband, certain important infrared (IR) scene modeling materials, such as brick and galvanized steel, exhibit more complex optical properties. Accurately modeling how non-greybody materials interact with radiation relies critically on the accurate incorporation of the emissive and reflective properties of the in-scene materials. Typically, measured values are obtained and used. When measured using a non-imaging spectrometer, a given material's spectral emissivity requires more than one collection episode, as the sample under test and a standard must be measured separately. In the interval between episodes, changes in the environment degrade emissivity measurement accuracy. While repeating and averaging measurements of the standard and sample helps mitigate such effects, a simultaneous measurement of both can ensure identical environmental conditions during the measurement process, thus reducing inaccuracies and delivering a temporally accurate determination of background or 'down-welling' radiation. We report on a method for minimizing temporal inaccuracies in sample emissivity measurements. Using a LWIR hyperspectral imager, a Telops Hyper-Cam, an approach permitting hundreds of simultaneous, calibrated spectral radiance measurements of the sample under test as well as a diffuse gold standard is described. In addition, we describe the data reduction technique used to exploit these measurements. Following development of the reported method, spectral reflectance data from 10 samples of various materials of interest were collected. These data are presented along with comments on how such data will enhance the fidelity of computer models of IR scenes.

  17. Mathematical methods in elasticity imaging

    CERN Document Server

    Ammari, Habib; Garnier, Josselin; Wahab, Abdul

    2015-01-01

    This book is the first to comprehensively explore elasticity imaging and examines recent, important developments in asymptotic imaging, modeling, and analysis of deterministic and stochastic elastic wave propagation phenomena. It derives the best possible functional images for small inclusions and cracks within the context of stability and resolution, and introduces a topological derivative-based imaging framework for detecting elastic inclusions in the time-harmonic regime. For imaging extended elastic inclusions, accurate optimal control methodologies are designed and the effects of uncertai

  18. Fractal methods in image analysis and coding

    OpenAIRE

    Neary, David

    2001-01-01

    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...

  19. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    Energy Technology Data Exchange (ETDEWEB)

    Gang, G; Stayman, J; Ouadah, S; Siewerdsen, J [Johns Hopkins University, Baltimore, MD (United States); Ehtiati, T [Siemens Healthcare AX Division, Erlangen, DE (Germany)

    2015-06-15

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in the task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm, where the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving detectability in a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil.
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within

  20. T2-weighted four dimensional magnetic resonance imaging with result-driven phase sorting

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilin; Yin, Fang-Fang; Cai, Jing, E-mail: jing.cai@duke.edu [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 and Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Czito, Brian G. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Bashir, Mustafa R. [Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2015-08-15

    Purpose: T2-weighted MRI provides excellent tumor-to-tissue contrast for target volume delineation in radiation therapy treatment planning. This study aims at developing a novel T2-weighted retrospective four-dimensional magnetic resonance imaging (4D-MRI) phase sorting technique for imaging organ/tumor respiratory motion. Methods: A 2D fast T2-weighted half-Fourier acquisition single-shot turbo spin-echo MR sequence was used for 4D-MRI image acquisition, with a frame rate of 2-3 frames/s. Respiratory motion was measured using an external breathing monitoring device. A phase sorting method was developed to sort the images by their corresponding respiratory phases. In addition, a result-driven strategy was applied to effectively utilize redundant images when multiple images were allocated to a bin. This strategy, which selects the image with minimal amplitude error, generates the most representative 4D-MRI. Since the image acquisition mode used for 4D imaging (a sequential acquisition scheme) differs from the conventionally used cine or helical acquisition schemes, the condition for a sufficient 4D dataset was not obvious or directly predictable. An important challenge of the proposed technique was to determine the number of repeated scans (N_R) required to obtain sufficient phase information at each slice position. To tackle this challenge, the authors first conducted computer simulations using real-time position management respiratory signals of 29 cancer patients, under an IRB-approved retrospective study, to derive the relationships between N_R and the following factors: number of slices (N_S), number of 4D-MRI respiratory bins (N_B), and starting phase at image acquisition (P_0). To validate the authors' technique, 4D-MRI acquisition and reconstruction were simulated on a 4D digital extended cardiac-torso (XCAT) human phantom using simulation-derived parameters. Twelve healthy volunteers were involved
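The phase sorting with result-driven bin selection can be sketched as follows; binning into [0, 1) phase bins and using the bin-mean amplitude as the "error" reference are our illustrative stand-ins for the paper's minimal-amplitude-error rule:

```python
import numpy as np

def result_driven_sort(phases, amplitudes, n_bins):
    """Allocate image indices to respiratory-phase bins; when a bin receives
    several images, keep the one whose amplitude is closest to the bin mean
    (a stand-in for the minimal-amplitude-error rule)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    chosen = {}
    for b in range(n_bins):
        idx = np.where(bins == b)[0]
        if idx.size:
            target = amplitudes[idx].mean()
            chosen[b] = int(idx[np.argmin(np.abs(amplitudes[idx] - target))])
    return chosen

phases = np.array([0.05, 0.12, 0.18, 0.55])    # fraction of the breathing cycle
amplitudes = np.array([1.0, 2.0, 4.0, 5.0])    # external surrogate amplitude
chosen = result_driven_sort(phases, amplitudes, n_bins=2)
```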

  1. A Homomorphic Method for Sharing Secret Images

    Science.gov (United States)

    Islam, Naveed; Puech, William; Brouzet, Robert

    In this paper, we present a new method for sharing images between two parties that exploits the homomorphic property of a public-key cryptosystem. With our method, we show that it is possible to multiply two encrypted images, to decrypt the resulting image, and then to extract and reconstruct one of the two original images if the second original image is available. Indeed, extraction and reconstruction of the original image at the receiving end is done with the help of a carrier image. Experimental results and security analysis show the effectiveness of the proposed scheme.
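
The multiplicative homomorphism such a scheme relies on can be illustrated with textbook RSA on single pixel values; the tiny key and all variable names below are toy assumptions for demonstration only, not the paper's cryptosystem parameters:

```python
# Toy demonstration of the multiplicative homomorphism the scheme relies on,
# using textbook RSA with tiny parameters (illustration only, not secure).
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

pixel_a, pixel_b = 200, 13     # two pixel values (product must stay below n)
c_product = (enc(pixel_a) * enc(pixel_b)) % n   # multiply ciphertexts

# Decrypting the product of ciphertexts yields the product of the pixels,
# so one image can be recovered by dividing out the other (carrier) image.
recovered_a = dec(c_product) // pixel_b         # exact when a*b < n
```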

  2. Imaging of Coulomb-Driven Quantum Hall Edge States

    KAUST Repository

    Lai, Keji

    2011-10-01

    The edges of a two-dimensional electron gas (2DEG) in the quantum Hall effect (QHE) regime are divided into alternating metallic and insulating strips, with their widths determined by the energy gaps of the QHE states and the electrostatic Coulomb interaction. Local probing of these submicrometer features, however, is challenging due to the buried 2DEG structures. Using a newly developed microwave impedance microscope, we demonstrate the real-space conductivity mapping of the edge and bulk states. The sizes, positions, and field dependence of the edge strips around the sample perimeter agree quantitatively with the self-consistent electrostatic picture. The evolution of microwave images as a function of magnetic fields provides rich microscopic information around the ν=2 QHE state. © 2011 American Physical Society.

  3. Uncertainty driven probabilistic voxel selection for image registration.

    Science.gov (United States)

    Oreshkin, Boris N; Arbel, Tal

    2013-10-01

    This paper presents a novel probabilistic voxel selection strategy for medical image registration in time-sensitive contexts, where the goal is aggressive voxel sampling (e.g., using less than 1% of the total number) while maintaining registration accuracy and low failure rate. We develop a Bayesian framework whereby, first, a voxel sampling probability field (VSPF) is built based on the uncertainty on the transformation parameters. We then describe a practical, multi-scale registration algorithm, where, at each optimization iteration, different voxel subsets are sampled based on the VSPF. The approach maximizes accuracy without committing to a particular fixed subset of voxels. The probabilistic sampling scheme developed is shown to manage the tradeoff between the robustness of traditional random voxel selection (by permitting more exploration) and the accuracy of fixed voxel selection (by permitting a greater proportion of informative voxels).
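
A minimal sketch of sampling a voxel subset from a probability field follows; the gradient-magnitude stand-in for the VSPF is an assumption (the real field is built from transformation-parameter uncertainty):

```python
import numpy as np

# Sketch of probabilistic voxel selection: sample a small voxel subset
# according to a (hypothetical) voxel sampling probability field (VSPF).
rng = np.random.default_rng(42)
shape = (32, 32, 32)
n_voxels = int(np.prod(shape))

# Stand-in VSPF: higher sampling probability where local gradient magnitude
# (a proxy for how informative a voxel is for registration) is large.
image = rng.random(shape)
grad = np.linalg.norm(np.gradient(image), axis=0)
vspf = grad.ravel() / grad.sum()

# Aggressive sampling: fewer than 1% of all voxels, drawn fresh at each
# optimization iteration so no fixed subset is committed to.
n_sample = int(0.01 * n_voxels)
subset = rng.choice(n_voxels, size=n_sample, replace=False, p=vspf)
coords = np.unravel_index(subset, shape)
```

Redrawing the subset each iteration is what gives the exploration of random selection while the probability weighting keeps a high proportion of informative voxels.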

  4. Human cell structure-driven model construction for predicting protein subcellular location from biological images.

    Science.gov (United States)

    Shao, Wei; Liu, Mingxia; Zhang, Daoqiang

    2016-01-01

    The systematic study of subcellular location patterns is very important for fully characterizing the human proteome. With the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine; the important structural information of cellular compartments is thus missed. To deal with this problem and develop more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) that employs prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding (ECOC) framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the ECOC codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. Supplementary data are available at Bioinformatics online.
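
The ECOC decoding step at the heart of such a classifier ensemble can be sketched as follows; the codeword matrix here is arbitrary, not the structure-aware matrix designed in the paper:

```python
import numpy as np

# Sketch of error correcting output coding (ECOC) decoding: a class label is
# assigned by minimum Hamming distance between the column classifiers'
# binary outputs and the rows of a (hypothetical) codeword matrix.
codewords = np.array([           # one row per cellular-component class
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])

def ecoc_decode(bits):
    """Return the class whose codeword is nearest in Hamming distance."""
    return int(np.argmin(np.sum(codewords != np.asarray(bits), axis=1)))
```

When the codewords are sufficiently separated, a single erroneous column classifier can still be decoded to the correct class, e.g. `ecoc_decode([1, 1, 1, 0, 1])` still maps to class 0.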

  5. Lossless Digital Image Compression Method for Bitmap Images

    CERN Document Server

    Meyyappan, Dr T; Nachiaban, N M Jeya; 10.5121/ijma.2011.3407

    2011-01-01

    In this research paper, the authors propose a new approach to digital image compression using crack coding. This method starts with the original image and develops crack codes in a recursive manner, marking the pixels visited earlier and expanding in four directions. The proposed method was tested on sample bitmap images and the results are tabulated. The method is implemented on a uniprocessor machine using C language source code.

  6. Fast neutron imaging device and method

    Science.gov (United States)

    Popov, Vladimir; Degtiarenko, Pavel; Musatov, Igor V.

    2014-02-11

    A fast neutron imaging apparatus and method of constructing fast neutron radiography images, the apparatus including a neutron source and a detector that provides event-by-event acquisition of position and energy deposition, and optionally timing and pulse shape for each individual neutron event detected by the detector. The method for constructing fast neutron radiography images utilizes the apparatus of the invention.

  7. Image enhancement method for fingerprint recognition system.

    Science.gov (United States)

    Li, Shunshan; Wei, Min; Tang, Haiying; Zhuang, Tiange; Buonocore, Michael

    2005-01-01

    Image enhancement plays an important role in fingerprint recognition systems. In this paper, a fingerprint image enhancement method based on a refined Gabor filter is presented. The method can connect ridge breaks, ensures that the maximal gray values are located at the ridge center, and is able to compensate for nonlinear deformations. The results show that it can improve the performance of image enhancement.
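
A standard (unrefined) Gabor filter of the kind this method builds on can be sketched as follows; the kernel parameters and the synthetic ridge pattern are assumptions for illustration:

```python
import numpy as np

# Minimal Gabor filtering sketch: the kernel is tuned to a ridge orientation
# and frequency, then applied to a noisy synthetic ridge pattern via
# FFT-based (circular) convolution.
def gabor_kernel(size, theta, freq, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

# Synthetic vertical-ridge "fingerprint" patch with additive noise.
rng = np.random.default_rng(1)
n = 60
ridges = np.cos(2 * np.pi * 0.1 * np.arange(n))[None, :] * np.ones((n, n))
noisy = ridges + 0.5 * rng.standard_normal((n, n))

# Convolve via FFT (kernel zero-padded and circularly centered).
k = gabor_kernel(15, theta=0.0, freq=0.1, sigma=3.0)
pad = np.zeros_like(noisy)
pad[:k.shape[0], :k.shape[1]] = k
pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
enhanced = np.real(np.fft.ifft2(np.fft.fft2(noisy) * np.fft.fft2(pad)))
```

Because the kernel passes only the ridge frequency band along the ridge normal, broadband noise is suppressed while the ridge structure is reinforced.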

  8. Task-driven image acquisition and reconstruction in cone-beam CT.

    Science.gov (United States)

    Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H

    2015-04-21

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ± 30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt

  9. An improved image reconstruction method for optical intensity correlation Imaging

    Science.gov (United States)

    Gao, Xin; Feng, Lingjie; Li, Xiyu

    2016-12-01

    The intensity correlation imaging method is a novel kind of interference imaging with favorable prospects in deep-space object recognition. However, restricted by the low detection signal-to-noise ratio (SNR), it is usually very difficult to obtain high-quality images of deep-space objects such as high-Earth-orbit (HEO) satellites with existing phase retrieval methods. In this paper, based on a prior intensity statistical distribution model of the object and the characteristics of the measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the number of ambiguous images and accelerate the phase retrieval procedure, thus achieving fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method can acquire higher-resolution images with less error in low-SNR conditions.
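
For context, the classical error-reduction loop that such phase retrieval methods start from can be sketched as follows; the object, support, and iteration count are toy assumptions, and the prior-information weighting of PIO itself is not modeled:

```python
import numpy as np

# Basic error-reduction (Gerchberg-Saxton type) phase retrieval: alternate
# between imposing measured Fourier magnitudes and object-domain constraints
# (known support and positivity).
rng = np.random.default_rng(3)
n = 32
obj = np.zeros((n, n))
obj[12:20, 10:22] = rng.random((8, 12))        # compact "object" support
support = obj > 0

measured_mag = np.abs(np.fft.fft2(obj))        # magnitudes from correlations

estimate = rng.random((n, n))
for _ in range(200):
    F = np.fft.fft2(estimate)
    F = measured_mag * np.exp(1j * np.angle(F))      # impose measured magnitude
    estimate = np.real(np.fft.ifft2(F))
    estimate = np.where(support & (estimate > 0), estimate, 0.0)  # object constraints

# Relative Fourier-magnitude error of the final estimate.
err = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_mag) / np.linalg.norm(measured_mag)
```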

  10. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  11. An overview of medical image processing methods

    African Journals Online (AJOL)

    USER

    2010-06-14

    Jun 14, 2010 ... theoretical subjects about methods and algorithms used are explained. In the fourth section, ... image processing techniques such as image segmentation, compression .... A convolution mask like -1 | 0 | 1 could be used in each.

  12. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    Science.gov (United States)

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual or numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also performed better than the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
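
The adaptive binning idea — letting a 1-D K-means place bin edges where the intensity distribution has mass, rather than at equal intervals — can be sketched as follows; the bimodal intensity data and all names are illustrative assumptions:

```python
import numpy as np

# Cluster voxel intensities with a tiny 1-D k-means so that bin edges follow
# the intensity distribution, unlike equal-width down-sampled bins.
rng = np.random.default_rng(5)
intensities = np.concatenate([rng.normal(80, 5, 5000),    # soft-tissue peak
                              rng.normal(220, 10, 1000)]) # bone-like peak

def kmeans_1d(x, k, iters=25):
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)    # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    centers = np.sort(centers)
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

centers, labels = kmeans_1d(intensities, k=8)
# Adaptive bin edges halfway between neighbouring cluster centers.
edges = (centers[:-1] + centers[1:]) / 2
```

With this layout, most bins land on the dominant soft-tissue peak while the sparse high-intensity range still receives its own bins.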

  13. Selection of image acquisition methods

    Science.gov (United States)

    Donnelly, Joseph J.

    1991-05-01

    A comprehensive picture archiving and communications system (PACS), such as the medical diagnostic imaging support (MDIS) system, consists of several interrelated subsystems. The image acquisition subsystem is the means by which images are introduced into the system, and as such it is analogous to the "eyes" of the system. Images from digital modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are readily transferable to a PACS since they are acquired in a digital format. Conventional film-based analog images are particularly challenging since at no point in their production or display do they exist in an electronic form suitable for transfer to the MDIS system. In recent years, commercial high-resolution film digitizers and computed radiography (CR) devices have become available. These devices now provide the means to capture conventional radiographic images in a format suitable for transfer to a PACS. Through careful selection of acquisition devices we can now design an image acquisition subsystem tailored to meet clinical needs.

  14. Image Resolution Enhancement via Data-Driven Parametric Models in the Wavelet Space

    Directory of Open Access Journals (Sweden)

    Xin Li

    2007-02-01

    Full Text Available We present a data-driven, projection-based algorithm which enhances image resolution by extrapolating high-band wavelet coefficients. High-resolution images are reconstructed by alternating the projections onto two constraint sets: the observation constraint defined by the given low-resolution image and the prior constraint derived from the training data at the high resolution (HR). Two types of prior constraints are considered: a spatially homogeneous constraint suitable for texture images and a patch-based inhomogeneous one for generic images. A probabilistic fusion strategy is developed for combining reconstructed HR patches when overlapping (redundancy) is present. It is argued that an objective fidelity measure is important to evaluate the performance of resolution enhancement techniques and that the role of the antialiasing filter should be properly addressed. Experimental results are reported to show that our projection-based approach can achieve both good subjective and objective performance, especially for the class of texture images.


  16. Comparison of the Two-Hemisphere Model-Driven Approach to Other Methods for Model-Driven Software Development

    Directory of Open Access Journals (Sweden)

    Nikiforova Oksana

    2015-12-01

    Full Text Available Models are widely used not only in the computer science field, but also in other fields, as an effective way to present relevant information conveniently. Model-driven software development uses models and transformations as first-class citizens, which makes the software development phases more closely related to each other; those links later make it easier to change or modify the software product. At the moment there are many methods and techniques for creating such models and transforming them into one another. Since 2004, the authors have been developing the so-called two-hemisphere model-driven (2HMD) approach to bridge the gap between the problem domain and software components by using models and model transformation. The goal of this research is to compare different methods positioned for performing the same tasks as the 2HMD approach and to assess the state of the art in the area of model-driven software development.

  17. Research on polarization imaging information parsing method

    Science.gov (United States)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

    Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion and polarization image tracking. The research achievements for each step are then presented. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. For the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with clearly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters using fuzzy integrals and sparse representation is given, and target detection in complex scenes is completed using a clustering image segmentation algorithm based on fractal characteristics. For polarization image tracking, a tracking algorithm fusing mean-displacement polarization image characteristics with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.
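
A common form of the "multiple polarization parameters calculation" step is the Stokes-vector computation from four polarizer-angle images; the sketch below uses a simulated uniform scene and is generic, not the paper's omnidirectional inversion model:

```python
import numpy as np

# Stokes parameters, degree of linear polarization (DoLP) and angle of
# polarization (AoP) from intensity images taken through polarizers at
# 0, 45, 90 and 135 degrees.
s0_true, dolp_true, aop_true = 1.0, 0.6, np.deg2rad(30)

def polarizer_image(angle):
    # Malus-law intensity for a partially linearly polarized, uniform scene.
    return 0.5 * s0_true * (1 + dolp_true * np.cos(2 * (angle - aop_true))) \
           * np.ones((16, 16))

i0, i45 = polarizer_image(0.0), polarizer_image(np.pi / 4)
i90, i135 = polarizer_image(np.pi / 2), polarizer_image(3 * np.pi / 4)

S0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
S1 = i0 - i90                        # horizontal vs vertical preference
S2 = i45 - i135                      # diagonal preference
dolp = np.sqrt(S1**2 + S2**2) / S0
aop = 0.5 * np.arctan2(S2, S1)
```

The recovered DoLP and AoP images are exactly the polarization parameter maps used downstream for fusion and target detection.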

  18. Comparing image compression methods in biomedical applications

    Directory of Open Access Journals (Sweden)

    Libor Hargas

    2004-01-01

    Full Text Available Compression methods suitable for image processing in biomedical applications are described in this article. Compression is often realized by reducing irrelevance or redundancy. Lossless and lossy compression methods that can be used to compress images in biomedical applications are described, and these methods are compared on the basis of fidelity criteria.

  19. Multispectral image filtering method based on image fusion

    Science.gov (United States)

    Zhang, Wei; Chen, Wei

    2015-12-01

    This paper proposes a novel filtering scheme based on image fusion in the Nonsubsampled Contourlet Transform (NSCT) domain for multispectral images. First, an adaptive median filter is proposed which shows great advantages in speed and weak-edge preservation. Second, the bilateral filter and the adaptive median filter are applied to the image separately, yielding two denoised images. NSCT multi-scale decomposition is then performed on the denoised images to obtain detail and approximation sub-bands. Third, the detail sub-bands and approximation sub-bands are fused respectively. Finally, the output image is obtained by the inverse NSCT. Simulation results show that the method adapts well to textural images, suppresses noise effectively, and preserves image details. The algorithm shows better filtering performance than the standard bilateral and median filters and their improved variants at different noise ratios.
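
A textbook adaptive median filter of the kind such schemes start from can be sketched as follows; the window-growth and impulse tests are the standard formulation, and the ramp test image is an assumption:

```python
import numpy as np

# Adaptive median filter: the window grows until its median is not an extreme
# (impulse) value, and only pixels judged to be impulses are replaced, which
# helps preserve weak edges.
def adaptive_median(img, max_win=7):
    out = img.copy()
    pad = max_win // 2
    padded = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for win in range(3, max_win + 1, 2):
                r = win // 2
                block = padded[y + pad - r:y + pad + r + 1,
                               x + pad - r:x + pad + r + 1]
                lo, med, hi = block.min(), np.median(block), block.max()
                if lo < med < hi:                  # median is not an impulse
                    if not (lo < img[y, x] < hi):  # center pixel is an impulse
                        out[y, x] = med
                    break
    return out

# Salt-and-pepper corruption of a smooth ramp image.
rng = np.random.default_rng(4)
clean = np.tile(np.linspace(50, 200, 32), (32, 1))
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
denoised = adaptive_median(noisy)
```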

  20. A general and efficient method for incorporating precise spike times in globally time-driven simulations

    Directory of Open Access Journals (Sweden)

    Alexander Hanuschkin

    2010-10-01

    Full Text Available Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a nonlinear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision.
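
The retrospective threshold-crossing detection that makes the time-driven scheme cheap can be sketched for a leaky integrate-and-fire neuron; the parameters and the linear interpolation rule below are a minimal illustration, not the implementation evaluated in the paper:

```python
import numpy as np

# Time-driven leaky integrate-and-fire neuron: the spike time is found
# retrospectively by linear interpolation between the grid point below
# threshold and the one above it.
def lif_spike_times(i_ext, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    v, t, spikes = 0.0, 0.0, []
    for i_in in i_ext:
        v_prev = v
        v = v + dt * (-v / tau + i_in)          # forward-Euler membrane update
        if v >= v_th:
            # Interpolate the threshold crossing inside the last step.
            frac = (v_th - v_prev) / (v - v_prev)
            spikes.append(t + frac * dt)
            v = v_reset
        t += dt
    return spikes

# Constant suprathreshold drive produces a regular spike train whose
# interpolated spike times need not fall on the time grid.
spikes = lif_spike_times(np.full(1000, 0.5))
```

Identifying a crossing in the recent past, as here, is a much simpler operation than predicting the next future crossing, which is the key point of the abstract.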

  1. Review methods for image segmentation from computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik [Faculty of Science Computer and Mathematics, Universiti Teknologi Mara Malaysia, 40450 Shah Alam Selangor (Malaysia); Mahmud, Rozi [Faculty of Medicine and Health Sciences, Universiti Putra Malaysia 43400 Serdang Selangor (Malaysia)

    2014-12-04

    Image segmentation is a challenging process when accuracy, automation and robustness are required, especially in medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems they incur are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  2. Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators.

    Science.gov (United States)

    Xu, Wenjun; Chen, Jie; Lau, Henry Y K; Ren, Hongliang

    2017-09-01

    Accurate motion control of flexible surgical manipulators is crucial in tissue manipulation tasks. The tendon-driven serpentine manipulator (TSM) is one of the most widely adopted flexible mechanisms in minimally invasive surgery because of its enhanced maneuverability in tortuous environments. TSMs, however, exhibit high nonlinearities, and conventional analytical kinematics models are insufficient to achieve high accuracy. To account for the system nonlinearities, we applied a data-driven approach to encode the system's inverse kinematics. Three regression methods, extreme learning machine (ELM), Gaussian mixture regression (GMR) and K-nearest neighbors regression (KNNR), were implemented to learn a nonlinear mapping from the robot's 3D position states to the control inputs. The performance of the three algorithms was evaluated in both simulation and physical trajectory tracking experiments. KNNR performed best in the tracking experiments, with the lowest RMSE of 2.1275 mm. The proposed inverse kinematics learning methods provide an alternative and efficient way to accurately model tendon-driven flexible manipulators.
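
K-nearest-neighbours regression, the best performer above, reduces to a few lines; the forward-kinematics stand-in and all names here are synthetic assumptions, not the TSM model:

```python
import numpy as np

# Minimal KNNR for inverse kinematics: control inputs are predicted by
# averaging the k training samples whose tip positions are closest to the
# query position.
rng = np.random.default_rng(6)
n_train = 500
controls = rng.uniform(-1, 1, (n_train, 2))       # e.g. tendon displacements
positions = np.column_stack([                      # mock forward kinematics
    controls[:, 0] + 0.1 * controls[:, 1] ** 2,
    controls[:, 1] + 0.1 * controls[:, 0] ** 2,
    controls[:, 0] * controls[:, 1],
])

def knnr_predict(query_pos, k=5):
    d = np.linalg.norm(positions - query_pos, axis=1)
    nearest = np.argpartition(d, k)[:k]
    return controls[nearest].mean(axis=0)

# Querying with a training position should recover a control close to the
# one that produced it.
pred = knnr_predict(positions[0])
```

Because KNNR stores the data rather than fitting a parametric model, it naturally captures the manipulator's nonlinearities wherever the training set is dense.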

  3. An Image Retrieval Method Using DCT Features

    Institute of Scientific and Technical Information of China (English)

    樊昀; 王润生

    2002-01-01

    In this paper a new image representation for compressed domain image retrieval and an image retrieval system are presented. To represent images compactly and hierarchically, multiple features such as color and texture features directly extracted from DCT coefficients are structurally organized using vector quantization. To train the codebook, a new Minimum Description Length vector quantization algorithm is used, and it automatically decides the number of code words. To compare two images using the proposed representation, a new efficient similarity measure is designed. The new method is applied to an image database with 1,005 pictures. The results demonstrate that the method is better than two typical histogram methods and two DCT-based image retrieval methods.
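
Extracting features directly from DCT coefficients can be sketched with an orthonormal DCT-II matrix; keeping the low-frequency 4x4 block as the descriptor is an illustrative choice, not the paper's exact feature set:

```python
import numpy as np

# Orthonormal 2-D DCT-II via a transform matrix, keeping the low-frequency
# block of coefficients as a compact descriptor.
def dct_matrix(n):
    k, i = np.mgrid[0:n, 0:n]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)          # DC row scaling for orthonormality
    return m

def dct2(block):
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

rng = np.random.default_rng(8)
block = rng.random((8, 8))
coeffs = dct2(block)
feature = coeffs[:4, :4].ravel()        # 16 low-frequency coefficients

# Orthonormality check: the inverse transform recovers the block exactly.
d = dct_matrix(8)
reconstructed = d.T @ coeffs @ d
```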

  4. Gamma-ray Imaging Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, K; Mihailescu, L; Nelson, K; Valentine, J; Wright, D

    2006-10-05

    In this document we discuss specific implementations for gamma-ray imaging instruments including the principle of operation and describe systems which have been built and demonstrated as well as systems currently under development. There are several fundamentally different technologies each with specific operational requirements and performance trade offs. We provide an overview of the different gamma-ray imaging techniques and briefly discuss challenges and limitations associated with each modality (in the appendix we give detailed descriptions of specific implementations for many of these technologies). In Section 3 we summarize the performance and operational aspects in tabular form as an aid for comparing technologies and mapping technologies to potential applications.

  5. New data-driven method from 3D confocal microscopy for calculating phytoplankton cell biovolume.

    Science.gov (United States)

    Roselli, L; Paparella, F; Stanca, E; Basset, A

    2015-06-01

    Confocal laser scanner microscopy coupled with an image analysis system was used to directly determine the shape and calculate the biovolume of phytoplankton organisms by constructing 3D models of cells. The study was performed on Biceratium furca (Ehrenberg) Vanhoeffen, which is one of the most complex-shaped phytoplankton. Traditionally, biovolume is obtained from a standardized set of geometric models based on linear dimensions measured by light microscopy. However, especially in the case of complex-shaped cells, biovolume is affected by very large errors associated with the numerous manual measurements that this entails. We evaluate the accuracy of these traditional methods by comparing the results obtained using geometric models with direct biovolume measurement by image analysis. Our results show cell biovolume measurement based on decomposition into simple geometrical shapes can be highly inaccurate. Although we assume that the most accurate cell shape is obtained by 3D direct biovolume measurement, which is based on voxel counting, the intrinsic uncertainty of this method is explored and assessed. Finally, we implement a data-driven formula-based approach to the calculation of biovolume of this complex-shaped organism. On one hand, the model is obtained from 3D direct calculation. On the other hand, it is based on just two linear dimensions which can easily be measured by hand. This approach has already been used for investigating the complexities of morphology and for determining the 3D structure of cells. It could also represent a novel way to generalize scaling laws for biovolume calculation.
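
Direct biovolume measurement by voxel counting reduces to counting mask voxels and multiplying by the voxel volume; the sphere and voxel spacings below are stand-ins for a segmented confocal stack:

```python
import numpy as np

# Voxel-counting biovolume: count the voxels inside the segmented cell mask
# and multiply by the physical volume of one voxel.
voxel_size = (0.2, 0.1, 0.1)                  # z, y, x spacing in micrometres
z, y, x = np.mgrid[-30:31, -60:61, -60:61].astype(float)
z *= voxel_size[0]; y *= voxel_size[1]; x *= voxel_size[2]
radius = 5.0                                   # micrometres
mask = x**2 + y**2 + z**2 <= radius**2         # segmented "cell"

voxel_volume = float(np.prod(voxel_size))      # cubic micrometres per voxel
biovolume = mask.sum() * voxel_volume

# Sanity check against the analytic sphere volume (4/3) * pi * r^3.
analytic = 4.0 / 3.0 * np.pi * radius**3
rel_err = abs(biovolume - analytic) / analytic
```

For a simple sphere the geometric formula and voxel counting agree closely; the point of the study is that for complex shapes like Biceratium furca they do not.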

  6. Improved High Dynamic Range Image Reproduction Method

    Directory of Open Access Journals (Sweden)

    András Rövid

    2007-10-01

    Full Text Available High dynamic range (HDR) of illumination may cause serious distortions and other problems in viewing and further processing of digital images. This paper describes a new algorithm for HDR image creation based on merging images taken with different exposure times. There are many fields in which HDR images can be used advantageously; with their help the accuracy, reliability and many other features of certain image processing methods can be improved.
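
Merging differently exposed images into an HDR radiance map is commonly done by weighted averaging of per-exposure radiance estimates; the linear clipped sensor and the triangle weight below are simplifying assumptions (real cameras also need response-curve calibration):

```python
import numpy as np

# Each pixel's radiance estimates (pixel / exposure time) are combined with a
# triangle weighting that distrusts under- and over-exposed values.  The
# "camera" here is a simulated linear sensor clipped at 255.
exposures = np.array([0.25, 1.0, 4.0])                 # relative exposure times
radiance = np.tile(np.linspace(10, 900, 64), (64, 1))  # true scene radiance

stack = [np.clip(radiance * t, 0, 255) for t in exposures]

def weight(p):                                         # triangle weight on [0, 255]
    return np.maximum(1e-4, 1.0 - np.abs(p - 127.5) / 127.5)

num = sum(weight(img) * img / t for img, t in zip(stack, exposures))
den = sum(weight(img) for img in stack)
hdr = num / den                                        # recovered radiance map
```

Pixels clipped in one exposure are recovered from another, which is exactly how merging extends the usable dynamic range.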

  7. Method and device for current driven electric energy conversion

    DEFF Research Database (Denmark)

    2012-01-01

    Device comprising an electric power converter circuit for converting electric energy. The converter circuit comprises a switch arrangement with two or more controllable electric switches connected in a switching configuration and controlled so as to provide a current drive of electric energy from ... the output from the switch arrangement and designed such that a high impedance at a frequency range below the switching frequency is obtained, seen from the output terminals. Switches implemented by normally-on devices are preferred, e.g. in the form of a JFET. The converter circuit may be in different configurations such as half bridge buck, full bridge buck, half bridge boost, or full bridge boost. A current driven conversion is advantageous for high-efficiency energy conversion from current sources such as solar cells or where a voltage source is connected through long cables, e.g. powerline cables for long ...

  8. Line-imaging VISAR for laser-driven equations of state experiments

    Science.gov (United States)

    Mikhaylyuk, A. V.; Koshkin, D. S.; Gubskii, K. L.; Kuznetsov, A. P.

    2016-11-01

    The paper presents a diagnostic system for velocity measurements in laser-driven equation-of-state experiments. Two Mach-Zehnder line-imaging VISAR-type (velocity interferometer system for any reflector) interferometers form a vernier measuring system and can measure velocities in the interval of 5 to 50 km/s. The system also includes a passive channel that records target luminescence at the shock wave front. The spatial resolution of the optical layout is about 5 μm.

  9. Literature Review of Image Denoising Methods

    Institute of Scientific and Technical Information of China (English)

    LIU Qian; YANG Xing-qiang; LI Yun-liang

    2014-01-01

    Image denoising is a fundamental and important task in the image processing and computer vision fields. Many methods have been proposed to reconstruct clean images from their noisy versions, and these methods differ in both methodology and performance. On one hand, denoising methods can be classified into local and non-local methods; on the other hand, they can be categorized as spatial-domain and frequency-domain methods. Sparse coding and low-rank modeling are two recently popular techniques for denoising. This paper summarizes existing techniques and provides several promising directions for further study.

  10. Fuzzy Methods and Image Fusion in a Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Jaroslav Vlach

    2012-01-01

    Although the basics of image processing were laid down more than 50 years ago, significant development has occurred mainly in the last 25 years with the advent of personal computers, and today's problems are far more sophisticated and demand fast solutions. This article is a contribution to the study of the use of fuzzy logic methods and image fusion for image processing using LabVIEW tools for quality management, in this case especially in the jewelry industry.

  11. Sign determination methods for the respiratory signal in data-driven PET gating

    Science.gov (United States)

    Bertolli, Ottavia; Arridge, Simon; Wollenweber, Scott D.; Stearns, Charles W.; Hutton, Brian F.; Thielemans, Kris

    2017-04-01

    Patient respiratory motion during PET image acquisition leads to blurring in the reconstructed images and may cause significant artifacts, resulting in decreased lesion detectability, inaccurate standard uptake value calculation and incorrect treatment planning in radiation therapy. To reduce these effects data can be regrouped into (nearly) ‘motion-free’ gates prior to reconstruction by selecting the events with respect to the breathing phase. This gating procedure therefore needs a respiratory signal: on current scanners it is obtained from an external device, whereas with data-driven (DD) methods it can be directly obtained from the raw PET data. DD methods thus eliminate the use of external equipment, which is often expensive, needs prior setup and can cause patient discomfort, and they could also potentially provide increased fidelity to the internal movement. DD methods have been recently applied on PET data showing promising results. However, many methods provide signals whose direction with respect to the physical motion is uncertain (i.e. their sign is arbitrary), therefore a maximum in the signal could refer either to the end-inspiration or end-expiration phase, possibly causing inaccurate motion correction. In this work we propose two novel methods, CorrWeights and CorrSino, to detect the correct direction of the motion represented by the DD signal, that is obtained by applying principal component analysis (PCA) on the acquired data. They only require the PET raw data, and they rely on the assumption that one of the major causes of change in the acquired data related to the chest is respiratory motion in the axial direction, that generates a cranio-caudal motion of the internal organs. We also implemented two versions of a published registration-based method, that require image reconstruction. The methods were first applied on XCAT simulations, and later evaluated on cancer patient datasets monitored by the Varian Real-time Position Management™ (RPM
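    The sign ambiguity discussed above is intrinsic to PCA: a singular vector is only defined up to sign. A minimal NumPy sketch (a simulated axial count profile, not the authors' CorrWeights/CorrSino pipeline; all sizes and the surrogate trace are hypothetical) shows a first principal component that tracks the breathing trace with high absolute correlation while its sign remains arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate axial count profiles modulated by a respiratory trace:
# each row is one time frame, each column one axial bin (hypothetical sizes).
t = np.linspace(0, 60, 300)                      # 300 frames over 60 s
resp = np.sin(2 * np.pi * 0.25 * t)              # ~15 breaths/min surrogate
baseline = rng.poisson(100, size=64).astype(float)
frames = baseline + np.outer(resp, np.linspace(-5, 5, 64))
frames += rng.normal(0, 1, frames.shape)         # detector noise

# PCA via SVD on the mean-centred data; the first principal component
# is taken as the data-driven respiratory signal.
centred = frames - frames.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
signal = u[:, 0] * s[0]

# The sign of an SVD component is arbitrary: |correlation| with the true
# trace is high, but the raw correlation may come out positive or negative.
corr = np.corrcoef(signal, resp)[0, 1]
print(abs(corr), corr)
```

    A sign-determination step, like the ones the abstract proposes, is what turns this arbitrary-sign component into a usable gating signal.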

  12. Research on image scrambling degree evaluation method

    Science.gov (United States)

    Bai, Sen; Liao, Xiaofeng; Chen, Jinyu; Liu, Yijun; Wang, Xiao

    2005-12-01

    This paper discusses the evaluation of image scrambling degree (ISD). Inspired by methods for evaluating image texture characteristics, three new metrics for objectively assessing the ISD are proposed. The first method utilizes the energy-concentration property of the Walsh transform (WT) and takes into account the properties that a good ISD measure should satisfy. The second method uses the angular second moment (ASM) of the image's gray level co-occurrence matrix (GLCM). The third method combines the entropy of the GLCM with image texture characteristics. Experimental results show that the proposed metrics are effective in assessing the ISD and correlate well with subjective assessment. In terms of computational complexity, the first method, based on the WT, is remarkably superior in time cost to the methods based on the ASM and GLCM.
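    The ASM (also called energy) of a GLCM is the sum of squared co-occurrence probabilities, so an ordered image concentrates probability in few cells (high ASM) while a scrambled one spreads it out (low ASM). A small self-contained sketch of that second metric (quantization level count and the horizontal offset are my choices, not the paper's):

```python
import numpy as np

def glcm_asm(img, levels=8):
    """Angular second moment (energy) of the horizontal-offset GLCM.

    Higher ASM means a more ordered image; scrambling lowers it.
    """
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # horizontal pairs
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    return float((p ** 2).sum())

rng = np.random.default_rng(1)
smooth = np.tile(np.arange(64, dtype=float), (64, 1))      # ordered gradient
scrambled = smooth.ravel().copy()
rng.shuffle(scrambled)
scrambled = scrambled.reshape(64, 64)

print(glcm_asm(smooth), glcm_asm(scrambled))
```

    The scrambled image has the same histogram as the original, which is exactly why a co-occurrence statistic, rather than a first-order one, is needed to measure scrambling.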

  13. Swarm Optimization Methods in Microwave Imaging

    Directory of Open Access Journals (Sweden)

    Andrea Randazzo

    2012-01-01

    Swarm intelligence denotes a class of new stochastic algorithms inspired by the collective social behavior of natural entities (e.g., birds, ants, etc.). Such approaches have been proven to be quite effective in several applicative fields, ranging from intelligent routing to image processing. In the last years, they have also been successfully applied in electromagnetics, especially for antenna synthesis, component design, and microwave imaging. In this paper, the application of swarm optimization methods to microwave imaging is discussed, and some recent imaging approaches based on such methods are critically reviewed.

  14. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    Science.gov (United States)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Metrics-driven adaptive control is evaluated for a second order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time-window for BLSA is also evaluated in order to meet the stability margin criteria.

  15. Parallel image registration method for snapshot Fourier transform imaging spectroscopy

    Science.gov (United States)

    Zhang, Yu; Zhu, Shuaishuai; Lin, Jie; Zhu, Feijia; Jin, Peng

    2017-08-01

    A fast and precise registration method for multi-image snapshot Fourier transform imaging spectroscopy is proposed. This method accomplishes registration of an image array using the positional relationship between homologous points in the subimages, which are obtained offline by preregistration. Through the preregistration process, the registration problem is converted to the problem of using a registration matrix to interpolate subimages. Therefore, the hardware interpolation of graphics processing unit (GPU) texture memory, which has speed advantages for its parallel computing, can be used to significantly enhance computational efficiency. Compared to a central processing unit, GPU performance showed ~27× acceleration in registration efficiency.
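    The "interpolate subimages with a precomputed registration matrix" step is, in essence, a texture fetch at fractional coordinates. A CPU-side NumPy analogue of what the GPU texture hardware does (bilinear sampling; the coordinate-map form is my illustration, not the paper's interface):

```python
import numpy as np

def bilinear_remap(img, map_y, map_x):
    """Sample img at fractional coordinates (map_y, map_x), like a texture fetch."""
    h, w = img.shape
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    fy = np.clip(map_y - y0, 0.0, 1.0)
    fx = np.clip(map_x - x0, 0.0, 1.0)
    tl = img[y0, x0];     tr = img[y0, x0 + 1]     # four neighbouring texels
    bl = img[y0 + 1, x0]; br = img[y0 + 1, x0 + 1]
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bot * fy

img = np.arange(25, dtype=float).reshape(5, 5)
yy, xx = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing="ij")
# An identity coordinate map reproduces the original subimage exactly.
same = bilinear_remap(img, yy, xx)
print(np.allclose(same, img))
```

    On the GPU the per-pixel loop disappears entirely: each output pixel is one hardware-interpolated texture read, which is where the reported speedup comes from.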

  16. Method for positron emission mammography image reconstruction

    Science.gov (United States)

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to perform grid-based Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
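    Independently of how the system matrix is ray-traced, the MLEM update the abstract names has a compact multiplicative form, x ← x · Aᵀ(y / Ax) / Aᵀ1, which preserves non-negativity. A toy sketch with a random system matrix standing in for the traced LOR weights (sizes and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system matrix A (LORs x image pixels) and a known activity image x_true.
n_lors, n_pix = 40, 16
A = rng.uniform(0.0, 1.0, (n_lors, n_pix))
x_true = rng.uniform(1.0, 5.0, n_pix)
y = A @ x_true                                  # noiseless coincidence counts

# MLEM: x <- x * A^T(y / (A x)) / (A^T 1); multiplicative, so x stays >= 0.
x = np.ones(n_pix)
sens = A.T @ np.ones(n_lors)                    # sensitivity image
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens

print(np.max(np.abs(A @ x - y)) / np.max(y))    # relative projection residual
```

    With consistent (noiseless) data the forward projection of the estimate converges toward the measured counts; with real noisy data, iteration count or regularization controls noise amplification.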

  17. Data-driven assessment of eQTL mapping methods

    Directory of Open Access Journals (Sweden)

    Schughart Klaus

    2010-09-01

    Background: The analysis of expression quantitative trait loci (eQTL) is a potentially powerful way to detect transcriptional regulatory relationships at the genomic scale. However, eQTL data sets often go underexploited because legacy QTL methods are used to map the relationship between the expression trait and genotype. Often these methods are inappropriate for complex traits such as gene expression, particularly in the case of epistasis. Results: Here we compare legacy QTL mapping methods with several modern multi-locus methods and evaluate their ability to produce eQTL that agree with independent external data in a systematic way. We found that the modern multi-locus methods (Random Forests, sparse partial least squares, lasso, and elastic net) clearly outperformed the legacy QTL methods (Haley-Knott regression and composite interval mapping) in terms of biological relevance of the mapped eQTL. In particular, we found that our new approach, based on Random Forests, showed superior performance among the multi-locus methods. Conclusions: Benchmarks based on the recapitulation of experimental findings provide valuable insight when selecting the appropriate eQTL mapping method. Our battery of tests suggests that Random Forests map eQTL that are more likely to be validated by independent data, when compared to competing multi-locus and legacy eQTL mapping methods.

  18. Digital image inpainting by example-based image synthesis method

    Institute of Scientific and Technical Information of China (English)

    Nie Dongdong; Ma Lizhuang; Xiao Shuangjiu

    2006-01-01

    A simple and effective image inpainting method is proposed in this paper, which is shown to be suitable for different kinds of target regions, with shapes ranging from little scraps to large unseemly objects, in a wide range of images. It is an important improvement upon traditional image inpainting techniques. By introducing a new bijective-mapping term into the matching cost function, the artificial repetition problem in the final inpainted image is practically solved. In addition, by adopting an inpainting error map, not only are the target pixels refined gradually during the inpainting process, but the overlapping target patches are also combined more seamlessly than with previous methods. Finally, the inpainting time is dramatically decreased by using a new acceleration method in the matching process.

  19. A Practical Method for Image Rectification

    Institute of Scientific and Technical Information of China (English)

    CHEN Zezhi; WU Chengke; YAN Yaoping

    2003-01-01

    This paper gives a new method for image rectification. The method is based on an estimation of the epipolar constraints and the homography matrix H, which describes the relationship of the corresponding epipolar lines. The approach makes resampling the images extremely simple by using the Bresenham algorithm to extract pixels along the corresponding epipolar line. For a large set of camera motions, remapping to a plane has the drawback of creating rectified images that are potentially infinitely large and presents a loss of pixel information along the epipolar lines. In contrast, our method guarantees that the rectified images are bounded for all possible camera motions and minimizes the loss of pixel information along epipolar lines. Excellent experimental results obtained with binocular stereovision images are presented and a detailed analysis is provided.
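    The pixel-extraction step above relies on the classic Bresenham algorithm, which walks a discrete line using only integer arithmetic. A standard implementation (textbook form, not taken from this paper) that returns the pixel coordinates along a segment:

```python
def bresenham(x0, y0, x1, y1):
    """Integer pixel coordinates on the segment (x0, y0)-(x1, y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # combined error term
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                   # step in y
            err += dx
            y0 += sy
    return pts

print(bresenham(0, 0, 5, 2))
```

    Applied per epipolar line, this yields exactly one sample per traversed pixel, which is why the rectified image stays bounded and loses minimal pixel information.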

  20. MULTISCALE DIFFERENTIAL METHOD FOR DIGITAL IMAGE SHARPENING

    Directory of Open Access Journals (Sweden)

    Vitaly V. Bezzubik

    2014-11-01

    We have proposed and tested a novel method for digital image sharpening. The method is based on multi-scale image analysis, calculation of differential responses of image brightness at different spatial scales, and the subsequent calculation of a restoration function, which sharpens the image by simple subtraction of its brightness values from those of the original image. The method features spatial transposition of the restoration function elements, its normalization, and taking into account the sign of the brightness differential response gradient close to the object edges. The calculation algorithm for the proposed method makes use of integer arithmetic, which significantly reduces the computation time. The paper shows that for images containing a small amount of blur due to residual aberrations of an imaging system, only the first two scales are needed for the calculation of the restoration function. Like blind deconvolution, the method requires no a priori information about the nature and magnitude of the blur kernel, but it is computationally inexpensive and much easier to implement in practice. The most promising applications of the method are machine vision and surveillance systems based on real-time intelligent pattern recognition and decision making.
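    The general idea of "differential responses at several scales, subtracted from the original" can be sketched in a few lines. This is an unsharp-mask-style simplification under my own assumptions (box blurs as the scale operators, a fixed weight, no transposition or gradient-sign handling), not the paper's exact restoration function:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of odd size k (edge-padded), via cumulative sums."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge").astype(float)
    c = np.cumsum(p, axis=0)
    p = (c[k - 1:, :] - np.vstack([np.zeros((1, p.shape[1])), c[:-k, :]])) / k
    c = np.cumsum(p, axis=1)
    p = (c[:, k - 1:] - np.hstack([np.zeros((p.shape[0], 1)), c[:, :-k]])) / k
    return p

def sharpen(img, scales=(3, 5), weight=0.5):
    """Subtract summed multi-scale differential responses from the image."""
    out = img.astype(float)
    for k in scales:
        out -= weight * (box_blur(img, k) - img)   # negative response sharpens
    return out

edge = np.zeros((16, 16)); edge[:, 8:] = 100.0
sharp = sharpen(edge)
# The gradient across the edge grows after sharpening (overshoot/undershoot).
print(np.abs(np.diff(edge, axis=1)).max(), np.abs(np.diff(sharp, axis=1)).max())
```

    Because box blurs reduce to running sums, the whole pipeline can indeed run in integer arithmetic, consistent with the speed claim in the abstract.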

  1. COMPARISON OF DIGITAL IMAGE STEGANOGRAPHY METHODS

    OpenAIRE

    Seyyedi, S. A.; R. Kh. Sadykhov

    2013-01-01

    Steganography is a method of hiding information in other information of a different format (the container). There are many steganography techniques using various types of container. On the Internet, digital images are the most popular and frequently used containers. We consider the main image steganography techniques and their advantages and disadvantages. We also identify the requirements of a good steganography algorithm and compare various such algorithms.
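    The simplest of the image steganography techniques such surveys compare is least-significant-bit (LSB) embedding: payload bits replace the lowest bit of each pixel, changing any pixel value by at most 1. A minimal illustration (not from the paper under review):

```python
import numpy as np

def embed(cover, payload: bytes):
    """Hide payload bits in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this container")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear then set LSB
    return flat.reshape(cover.shape)

def extract(stego, n_bytes: int) -> bytes:
    bits = (stego.ravel()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, (32, 32), dtype=np.uint8)
stego = embed(cover, b"secret")
print(extract(stego, 6))
```

    LSB embedding is imperceptible but fragile (any lossy re-encoding destroys it) and statistically detectable, which is exactly why surveys weigh it against transform-domain methods.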

  2. Facilitating User Driven Innovation – A Study of Methods and Tools at Herlev Hospital

    DEFF Research Database (Denmark)

    Fronczek-Munter, Aneta

    2011-01-01

    Purpose: To present the preliminary research results of user driven innovation methods at healthcare facilities and their relevance to research and practice. Background/Approach: The paper is based on a case study conducted at the Gynaecologic Department at Herlev Hospital as part of Healthcare...... Innovation Lab, which is a public-private collaboration project testing the simulation and user-driven innovation between users and companies at Hospitals in the Danish Capital Region. The theories presented are user driven innovation, usability and boundary objects. Results: This article presents different...... methods used in planning of new hospital facilities and the experiences with using them in practice to improve usability of the built environment. The study focuses on the initial stages of the design processes, specially ‘user driven innovation’ – the participatory design process in which users......

  4. Evaluating laser-driven Bremsstrahlung radiation sources for imaging and analysis of nuclear waste packages.

    Science.gov (United States)

    Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B

    2016-11-15

    A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.

  5. Marzipan: polymerase chain reaction-driven methods for authenticity control.

    Science.gov (United States)

    Brüning, Philipp; Haase, Ilka; Matissek, Reinhard; Fischer, Markus

    2011-11-23

    According to German food guidelines, almonds are the only oilseed ingredient allowed for the production of marzipan. Persipan is a marzipan surrogate in which the almonds are replaced by apricot or peach kernels. Cross-contamination of marzipan products with persipan may occur if both products are produced using the same production line. Adulterations or dilutions, respectively, of marzipan with other plant-derived products, for example, lupine or pea, have also been found. Almond and apricot plants are closely related. Consequently, classical analytical methods for the identification/differentiation often fail or are not sensitive enough to quantify apricot concentrations below 1%. Polymerase chain reaction (PCR)-based methods have been shown to enable the differentiation of closely related plant species in the past. These methods are characterized by high specificity and low detection limits. Isolation methods were developed and evaluated especially with respect to the matrix marzipan in terms of yield, purity, integrity, and amplificability of the isolated DNA. For the reliable detection of apricot, peach, pea, bean, lupine, soy, cashew, pistachio, and chickpea, qualitative standard and duplex PCR methods were developed and established. The applicability of these methods was tested by cross-reaction studies and analysis of spiked raw pastes. Contaminations at the level of 0.1% could be detected.

  6. A NEW IMAGE REGISTRATION METHOD FOR GREY IMAGES

    Institute of Scientific and Technical Information of China (English)

    Nie Xuan; Zhao Rongchun; Jiang Zetao

    2004-01-01

    The proposed algorithm relies on a group of new formulas for calculating tangent slope so as to address the angle feature of edge curves in an image. It can utilize tangent angle features to estimate automatically and fully the rotation parameters of the geometric transform, enabling rough matching of images with large rotation differences. After angle compensation, it can search for matching point sets by a correlation criterion, then calculate the parameters of the affine transform, enabling higher-precision correction of rotation and translation. Finally, it fulfills precise matching for images with a relax-tense iteration method. Compared with the registration approach based on wavelet direction-angle features, the matching algorithm with the tangent feature of image edges is more robust and realizes precise registration of various images. Furthermore, it is also helpful in graphics matching.

  7. An attribute-based image segmentation method

    Directory of Open Access Journals (Sweden)

    M.C. de Andrade

    1999-07-01

    This work addresses a new image segmentation method founded on Digital Topology and Mathematical Morphology grounds. The ABA (attribute-based absorptions transform) can be viewed as a region-growing method by flooding simulation, working at the scale of the main structures of the image. In this method, the gray level image is treated as a relief flooded from all its local minima, which are progressively detected and merged as the flooding takes place. Each local minimum is exclusively associated with one catchment basin (CB). The CB merging process is guided by their geometric parameters such as depth, area and/or volume. This solution enables the direct segmentation of the original image without the need for a preprocessing step or the explicit marker extraction step often required by other flooding simulation methods. Some examples of image segmentation employing the ABA transform are illustrated for uranium oxide samples. It is shown that the ABA transform presents very good segmentation results even in the presence of noisy images. Moreover, its use is often easier and faster when compared to similar image segmentation methods.

  8. Alternating Krylov subspace image restoration methods

    National Research Council Canada - National Science Library

    Abad, J.O; Morigi, S; Reichel, L; Sgallari, F

    2012-01-01

    ... of the Krylov subspace used. However, our solution methods, suitably modified, also can be applied when no bound for the norm of η^δ is known. We determine an approximation of the desired image û by so...

  9. Application of numerical methods to elasticity imaging.

    Science.gov (United States)

    Castaneda, Benjamin; Ormachea, Juvenal; Rodríguez, Paul; Parker, Kevin J

    2013-03-01

    Elasticity imaging can be understood as the intersection of the study of biomechanical properties, imaging sciences, and physics. It was mainly motivated by the fact that pathological tissue presents an increased stiffness when compared to surrounding normal tissue. In the last two decades, research on elasticity imaging has been an international and interdisciplinary pursuit aiming to map the viscoelastic properties of tissue in order to provide clinically useful information. As a result, several modalities of elasticity imaging, mostly based on ultrasound but also on magnetic resonance imaging and optical coherence tomography, have been proposed and applied to a number of clinical applications: cancer diagnosis (prostate, breast, liver), hepatic cirrhosis, renal disease, thyroiditis, arterial plaque evaluation, wall stiffness in arteries, evaluation of thrombosis in veins, and many others. In this context, numerical methods are applied to solve forward and inverse problems implicit in the algorithms in order to estimate viscoelastic linear and nonlinear parameters, especially for quantitative elasticity imaging modalities. In this work, an introduction to elasticity imaging modalities is presented. The working principle of qualitative modalities (sonoelasticity, strain elastography, acoustic radiation force impulse) and quantitative modalities (Crawling Waves Sonoelastography, Spatially Modulated Ultrasound Radiation Force (SMURF), Supersonic Imaging) will be explained. Subsequently, the areas in which numerical methods can be applied to elasticity imaging are highlighted and discussed. Finally, we present a detailed example of applying total variation and AM-FM techniques to the estimation of elasticity.

  11. Historic Methods for Capturing Magnetic Field Images

    Science.gov (United States)

    Kwan, Alistair

    2016-03-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection processes.

  12. New Imaging Spectrometric Method for Rotary Object

    Institute of Scientific and Technical Information of China (English)

    方俊永; 赵达尊; 蒋月娟; 楚建军

    2003-01-01

    A new imaging spectrometric technique for rotary objects, based on computed tomography, is proposed. A discrete model of this imaging spectrometric system is established, which is consistent with actual measurements and convenient for computation. In computer simulations of this method, projections of the object are detected by a CCD while the object rotates, and the original spectral images are numerically reconstructed from them using a computed-tomography algorithm. Simulation results indicate that the principle of the method is correct and that it performs well for both broadband and narrow-band spectral objects.

  13. An event-driven distributed processing architecture for image-guided cardiac ablation therapy.

    Science.gov (United States)

    Rettmann, M E; Holmes, D R; Cameron, B M; Robb, R A

    2009-08-01

    Medical imaging data is becoming increasingly valuable in interventional medicine, not only for preoperative planning, but also for real-time guidance during clinical procedures. Three key components necessary for image-guided intervention are real-time tracking of the surgical instrument, aligning the real-world patient space with image-space, and creating a meaningful display that integrates the tracked instrument and patient data. Issues to consider when developing image-guided intervention systems include the communication scheme, the ability to distribute CPU intensive tasks, and flexibility to allow for new technologies. In this work, we have designed a communication architecture for use in image-guided catheter ablation therapy. Communication between the system components is through a database which contains an event queue and auxiliary data tables. The communication scheme is unique in that each system component is responsible for querying and responding to relevant events from the centralized database queue. An advantage of the architecture is the flexibility to add new system components without affecting existing software code. In addition, the architecture is intrinsically distributed, in that components can run on different CPU boxes, and even different operating systems. We refer to this Framework for Image-Guided Navigation using a Distributed Event-Driven Database in Real-Time as the FINDER architecture. This architecture has been implemented for the specific application of image-guided cardiac ablation therapy. We describe our prototype image-guidance system and demonstrate its functionality by emulating a cardiac ablation procedure with a patient-specific phantom. The proposed architecture, designed to be modular, flexible, and intuitive, is a key step towards our goal of developing a complete system for visualization and targeting in image-guided cardiac ablation procedures.
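    The database-as-event-queue pattern described above can be sketched with an in-memory SQLite table: each component polls for unhandled events of the types it cares about and marks them done. The table, column names, and event types below are hypothetical illustrations, not the FINDER schema:

```python
import sqlite3

# Minimal sketch of a shared event queue: components poll the queue and
# claim only the event types relevant to them.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, type TEXT,"
           " payload TEXT, done INTEGER DEFAULT 0)")
db.execute("INSERT INTO events (type, payload) VALUES ('tracker_update', 'x=12,y=40,z=7')")
db.execute("INSERT INTO events (type, payload) VALUES ('render_request', 'view=LAO30')")

def poll(component_types):
    """Claim and return the oldest unhandled event this component cares about."""
    q = ",".join("?" * len(component_types))
    row = db.execute(
        f"SELECT id, type, payload FROM events"
        f" WHERE done = 0 AND type IN ({q}) ORDER BY id LIMIT 1",
        component_types).fetchone()
    if row:
        db.execute("UPDATE events SET done = 1 WHERE id = ?", (row[0],))
    return row

print(poll(["tracker_update"]))
print(poll(["tracker_update"]))   # queue drained for this component
```

    Because every component talks only to the database, new components can be added, or moved to another machine, without touching existing code, which is the decoupling advantage the abstract emphasizes.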

  14. Method for eliminating artifacts in CCD imagers

    Science.gov (United States)

    Turko, B. T.; Yates, G. J.

    1990-06-01

    An electronic method for eliminating artifacts in a video camera employing a charge coupled device (CCD) as an image sensor is presented. The method comprises the step of initializing the camera prior to normal readout. The method includes a first dump cycle period for transferring radiation generated charge into the horizontal register, which occurs while the decaying image on the phosphor being imaged is being integrated in the photosites, and a second dump cycle period, occurring after the phosphor image has decayed, for rapidly dumping unwanted smear charge which has been generated in the vertical registers. Image charge is then transferred from the photosites to the vertical registers and read out in conventional fashion. The inventive method allows the video camera to be used in environments having high ionizing radiation content, and to capture images of events of very short duration occurring either within or outside the normal visual wavelength spectrum. Resultant images are free from ghost and smear phenomena caused by insufficient opacity of the registers, and are also free from random damage caused by ionization charges which exceed the charge limit capacity of the photosites.

  15. A Cloud-Based Infrastructure for Feedback-Driven Training and Image Recognition.

    Science.gov (United States)

    Abedini, Mani; von Cavallar, Stefan; Chakravorty, Rajib; Davis, Matthew; Garnavi, Rahil

    2015-01-01

    Advanced techniques in machine learning combined with scalable "cloud" computing infrastructure are driving the creation of new and innovative health diagnostic applications. We describe a service and application for performing image training and recognition, tailored to dermatology and melanoma identification. The system implements new machine learning approaches to provide a feedback-driven training loop. This training sequence enhances classification performance by incrementally retraining the classifier model from expert responses. To easily provide this application and associated web service to clinical practices, we also describe a scalable cloud infrastructure, deployable in public cloud infrastructure and private, on-premise systems.

  16. Matrix Krylov subspace methods for image restoration

    Directory of Open Access Journals (Sweden)

    khalide jbilou

    2015-09-01

    In the present paper, we consider some matrix Krylov subspace methods for solving ill-posed linear matrix equations, in particular those arising from the restoration of blurred and noisy images. Applying the well known Tikhonov regularization procedure leads to a Sylvester matrix equation that depends on the Tikhonov regularization parameter. We apply matrix versions of the well known Krylov subspace methods, namely the least squares (LSQR) and conjugate gradient (CG) methods, to get approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.
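    The effect of Tikhonov regularization is easiest to see in the vector form of the problem, min ||Ax − b||² + λ||x||², whose solution satisfies (AᵀA + λI)x = Aᵀb; the Sylvester form in the paper is the matrix-equation analogue. A small sketch with an ill-conditioned blur-like operator (direct solve instead of the paper's Krylov iterations, and all data invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Ill-conditioned Gaussian-blur-like operator A, a smooth true signal, noisy data.
n = 30
i = np.arange(n)
A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(0, 1e-3, n)

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via the regularized normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

naive = np.linalg.solve(A, b)   # amplifies the noise along tiny singular values
reg = tikhonov(A, b, 1e-4)
print(np.linalg.norm(naive - x_true), np.linalg.norm(reg - x_true))
```

    For large images, forming AᵀA is infeasible, which is why methods such as LSQR and CG, which need only products with A and Aᵀ, are the practical choice the paper pursues.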

  18. The event-driven constant volume method for particle coagulation dynamics

    Institute of Scientific and Technical Information of China (English)

    ZHAO HaiBo; ZHENG ChuGuang

    2008-01-01

    Monte Carlo (MC) method, which tracks small numbers of the dispersed simulation particles and then describes the dynamic evolution of large numbers of real particles, constitutes an important class of methods for the numerical solution of population balance modeling. Particle coagulation dynamics is a complex task for MC. Event-driven MC exhibits higher accuracy and efficiency than time-driven MC on the whole. However, the available event-driven MCs track the "equally weighted simulation particle population" and maintain the number of simulated particles within bounds at the cost of "regulating" the computational domain, which results in some constraints and drawbacks. This study designed the procedure of "differently weighted fictitious particle population" and the corresponding coagulation rule for differently weighted fictitious particles. A new event-driven MC method was then proposed to describe the coagulation dynamics between differently weighted fictitious particles, in which a "constant number scheme" and a "stepwise constant number scheme" were developed to maintain the number of fictitious particles within bounds as well as a constant computational domain. The MC is named the event-driven constant volume (EDCV) method. A quantitative comparison among several popular MCs shows that the EDCV method has advantages in computational precision and efficiency over other available MCs.
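    The "constant number" bookkeeping that EDCV builds on can be illustrated with a generic constant-number Monte Carlo for a constant coagulation kernel. This is a textbook scheme, not the authors' differently-weighted EDCV method: after each coagulation event a random survivor is duplicated so the simulated population size stays fixed.

```python
import random

# Constant-number MC for coagulation with a constant (size-independent)
# kernel: any pair is equally likely to coagulate. After merging, a random
# particle is copied into the freed slot so N stays constant.
def coagulate(volumes, events, rng):
    v = list(volumes)
    n = len(v)
    for _ in range(events):
        i, j = rng.sample(range(n), 2)   # pick a distinct pair
        v[i] = v[i] + v[j]               # merge particle j into i
        v[j] = v[rng.randrange(n)]       # duplicate a survivor -> N constant
    return v

rng = random.Random(1)
pop = coagulate([1.0] * 1000, events=500, rng=rng)
print(len(pop), sum(pop) / len(pop))     # N unchanged; mean volume grows
```

The duplication step effectively rescales the represented physical volume, which is the price the EDCV-style schemes pay to keep the simulated particle count bounded.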

  19. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation

    Science.gov (United States)

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.

  20. An efficient method for unfolding kinetic pressure driven VISAR data

    Institute of Scientific and Technical Information of China (English)

    M.Hess; K.Peterson; A.Harvey-Thompson

    2015-01-01

    Velocity Interferometer System for Any Reflector (VISAR) [Barker and Hollenbach, J. Appl. Phys. 43, 4669 (1972)] is a well-known diagnostic that is employed on many shock physics and pulsed-power experiments. With the VISAR diagnostic, the velocity on the surface of any metal flyer can be found. For most experiments employing VISAR, either a kinetic pressure [Grady, Mech. Mater. 29, 181 (1998)] or a magnetic pressure [Lemke et al., Intl. J. Impact Eng. 38, 480 (2011)] drives the motion of the flyer. Moreover, reliable prediction of the time-dependent pressure is often a critical component of understanding the physics of these experiments. Although VISAR can provide a precise measurement of a flyer's surface velocity, the real challenge in implementing this diagnostic is using this velocity to unfold the time-dependent pressure. The purpose of this paper is to elucidate a new method for quickly and reliably unfolding VISAR data.

  1. Evaluation of standoff distance method to determine the coronal magnetic field using CME-driven shocks

    Science.gov (United States)

    Suresh, K.; Shanmugaraju, A.; Syed Ibrahim, M.

    2016-11-01

    We have analyzed the propagation characteristics of four limb coronal mass ejections (CMEs) with their shocks. These CMEs were observed in 18 frames up to 18 solar radii using LASCO white light images. Gopalswamy and Yashiro (Astrophys. J. 736:L17, 2011) introduced the standoff distance method (SOD) to find the magnetic field in the corona using CME-driven shock. In this paper, we have used this technique to determine the magnetic field strength and to study the propagation/shock formation condition of these CMEs at 18 different locations. Since the thickness of shock sheath (standoff distance or SOD) is not constant around CME, we estimate the shock parameters and their variation in large and small SOD regions of the shock. The Mach number ranges from 1.7 to 2.8 and Alfvén speed varies from 197 to 857 km s^{-1}. Finally, we estimate the magnetic field variation in the corona. The magnetic field strength ranges from 4.9 to 26.2 mG from 8.3 to 17.5 solar radii. The estimated magnetic field strength in this study is consistent with the literature value (7.6 to 45.8 mG from Gopalswamy and Yashiro (Astrophys. J. 736:L17, 2011), and 6 to 105 mG from Kim et al. (Astrophys. J. 746:118, 2012)) and it smoothly follows the general coronal magnetic field profile.
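    The final step of the standoff distance method, converting an inferred Alfvén speed into a field strength, follows from B = v_A · sqrt(mu0 · rho). The sketch below assumes a fully ionized hydrogen corona (rho = n_e · m_p); the density value is an illustrative choice, not one taken from the paper.

```python
import math

# Convert Alfven speed to magnetic field, B = vA * sqrt(mu0 * rho),
# assuming a fully ionized hydrogen corona so rho = n_e * m_p.
mu0 = 4e-7 * math.pi          # vacuum permeability [H/m]
m_p = 1.6726e-27              # proton mass [kg]

def b_field_mG(v_alfven_km_s, n_e_cm3):
    rho = n_e_cm3 * 1e6 * m_p                  # mass density [kg/m^3]
    B_tesla = v_alfven_km_s * 1e3 * math.sqrt(mu0 * rho)
    return B_tesla * 1e7                       # 1 T = 1e4 G = 1e7 mG

# e.g. vA = 500 km/s at an assumed n_e = 1e4 cm^-3
print(round(b_field_mG(500.0, 1e4), 1))        # ~23 mG, within the quoted range
```

With these assumed inputs the result falls inside the 4.9 to 26.2 mG range reported in the abstract, which is a useful sanity check on the unit conversions.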

  2. Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images

    Directory of Open Access Journals (Sweden)

    Hirokazu Nosato

    2017-01-01

    Full Text Available Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis (UC). Thus, to assist UC diagnosis, a new technology is needed that can retrieve diagnosed cases similar to the target image from a store of past images showing various symptoms of the colonic mucosa. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy.

  3. Zeeman-Doppler Imaging : Old Problems and New Methods

    CERN Document Server

    Carroll, T A; Strassmeier, K G; Ilyin, I

    2009-01-01

    Zeeman-Doppler Imaging (ZDI) is a powerful inversion method to reconstruct stellar magnetic surface fields. The reconstruction process is usually solved by translating the inverse problem into a regularized least-squares or optimization problem. In this contribution we will emphasize that ZDI is an inherently non-linear problem and the corresponding regularized optimization is, like many non-linear problems, potentially prone to local minima. We show how this problem will be exacerbated by using an inadequate forward model. To facilitate a more consistent, fully radiative-transfer-driven approach to ZDI, we describe a two-stage strategy that consists of a principal component analysis (PCA) based line profile reconstruction and a fast approximate polarized radiative transfer method to synthesize local Stokes profiles. Moreover, we introduce a novel statistical inversion method based on artificial neural networks (ANN) which provides a fast calculation of a first-guess model and allows better physical constraints to be incorporated into the inversion process.

  4. Zeeman-Doppler imaging: old problems and new methods

    Science.gov (United States)

    Carroll, Thorsten A.; Kopf, Markus; Strassmeier, Klaus G.; Ilyin, Ilya

    2009-04-01

    Zeeman-Doppler Imaging (ZDI) is a powerful inversion method to reconstruct stellar magnetic surface fields. The reconstruction process is usually solved by translating the inverse problem into a regularized least-squares or optimization problem. In this contribution we will emphasize that ZDI is an inherently non-linear problem and the corresponding regularized optimization is, like many non-linear problems, potentially prone to local minima. We show how this problem will be exacerbated by using an inadequate forward model. To facilitate a more consistent, fully radiative-transfer-driven approach to ZDI, we describe a two-stage strategy that consists of a principal component analysis (PCA) based line profile reconstruction and a fast approximate polarized radiative transfer method to synthesize local Stokes profiles. Moreover, we introduce a novel statistical inversion method based on artificial neural networks (ANN) which provides a fast calculation of a first-guess model and allows better physical constraints to be incorporated into the inversion process.
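    The PCA-based line-profile reconstruction of the first stage rests on the fact that an ensemble of spectral line profiles is effectively low rank: projecting noisy profiles onto the leading principal components suppresses noise. The sketch below uses synthetic Gaussian absorption profiles and an arbitrary noise level; it illustrates only the PCA idea, not the authors' pipeline.

```python
import numpy as np

# PCA denoising of noisy line profiles: keep only the leading principal
# components of the profile ensemble. Profiles and noise are synthetic.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
depths = rng.uniform(0.3, 0.9, size=200)
clean = 1.0 - depths[:, None] * np.exp(-x**2 / 0.02)   # Gaussian absorption lines
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 2                                                   # leading components only
denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k]

err_noisy = np.mean((noisy - clean)**2)
err_pca = np.mean((denoised - clean)**2)
print(err_pca < err_noisy)                              # low-rank model suppresses noise
```

Because the clean profiles here vary along essentially one direction (line depth), two components capture the signal while discarding most of the 50-dimensional noise.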

  5. Schlieren High Speed Imaging on Fluid Flow in Liquid Induced by Plasma-driven Interfacial Forces

    Science.gov (United States)

    Lai, Janis; Foster, John

    2016-10-01

    Effective plasma-based water purification depends heavily on the transport of plasma-derived reactive species from the plasma into the liquid. Plasma interactions at the liquid-gas boundary are known to drive circulation in the bulk liquid. This forced circulation is not well understood. A 2-D plasma-in-liquid water apparatus is currently being investigated as a means to study the plasma-liquid interface, in order to understand not only reactive species flows but also plasma-driven fluid dynamic effects in the bulk fluid. Using Schlieren high speed imaging, plasma-induced density gradients near the interfacial region and into the bulk solution are measured to investigate the nature of these interfacial forces. Plasma-induced flow was also measured using particle image velocimetry. NSF CBET 1336375 and DOE DE-SC0001939.

  6. Probabilistic density function method for nonlinear dynamical systems driven by colored noise

    Energy Technology Data Exchange (ETDEWEB)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    2016-05-01

    We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integro-differential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified Large-Eddy-Diffusivity-type closure. Additionally, we introduce the generalized local linearization (LL) approximation for deriving a computable PDF equation in the form of a second-order partial differential equation (PDE). We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary auto-correlation time. We apply the proposed PDF method to the analysis of a set of Kramers equations driven by exponentially auto-correlated Gaussian colored noise to study the dynamics and stability of a power grid.
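    The kind of system treated here can be sampled directly: exponentially auto-correlated Gaussian noise is an Ornstein-Uhlenbeck (OU) process with correlation time tau. The Monte Carlo sketch below drives a linear relaxation with OU noise; parameters and the Euler-Maruyama discretization are illustrative assumptions, and none of the paper's PDF/closure machinery is reproduced.

```python
import numpy as np

# Ensemble simulation of dx/dt = -x + eta, where eta is OU colored noise
# with correlation time tau and stationary variance D/tau.
rng = np.random.default_rng(0)
tau, D = 0.5, 1.0                  # noise correlation time and intensity
dt, steps, n_paths = 1e-3, 5000, 2000

x = np.zeros(n_paths)
eta = np.zeros(n_paths)
for _ in range(steps):
    # OU update: d(eta) = -(eta/tau) dt + (sqrt(2 D)/tau) dW
    eta += -(eta / tau) * dt \
        + (np.sqrt(2 * D) / tau) * np.sqrt(dt) * rng.standard_normal(n_paths)
    x += (-x + eta) * dt

print(x.mean(), x.var())           # mean near 0; variance set by D and tau
```

A PDF method replaces such ensembles with a deterministic equation for the density of `x`; histograms of samples like these are the standard way to validate the closure.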

  7. Residual-driven online generalized multiscale finite element methods

    KAUST Repository

    Chung, Eric T.

    2015-09-08

    The construction of local reduced-order models via multiscale basis functions has been an area of active research. In this paper, we propose online multiscale basis functions which are constructed using the offline space and the current residual. Online multiscale basis functions are constructed adaptively in some selected regions based on our error indicators. We derive an error estimator which shows that the offline space needs to have certain properties to guarantee that an additional online multiscale basis function will decrease the error. This error decrease is independent of physical parameters, such as the contrast and multiple scales in the problem. The offline spaces are constructed using Generalized Multiscale Finite Element Methods (GMsFEM). We show that if one chooses a sufficient number of offline basis functions, one can guarantee that additional online multiscale basis functions will reduce the error independently of the contrast. We note that the construction of online basis functions is motivated by the fact that the offline space construction does not take into account distant effects. Using the residual information, we can incorporate the distant information provided the offline approximation satisfies certain properties. Theoretical and numerical results are presented in the paper. Our numerical results show that if the offline space is sufficiently large (in terms of its dimension), such that the coarse space contains all multiscale spectral basis functions corresponding to small eigenvalues, then the error reduction from adding an online multiscale basis function is independent of the contrast. We discuss various ways of computing online multiscale basis functions, including the use of small-dimensional offline spaces.

  8. Handbook of mathematical methods in imaging

    CERN Document Server

    2015-01-01

    The Handbook of Mathematical Methods in Imaging provides a comprehensive treatment of the mathematical techniques used in imaging science. The material is grouped into two central themes, namely, Inverse Problems (Algorithmic Reconstruction) and Signal and Image Processing. Each section within the themes covers applications (modeling), mathematics, numerical methods (using a case example) and open questions. Written by experts in the area, the presentation is mathematically rigorous. This expanded and revised second edition contains updates to existing chapters and 16 additional entries on important mathematical methods such as graph cuts, morphology, discrete geometry, PDEs, conformal methods, to name a few. The entries are cross-referenced for easy navigation through connected topics. Available in both print and electronic forms, the handbook is enhanced by more than 200 illustrations and an extended bibliography. It will benefit students, scientists and researchers in applied mathematics. Engineers and com...

  9. Image change detection systems, methods, and articles of manufacture

    Science.gov (United States)

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image to enable identification of differences between them.
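    The edge-image comparison step can be illustrated with a toy example: build gradient-magnitude "edge images" of a source and a target frame and threshold their difference. The synthetic images, the threshold, and the omission of the alignment step are all illustrative simplifications of the patented method.

```python
import numpy as np

# Edge-based change detection: compare gradient-magnitude edge images of two
# frames. A square that moves between frames shows up in the edge difference.
def edge_image(img):
    gy, gx = np.gradient(img.astype(float))   # np.gradient returns (d/dy, d/dx)
    return np.hypot(gx, gy)

src = np.zeros((32, 32)); src[8:16, 8:16] = 1.0      # square present here...
tgt = np.zeros((32, 32)); tgt[8:16, 20:28] = 1.0     # ...and moved here
diff = np.abs(edge_image(src) - edge_image(tgt))
changed = diff > 0.25                                # illustrative threshold
print(changed.any())                                 # change is detected
```

Comparing edge images rather than raw pixels makes the test robust to uniform brightness offsets between the two acquisitions, which is one motivation for the edge-image construction in the patent.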

  10. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).

  11. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
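    The Centroid-Contour Distance (CCD) feature named above is simple to state: the distances from the region centroid to points on the region boundary. The sketch below computes it for a toy square contour standing in for a flower region; the centroid-of-contour-points definition is one common variant and an assumption here.

```python
import math

# Centroid-Contour Distance (CCD): distances from the contour's centroid
# to each contour point. A square contour is used as a toy flower boundary.
def ccd(contour):
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    return [math.hypot(x - cx, y - cy) for x, y in contour]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # four corners of a unit square
dists = ccd(square)
print([round(d, 3) for d in dists])         # all equal: sqrt(0.5) ~ 0.707
```

A circle would give a constant CCD while an elongated petal outline gives a strongly varying one, which is why the CCD curve (and its histogrammed angle codes, ACH) discriminates flower shapes.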

  12. Image Inpainting Methods Evaluation and Improvement

    Directory of Open Access Journals (Sweden)

    Raluca Vreja

    2014-01-01

    Full Text Available With the growth of digital image processing and film archiving, the need for assisted or unsupervised restoration has required the development of a series of methods and techniques. Among them, image inpainting is maybe the most impressive and useful. Besides techniques based on partial differential equations or texture synthesis, many hybrid techniques have been proposed recently. The need for an analytical comparison, besides the visual one, urged us to perform the studies shown in the present paper. Starting with an overview of the domain, an evaluation of five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms have been presented, categorizing them according to the structure of the restored image. Based on these experiments, we have proposed an adaptation of Oliveira’s and Hadhoud’s algorithms, which perform well on images with natural defects.
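    The comparison metric used in the paper, PSNR, is a one-liner worth spelling out. The sketch below computes it for 8-bit images represented as nested lists; the two tiny "images" are illustrative.

```python
import math

# Peak signal-to-noise ratio for 8-bit images (peak value 255):
# PSNR = 10 * log10(peak^2 / MSE), in decibels.
def psnr(img_a, img_b, peak=255.0):
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    if mse == 0:
        return float("inf")        # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

original = [[100, 100], [100, 100]]
restored = [[100, 101], [99, 100]]
print(round(psnr(original, restored), 2))   # 51.14 dB
```

Higher PSNR means a restoration closer to the reference, which is how the five inpainting methods are ranked on the common benchmark.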

  13. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images.

    Directory of Open Access Journals (Sweden)

    Karin Wolffhechel

    Full Text Available Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions and these proportions were found to have relatively low predictive power. Here we employed a data-driven approach in which statistical models were built using principal components (PCs) derived from objectively defined shape and color characteristics in face images. The predictive power of these models was then compared with models based on previously studied facial proportions (perimeter-to-area ratio, width-to-height ratio, and cheek-to-jaw width). Models based on 2D shape-only PCs, color-only PCs, and 2D shape and color PCs combined each performed significantly and substantially better than models based on one or more of the previously studied facial proportions. A non-linear PC model considering both 2D shape and color PCs was the best predictor of BMI. These results highlight the utility of a "bottom-up", data-driven approach for assessing BMI from face images.

  14. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images.

    Science.gov (United States)

    Wolffhechel, Karin; Hahn, Amanda C; Jarmer, Hanne; Fisher, Claire I; Jones, Benedict C; DeBruine, Lisa M

    2015-01-01

    Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions and these proportions were found to have relatively low predictive power. Here we employed a data-driven approach in which statistical models were built using principal components (PCs) derived from objectively defined shape and color characteristics in face images. The predictive power of these models was then compared with models based on previously studied facial proportions (perimeter-to-area ratio, width-to-height ratio, and cheek-to-jaw width). Models based on 2D shape-only PCs, color-only PCs, and 2D shape and color PCs combined each performed significantly and substantially better than models based on one or more of the previously studied facial proportions. A non-linear PC model considering both 2D shape and color PCs was the best predictor of BMI. These results highlight the utility of a "bottom-up", data-driven approach for assessing BMI from face images.
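    The "bottom-up" pipeline, reducing raw measurements to principal components and then regressing BMI on the leading PCs, can be sketched end to end on synthetic data. All data below are synthetic; the feature dimension, PC count, and noise levels are illustrative assumptions, not the study's.

```python
import numpy as np

# Data-driven BMI prediction sketch: PCA on raw "face measurements",
# then linear regression of BMI on the leading PC scores.
rng = np.random.default_rng(0)
n, d = 300, 40
latent = rng.standard_normal(n)                      # hidden "adiposity" factor
X = np.outer(latent, rng.standard_normal(d)) \
    + 0.3 * rng.standard_normal((n, d))              # synthetic measurements
bmi = 25 + 4 * latent + rng.standard_normal(n)       # synthetic BMI values

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:5].T                                  # scores on 5 leading PCs

A = np.column_stack([np.ones(n), pcs])               # intercept + PC scores
coef, *_ = np.linalg.lstsq(A, bmi, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, bmi)[0, 1]
print(r)                                             # strong predicted-observed correlation
```

Because the synthetic measurements vary mainly along one latent direction, the first PC recovers the hidden factor and the regression predicts BMI well, mirroring the study's finding that PC models outperform single facial proportions.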

  15. High power ring methods and accelerator driven subcritical reactor application

    Energy Technology Data Exchange (ETDEWEB)

    Tahar, Malek Haj [Univ. of Grenoble (France)

    2016-08-07

    High power proton accelerators allow providing, by spallation reaction, the neutron fluxes necessary in the synthesis of fissile material, starting from Uranium 238 or Thorium 232. This is the basis of the concept of sub-critical operation of a reactor, for energy production or nuclear waste transmutation, with the objective of achieving a cleaner, safer and more efficient process than today’s technologies allow. Designing, building and operating a proton accelerator in the 500-1000 MeV energy range, CW regime, MW power class still remains a challenge nowadays. A limited number of installations at present achieve beam characteristics in that class, e.g., PSI in Villigen, a 590 MeV CW beam from a cyclotron, and SNS in Oak Ridge, a 1 GeV pulsed beam from a linear accelerator, in addition to projects such as the ESS in Europe, a 5 MW beam from a linear accelerator. Furthermore, coupling an accelerator to a sub-critical nuclear reactor is a challenging proposition: some of the key issues/requirements are the design of a spallation target to withstand high power densities as well as ensuring the safety of the installation. These two domains are the grounds of this PhD work, which focuses on high power ring methods in the frame of the KURRI FFAG collaboration in Japan: upgrading the installation towards high intensity is crucial to demonstrate the high beam power capability of FFAGs. Thus, modeling of the beam dynamics and benchmarking of different codes were undertaken to validate the simulation results. Experimental results revealed some major losses that need to be understood and eventually overcome. By developing analytical models that account for the field defects, major sources of imperfection in the design of scaling FFAGs were identified that explain the important tune variations resulting in the crossing of several betatron resonances. A new formula is derived to compute the tunes, and properties are established that characterize the effect of the field imperfections on the

  16. Circular SAR Optimization Imaging Method of Buildings

    Directory of Open Access Journals (Sweden)

    Wang Jian-feng

    2015-12-01

    Full Text Available The Circular Synthetic Aperture Radar (CSAR) can obtain the entire scattering properties of targets because of its great ability for 360° observation. In this study, an optimization imaging method for buildings with CSAR is proposed by applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering models and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results; from this comparison, the optimal azimuth coherent accumulation angle for CSAR imaging of buildings is obtained. In practice, the scattering directions of buildings are unknown; therefore, we divide the 360° CSAR echo into many overlapping small-angle echoes, each corresponding to a sub-aperture, and then perform an imaging procedure on each sub-aperture. The sub-aperture imaging results are then combined into an all-around image using incoherent fusion techniques. The polarimetry decomposition method is used to decompose the all-around image and further retrieve the edge information of buildings successfully. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.
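    Why fuse sub-apertures incoherently instead of summing the full circle coherently? A building facade flashes only over narrow aspect windows, and the phase differs between windows, so a full-circle coherent sum can cancel. The toy example below uses a synthetic per-angle scalar "echo", not real CSAR data, to show the effect.

```python
import numpy as np

# A scatterer visible in two 20-degree aspect windows with opposite phase:
# the full-circle coherent sum cancels, while summing per-sub-aperture
# magnitudes (incoherent fusion) preserves both flashes.
echo = np.zeros(360, dtype=complex)
echo[80:100] = 1.0           # flash at one aspect window
echo[200:220] = -1.0         # flash at another aspect, opposite phase

coherent = abs(echo.sum())                         # contributions cancel: 0
subs = echo.reshape(18, 20)                        # 18 sub-apertures of 20 deg
incoherent = sum(abs(row.sum()) for row in subs)   # both flashes kept: 40
print(coherent, incoherent)
```

This is the trade the paper exploits: coherent accumulation within each sub-aperture (where the target phase is stable) and incoherent fusion across sub-apertures (where it is not).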

  17. COMPARISON OF DIGITAL IMAGE STEGANOGRAPHY METHODS

    Directory of Open Access Journals (Sweden)

    S. A. Seyyedi

    2013-01-01

    Full Text Available Steganography is a method of hiding information in other information of a different format (the container). There are many steganography techniques with various types of container. On the Internet, digital images are the most popular and frequently used containers. We consider the main image steganography techniques and their advantages and disadvantages. We also identify the requirements of a good steganography algorithm and compare various such algorithms.
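    The most common image steganography technique covered by surveys like this one is least-significant-bit (LSB) embedding. The sketch below works on plain integer "pixels" rather than a real image container, which is an illustrative simplification.

```python
# LSB steganography: overwrite the least-significant bit of each cover pixel
# with one message bit. Distortion per pixel is at most 1 intensity level.
def embed(pixels, bits):
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]          # untouched tail of the cover

def extract(pixels, n):
    return [p & 1 for p in pixels[:n]]

cover = [142, 57, 200, 33, 90, 18]
secret = [1, 0, 1, 1]
stego = embed(cover, secret)
print(extract(stego, 4) == secret)                             # message recovered
print(max(abs(a - b) for a, b in zip(cover, stego)) <= 1)      # distortion <= 1 LSB
```

LSB embedding scores well on capacity and imperceptibility but poorly on robustness, exactly the kind of trade-off such comparisons quantify.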

  18. Active Learning in Context-Driven Stream Mining With an Application to Image Mining.

    Science.gov (United States)

    Tekin, Cem; van der Schaar, Mihaela

    2015-11-01

    We propose an image stream mining method in which images arrive with contexts (metadata) and need to be processed in real time by the image mining system (IMS), which needs to make predictions and derive actionable intelligence from these streams. After extracting the features of the image by preprocessing, IMS determines online the classifier to use on the extracted features to make a prediction using the context of the image. A key challenge associated with stream mining is that the prediction accuracy of the classifiers is unknown, since the image source is unknown; thus, these accuracies need to be learned online. Another key challenge of stream mining is that learning can only be done by observing the true label, but this is costly to obtain. To address these challenges, we model the image stream mining problem as an active, online contextual experts problem, where the context of the image is used to guide the classifier selection decision. We develop an active learning algorithm and show that it achieves regret sublinear in the number of images that have been observed so far. To further illustrate and assess the performance of our proposed methods, we apply them to diagnose breast cancer from the images of cellular samples obtained from the fine needle aspirate of breast mass. Our findings show that very high diagnosis accuracy can be achieved by actively obtaining only a small fraction of true labels through surgical biopsies. Other applications include video surveillance and video traffic monitoring.

  19. Region-based multisensor image fusion method

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Image fusion should consider the a priori knowledge of the source images to be fused, such as the characteristics of the images and the goal of image fusion; that is to say, knowledge about the input data and the application plays a crucial role. This paper is concerned with multiresolution (MR) image fusion. Considering the characteristics of the sensors (SAR, FLIR, etc.) and the goal of fusion, which is to achieve one image possessing both the contour features and the target region features, it seems more meaningful to combine features rather than pixels. A multisensor image fusion scheme based on K-means clustering and the steerable pyramid is presented. K-means clustering is used to segment out objects in FLIR images. The steerable pyramid is a multiresolution analysis method with a good property of extracting contour information at different scales. Comparisons are made with relevant existing techniques in the literature. The paper concludes with some examples illustrating the efficiency of the proposed scheme.
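    The K-means segmentation step can be sketched on pixel intensities alone: with k = 2 the clusters separate a hot FLIR-like target from the background. The synthetic image, the choice k = 2, and the plain Lloyd iterations are illustrative assumptions.

```python
import numpy as np

# 1-D K-means (Lloyd iterations) on pixel intensities, used to segment a
# bright "target" patch out of a synthetic FLIR-like image.
def kmeans_1d(values, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

img = np.zeros((16, 16)); img[4:8, 4:8] = 1.0           # 4x4 bright target
img += 0.05 * np.random.default_rng(1).standard_normal(img.shape)
labels, centers = kmeans_1d(img.ravel())
target = int(np.argmax(centers))                        # brighter cluster
mask = (labels == target).reshape(img.shape)
print(int(mask.sum()))                                  # pixels in the target region
```

In the fusion scheme, a mask like this selects the target regions contributed by the FLIR image, while the steerable pyramid supplies the multi-scale contour information.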

  20. Iterative Lavrentiev regularization for symmetric kernel-driven operator equations: with application to digital image restoration problems

    Institute of Scientific and Technical Information of China (English)

    WANG Yanfei; GU Xingfa; YU Tao; FAN Shufang

    2005-01-01

    The symmetric kernel-driven operator equations play an important role in mathematical physics, engineering, atmospheric image processing and remote sensing sciences. Such problems are usually ill-posed in the sense that even if a unique solution exists, the solution need not depend continuously on the input data. One common technique to overcome the difficulty is applying the Tikhonov regularization to the symmetric kernel operator equations, which is more generally called the Lavrentiev regularization. It has been shown that the iterative implementation of the Tikhonov regularization can improve the rate of convergence. Therefore, in this paper we study the iterative Lavrentiev regularization method in a similar way when applying it to symmetric kernel problems which appear frequently in applications, such as digital image restoration problems. We first prove the convergence property, and then, under the widely used Morozov discrepancy principle (MDP), we prove the regularity of the method. Numerical performance for digital image restoration is included to confirm the theory. It seems that the iterated Lavrentiev regularization with the MDP strategy is appropriate for solving symmetric kernel problems.
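    For a symmetric positive semi-definite system A x = b, iterated Lavrentiev regularization takes the form x_{k+1} = (A + a I)^{-1}(a x_k + b). The sketch below builds an ill-conditioned symmetric matrix and runs a few sweeps; the spectrum, parameter a, and noise level are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Iterated Lavrentiev regularization x_{k+1} = (A + a I)^{-1} (a x_k + b)
# on a synthetic ill-conditioned symmetric PSD system.
rng = np.random.default_rng(0)
n = 30
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))     # orthogonal eigenbasis
vals = np.logspace(0, -6, n)                         # ill-conditioned spectrum
A = Q @ np.diag(vals) @ Q.T                          # symmetric PSD kernel matrix
x_true = Q[:, :5].sum(axis=1)                        # smooth true solution
b = A @ x_true + 1e-8 * rng.standard_normal(n)       # slightly noisy data

a = 1e-3                                             # Lavrentiev parameter
M = A + a * np.eye(n)
x = np.zeros(n)
errs = []
for _ in range(5):
    x = np.linalg.solve(M, a * x + b)
    errs.append(np.linalg.norm(x - x_true))
print(errs[0], errs[-1])     # several sweeps beat one-shot Lavrentiev
```

Each sweep multiplies the error in an eigencomponent with eigenvalue lam by a/(lam + a), which is how iteration improves the convergence rate over the single-step method (at the cost of eventual semiconvergence on noisy data, hence the need for a stopping rule such as the MDP).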

  1. Case Study of CPT-based Design Methods for Axial Capacity of Driven Piles in Sand

    DEFF Research Database (Denmark)

    Thomassen, Kristina; Ibsen, Lars Bo; Andersen, Lars Vabbersgaard

    2012-01-01

    Today the design of onshore axially loaded driven piles in cohesionless soil is commonly made on the basis of CPT-based methods, because field investigations have shown strong correlation between the local shaft friction and the CPT cone resistance. However, the recommended design method for axially… Thus, several CPT-based methods have been proposed for the design of offshore driven piles in cohesionless soil, such as the UWA-05, ICP-05, and NGI-99 methods. This article treats a case study where the API method as well as the UWA-05 and NGI-99 methods are compared using CPT data from an offshore… location with dense to very dense sand. The design of the piles in the jacket foundation shows that API-00, for both the tension and the compression loads, predicted much longer piles than the CPT-based methods. Variation of the pile length and pile diameter shows that NGI-99 and UWA-05 predict almost…

  2. A ROBUST METHOD FOR FINGERPRINTING DIGITAL IMAGES

    Institute of Scientific and Technical Information of China (English)

    Saad Amer; Yi xian Yang

    2001-01-01

    In this paper, a method to fingerprint digital images is proposed, in which different watermarked copies with different identification strings are made. After determining the number of customers and the length of the watermark string, the method chooses some values inside the digital image using a characteristic function, and adds watermarks to these values in a way that can protect the product against attacks based on comparing two fingerprinted copies. The watermarks are a string of binary numbers, -1s and 1s. Every customer is distinguished by a series of 1s and -1s generated by a pseudo-random generator. The owner of the image can determine the number of customers and the length of the string, and the method adds further watermarking values to the watermark string to protect the product.
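    The core mechanics, a per-customer pseudo-random string of +1/-1 marks added at selected positions, can be sketched as below. The embedding positions, strength, and per-customer PRNG seeding are illustrative assumptions; the paper's characteristic function and anti-collusion construction are not reproduced.

```python
import random

# Toy fingerprint embedder: each customer gets a pseudo-random +1/-1 string,
# added (scaled by `strength`) at fixed positions inside the "image".
def fingerprint(image, customer_id, positions, strength=2):
    rng = random.Random(customer_id)           # per-customer PRNG seed (assumption)
    marks = [rng.choice([-1, 1]) for _ in positions]
    out = list(image)
    for pos, m in zip(positions, marks):
        out[pos] += strength * m
    return out, marks

image = [120, 45, 200, 90, 66, 150, 30, 180]
positions = [1, 3, 5, 7]                       # chosen "characteristic" pixels
copy_a, marks_a = fingerprint(image, customer_id=7, positions=positions)

# Owner-side recovery: subtract the original and undo the strength scaling.
recovered = [(copy_a[p] - image[p]) // 2 for p in positions]
print(recovered == marks_a)                    # customer string recovered
```

Because two customers' copies differ only at positions where their mark strings differ, comparing copies reveals at most those positions, which is the collusion scenario the scheme is built to resist.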

  3. Data-driven remaining useful life prognosis techniques: stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques and shows how to utilize condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods, and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications of prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  4. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather the basic issue of deconvolvability has been explored from a theoretical view point. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  5. Image-reconstruction methods in positron tomography

    CERN Document Server

    Townsend, David W; CERN. Geneva

    1993-01-01

    Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...

  6. Parallel imaging methods for phased array MRI

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Two parallel methods for magnetic resonance imaging (MRI) using radio frequency (RF) phased-array surface coils, named spatial local Fourier encoding (SLFE) and spatial RF encoding (SRFE), are presented. The MR signals are acquired from separate channels across the coils, each of which covers a sub-FOV (field-of-view) in a parallel fashion, and the acquired data are combined to form an image of the entire FOV. These two parallel encoding techniques can accelerate MR imaging greatly, yet associated artifacts may appear, although SLFE is an effective image reconstruction method which can reduce the localized artifacts to some degree. With SRFE, the RF coil array can be utilized for spatial encoding through a specialized coil design. The images are acquired in a snapshot with a high signal-to-noise ratio (SNR) without the costly gradient system, resulting in great cost savings. Both mutual induction and the aliasing effect of adjacent coils are critical to the success of SRFE. The strategies of the inverse source problem and the wavelet transform (WT) can be employed to eliminate them. Results simulated with MATLAB are reported.

  7. Intensity inhomogeneity correction of structural MR images: a data-driven approach to define input algorithm parameters

    Directory of Open Access Journals (Sweden)

    Marco Ganzetti

    2016-03-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CV_WM), the coefficient of variation of gray matter (CV_GM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CV_WM and CV_GM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images.
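The indirect metrics named in this record can be sketched directly. The snippet below assumes the commonly used definition CJV = (σ_WM + σ_GM) / |μ_WM − μ_GM| (lower means better tissue separation); the intensity samples are made up, and real use would draw them from the white- and gray-matter masks.

```python
from statistics import mean, pstdev

def cv(values):
    # Coefficient of variation within one tissue class: sigma / mu.
    return pstdev(values) / mean(values)

def cjv(wm, gm):
    # Coefficient of joint variation between white and gray matter:
    # (sigma_WM + sigma_GM) / |mu_WM - mu_GM|; lower is better.
    return (pstdev(wm) + pstdev(gm)) / abs(mean(wm) - mean(gm))

# Made-up intensity samples taken inside WM and GM masks.
wm = [150.0, 152.0, 148.0, 151.0, 149.0]
gm = [100.0, 103.0, 97.0, 101.0, 99.0]

cv_wm, cv_gm = cv(wm), cv(gm)
score = cjv(wm, gm)
```

Unlike CV_WM and CV_GM, which look at one tissue class at a time, the CJV penalizes overlap between the two intensity distributions, which is why the authors found it more robust for tuning correction parameters.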

  8. Intensity Inhomogeneity Correction of Structural MR Images: A Data-Driven Approach to Define Input Algorithm Parameters.

    Science.gov (United States)

    Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images.

  9. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Hahn, Amanda C.; Jarmer, Hanne Østergaard

    2015-01-01

    Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions, and these proportions were found to have relatively low predictive power. Here we employed a data-driven approach in which statistical models were built using principal components (PCs) derived from objectively defined shape and color characteristics in face images. The predictive power of these models was then compared with models based on previously studied facial proportions (perimeter-to-area ratio, width-to-height ratio, and cheek-to-jaw width). Models based on 2D shape-only PCs, color-only PCs, and 2D shape and color PCs combined each performed significantly and substantially better than models based on one or more of the previously studied facial...

  10. Photoswitchable Magnetic Resonance Imaging Contrast by Improved Light-Driven Coordination-Induced Spin State Switch.

    Science.gov (United States)

    Dommaschk, Marcel; Peters, Morten; Gutzeit, Florian; Schütt, Christian; Näther, Christian; Sönnichsen, Frank D; Tiwari, Sanjay; Riedel, Christian; Boretius, Susann; Herges, Rainer

    2015-06-24

    We present a fully reversible and highly efficient on-off photoswitching of magnetic resonance imaging (MRI) contrast with green (500 nm) and violet-blue (435 nm) light. The contrast change is based on intramolecular light-driven coordination-induced spin state switch (LD-CISSS), performed with azopyridine-substituted Ni-porphyrins. The relaxation time of the solvent protons in 3 mM solutions of the azoporphyrins in DMSO was switched between 3.5 and 1.7 s. The relaxivity of the contrast agent changes by a factor of 6.7. No fatigue or side reaction was observed, even after >100,000 switching cycles in air at room temperature. Electron-donating substituents at the pyridine improve the LD-CISSS in two ways: better photostationary states are achieved, and intramolecular binding is enhanced.

  11. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods to process "digital holograms" for Internet transmission, together with results.

  12. Information-Driven Blind Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks

    Science.gov (United States)

    2015-08-24

    Final Report: Information-Driven Blind Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks. The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an... Report Title: We investigated different methods for blind Doppler shift estimation and compensation in underwater acoustic wireless sensor networks

  13. A Frequency Splitting Method For CFM Imaging

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Jensen, Jørgen Arendt

    2006-01-01

    The performance of conventional CFM imaging will often be degraded due to the relatively low number of pulses (4-10) used for each velocity estimate. To circumvent this problem we propose a new method using frequency splitting (FS). The FS method uses broad band chirps as excitation pulses instead of narrow band pulses as in conventional CFM imaging. By appropriate filtration, the returned signals are divided into a number of narrow band signals which are approximately disjoint. After clutter filtering the velocities are found from each frequency band using a conventional autocorrelation estimator. In the simulation, the relative mean standard deviation of the velocity estimates over the vessel was 2.43% when using the FS method and the relative mean absolute bias was 1.84%. For the reference 8 oscillation pulse, the relative mean standard deviation over the vessel was 4...
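The "conventional autocorrelation estimator" that the FS method applies in each narrow frequency band is, in color flow imaging, typically the lag-one (Kasai) estimator. A minimal sketch, with a made-up IQ ensemble, carrier frequency, and pulse repetition frequency; the band-splitting itself is not modeled here.

```python
import cmath
import math

def kasai_velocity(iq, f0, prf, c=1540.0):
    # Lag-one autocorrelation across the slow-time ensemble of complex
    # IQ samples; the mean Doppler phase shift per pulse maps to the
    # axial velocity via v = c * prf * angle(R1) / (4 * pi * f0).
    r1 = sum(b * a.conjugate() for a, b in zip(iq, iq[1:]))
    return c * prf * cmath.phase(r1) / (4 * math.pi * f0)

# Synthetic ensemble: a pure Doppler tone with a known phase step per pulse.
f0, prf = 5e6, 4e3          # 5 MHz center frequency, 4 kHz PRF (made up)
true_v = 0.2                # m/s axial velocity (made up)
phase_step = 4 * math.pi * f0 * true_v / (1540.0 * prf)
iq = [cmath.exp(1j * phase_step * n) for n in range(8)]

est_v = kasai_velocity(iq, f0, prf)
```

On a noise-free tone the estimator recovers the velocity exactly; the point of the FS method is that averaging such estimates over several disjoint bands lowers the variance obtained from the same short ensemble.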

  14. Speckle Suppression Method for SAR Image

    Directory of Open Access Journals (Sweden)

    Jiming Guo

    2013-04-01

    In this study, a new speckle reduction method based on Bidimensional Empirical Mode Decomposition (BEMD) is proposed. In this method, the SAR image containing speckle noise is decomposed into a number of elementary components using BEMD; after screening, the extremal points undergo boundary-equivalent extension, and the residual continues to be extended at the boundary until screening is completed; finally, the image is reconstructed, which reduces the speckle noise. Experimental results show that this method has a good effect on suppressing speckle noise compared to the average filter, median filter and Gaussian filter, and has the advantage of sufficiently retaining edge and detail information while suppressing speckle noise.

  15. Neutron imaging with the short-pulse laser driven neutron source at the Trident laser facility

    Science.gov (United States)

    Guler, N.; Volegov, P.; Favalli, A.; Merrill, F. E.; Falk, K.; Jung, D.; Tybo, J. L.; Wilde, C. H.; Croft, S.; Danly, C.; Deppert, O.; Devlin, M.; Fernandez, J.; Gautier, D. C.; Geissel, M.; Haight, R.; Hamilton, C. E.; Hegelich, B. M.; Henzlova, D.; Johnson, R. P.; Schaumann, G.; Schoenberg, K.; Schollmeier, M.; Shimada, T.; Swinhoe, M. T.; Taddeucci, T.; Wender, S. A.; Wurden, G. A.; Roth, M.

    2016-10-01

    Emerging approaches to short-pulse laser-driven neutron production offer a possible gateway to compact, low-cost, and intense broad-spectrum sources for a wide variety of applications. They are based on energetic ions, driven by an intense short-pulse laser, interacting with a converter material to produce neutrons via breakup and nuclear reactions. Recent experiments performed with the high-contrast laser at the Trident laser facility of Los Alamos National Laboratory have demonstrated a laser-driven ion acceleration mechanism operating in the regime of relativistic transparency, featuring a volumetric laser-plasma interaction. This mechanism is distinct from previously studied ones that accelerate ions at the laser-target surface. The Trident experiments produced an intense beam of deuterons with an energy distribution extending above 100 MeV. This deuteron beam, when directed at a beryllium converter, produces a forward-directed neutron beam with ~5 × 10^9 n/sr in a single laser shot, primarily due to deuteron breakup. The neutron beam has a pulse duration on the order of a few nanoseconds with an energy distribution extending from a few hundred keV to almost 80 MeV. For the experiments on neutron-source spot-size measurements, our gated neutron imager was set up to select neutrons in the energy range of 2.5-35 MeV. The spot size of neutron emission at the converter was measured by two different imaging techniques, using a knife-edge and a penumbral aperture, in two different experimental campaigns. The neutron-source spot size was measured to be ~1 mm in both experiments. The measurements and analysis reported here give a spatial characterization for this type of neutron source for the first time. In addition, the forward modeling performed provides an empirical estimate of the spatial characteristics of the deuteron ion beam. These experimental observations, taken together, provide essential yet unique data to benchmark and verify theoretical work into the

  16. System engineering of the visible infrared imaging radiometer suite (VIIRS): improvements in imaging radiometry enabled by innovation driven by requirements

    Science.gov (United States)

    Puschell, Jeffery J.; Ardanuy, Philip E.; Schueler, Carl F.

    2016-09-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) is the new US operational environmental imaging spectroradiometer in polar orbit. The first VIIRS flight unit onboard Suomi NPP has been providing high-quality visible/infrared Earth observations since 2011. VIIRS provides an unprecedented combination of higher spatial resolution data across a wider area and more complete spectral coverage with onboard calibration than legacy instruments including AVHRR developed in the 1970s for NOAA, OLS developed in the 1970s for US DoD, MODIS developed in the 1990s for the NASA Terra and Aqua satellites and SeaWiFS developed for the commercial SeaStar system in the 1990s. A highly sensitive low light level day/night band (DNB) in VIIRS is improving weather forecasting around the world and providing new ways to observe the Earth from space. VIIRS replaces four legacy sensors with a single instrument enabled by innovations that were driven by requirements defined by NPOESS in the late 1990s. This paper highlights innovations developed by the VIIRS design team in response to challenging driving NPOESS requirements that resulted in remarkable improvements in operational remote sensing.

  17. Survey: interpolation methods in medical image processing.

    Science.gov (United States)

    Lehmann, T M; Gönner, C; Spitzer, K

    1999-11-01

    Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be within the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X rays was used. They have been selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods (p ...). The cubic 6 x 6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest
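In the convolution view this survey uses, an interpolator is a kernel h applied as f(x) = Σ_k s[k]·h(x − k). A minimal 1-D sketch with the two simplest kernels from the list, nearest neighbor and linear (the signal values are made up); the paper's windowed-sinc and cubic kernels plug into the same machinery with larger support.

```python
def h_nearest(x):
    # Nearest-neighbor kernel: a unit box of width 1.
    return 1.0 if -0.5 <= x < 0.5 else 0.0

def h_linear(x):
    # Linear kernel: a unit triangle of half-width 1.
    ax = abs(x)
    return 1.0 - ax if ax < 1.0 else 0.0

def interpolate(samples, x, kernel, support):
    # Convolution view of interpolation: f(x) = sum_k s[k] * h(x - k),
    # restricted to the kernel's finite support.
    k0, k1 = int(x) - support, int(x) + support + 1
    return sum(samples[k] * kernel(x - k)
               for k in range(max(k0, 0), min(k1, len(samples))))

signal = [0.0, 1.0, 4.0, 9.0, 16.0]           # samples of f(k) = k^2
mid = interpolate(signal, 1.5, h_linear, 1)    # midpoint between 1 and 4
near = interpolate(signal, 1.4, h_nearest, 1)  # snaps to the nearest sample
```

Both kernels are DC-constant (their shifted copies sum to one at every x), which the paper identifies as a property any successful interpolator should have.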

  18. Method of improving a digital image

    Science.gov (United States)

    Rahman, Zia-ur (Inventor); Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with ##EQU1## where S is the number of unique spectral bands included in said digital data, W_n is a weighting factor, and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
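The equation itself is elided in this record (##EQU1##), but the surrounding description — weighted surround functions F_n, a weighting factor W_n, and convolution — matches a center/surround adjustment. As a hedged reconstruction only, the sketch below assumes the form R_i(x,y) = Σ_n W_n·[log I_i(x,y) − log(F_n * I_i)(x,y)] with normalized Gaussian surrounds, shown in 1-D on made-up values; the patent's actual formula may differ.

```python
import math

def gaussian_surround(radius, sigma):
    # Normalized surround function F_n: weights sum to one.
    w = [math.exp(-(k * k) / (2 * sigma * sigma))
         for k in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def convolve(signal, kernel):
    # 1-D convolution with edge clamping.
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            k = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[k]
        out.append(acc)
    return out

def retinex(intensity, scales, weights):
    # Assumed form: R(x) = sum_n W_n * (log I(x) - log (F_n * I)(x)).
    out = [0.0] * len(intensity)
    for sigma, wn in zip(scales, weights):
        surround = convolve(intensity, gaussian_surround(3 * int(sigma), sigma))
        for i, (v, s) in enumerate(zip(intensity, surround)):
            out[i] += wn * (math.log(v) - math.log(s))
    return out

row = [10.0, 12.0, 11.0, 200.0, 11.0, 10.0, 12.0]   # made-up scan line
adjusted = retinex(row, scales=[1.0, 2.0], weights=[0.5, 0.5])
```

The log-ratio against the local surround compresses dynamic range: a uniform region maps to zero everywhere, while the bright outlier stands out relative to its neighborhood.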

  19. Test images for the maximum entropy image restoration method

    Science.gov (United States)

    Mackey, James E.

    1990-01-01

    One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.

  20. A Simple Method for Measuring the Verticality of Small-Diameter Driven Wells

    DEFF Research Database (Denmark)

    Kjeldsen, Peter; Skov, Bent

    1994-01-01

    The presence of stones, solid waste, and other obstructions can deflect small-diameter driven wells during installation, leading to deviations of the well from its intended position. This could lead to erroneous results, especially for measurements of ground water levels by water level meters. A simple method was developed to measure deviations from the intended positions of well screens and determine correction factors required for proper measurement of ground water levels in nonvertical wells. The method is based upon measurement of the hydrostatic pressure at the bottom of a water column, which is established in the well tube. The method was used to correct water level measurements in wells driven through a landfill site. Errors of up to 27 cm in water level were observed at the landfill site. The correction of the water level measurements had a significant effect on estimated local...
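A toy illustration of the principle, with made-up numbers (not the authors' procedure): the hydrostatic pressure at the bottom of the water column depends only on the column's vertical height, so comparing that height with the length measured along the (possibly deflected) well tube yields a correction factor for level readings.

```python
def vertical_height(pressure_pa, rho=1000.0, g=9.81):
    # Hydrostatic relation P = rho * g * h gives the true vertical height h
    # of the water column, regardless of how much the well tube is tilted.
    return pressure_pa / (rho * g)

along_tube = 2.00      # m of water column measured along the tube (made up)
pressure = 17182.0     # Pa read at the bottom of the column (made up)

h = vertical_height(pressure)    # true vertical height of the column
cos_tilt = h / along_tube        # correction factor for depth readings
```

If the well were perfectly vertical the two lengths would agree (cos_tilt = 1); here the shortfall reveals the deflection.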

  1. Multivariate semiparametric spatial methods for imaging data.

    Science.gov (United States)

    Chen, Huaihou; Cao, Guanqun; Cohen, Ronald A

    2017-04-01

    Univariate semiparametric methods are often used in modeling nonlinear age trajectories for imaging data, which may result in efficiency loss and lower power for identifying important age-related effects that exist in the data. As observed in multiple neuroimaging studies, age trajectories show similar nonlinear patterns for the left and right corresponding regions and for the different parts of a big organ such as the corpus callosum. To incorporate the spatial similarity information without assuming spatial smoothness, we propose a multivariate semiparametric regression model with a spatial similarity penalty, which constrains the variation of the age trajectories among similar regions. The proposed method is applicable to both cross-sectional and longitudinal region-level imaging data. We show the asymptotic rates for the bias and covariance functions of the proposed estimator and its asymptotic normality. Our simulation studies demonstrate that by borrowing information from similar regions, the proposed spatial similarity method improves the efficiency remarkably. We apply the proposed method to two neuroimaging data examples. The results reveal that accounting for the spatial similarity leads to more accurate estimators and better functional clustering results for visualizing brain atrophy pattern. Keywords: Functional clustering; Longitudinal magnetic resonance imaging (MRI); Penalized B-splines; Region of interest (ROI); Spatial penalty.

  2. Probabilistic density function method for nonlinear dynamical systems driven by colored noise.

    Science.gov (United States)

    Barajas-Solano, David A; Tartakovsky, Alexandre M

    2016-05-01

    We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integrodifferential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified large-eddy-diffusivity (LED) closure. In contrast to the classical LED closure, the proposed closure accounts for advective transport of the PDF in the approximate temporal deconvolution of the integrodifferential equation. In addition, we introduce the generalized local linearization approximation for deriving a computable PDF equation in the form of a second-order partial differential equation. We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary autocorrelation time. We apply the proposed PDF method to analyze a set of Kramers equations driven by exponentially autocorrelated Gaussian colored noise to study nonlinear oscillators and the dynamics and stability of a power grid. Numerical experiments show the PDF method is accurate when the noise autocorrelation time is either much shorter or longer than the system's relaxation time, while the accuracy decreases as the ratio of the two timescales approaches unity. Similarly, the PDF method accuracy decreases with increasing standard deviation of the noise.
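A Monte Carlo cross-check of the setting this record describes, not the authors' PDF method itself: a linear system driven by exponentially autocorrelated (Ornstein-Uhlenbeck) Gaussian colored noise, whose stationary variance σ²τ/(1 + τ) is known in closed form. All parameter values and the Euler-Maruyama step size are made up for illustration.

```python
import math
import random

def simulate(tau=0.5, sigma=1.0, dt=0.01, steps=200_000, seed=7):
    # Euler-Maruyama for x' = -x + xi(t), where xi is Ornstein-Uhlenbeck
    # ("colored") noise with correlation time tau and stationary variance
    # sigma^2, generated by xi' = -xi/tau + sqrt(2 sigma^2 / tau) * white noise.
    rng = random.Random(seed)
    x, xi, xs = 0.0, 0.0, []
    amp = math.sqrt(2.0 * sigma * sigma / tau)
    for _ in range(steps):
        xi += (-xi / tau) * dt + amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += (-x + xi) * dt
        xs.append(x)
    return xs[steps // 10:]          # discard the initial transient

xs = simulate()
mean_x = sum(xs) / len(xs)
var_x = sum(v * v for v in xs) / len(xs) - mean_x ** 2
# For this linear system the stationary variance is sigma^2 * tau / (1 + tau),
# i.e. 1/3 with the values above; the Monte Carlo estimate should land nearby.
```

The closed-form variance follows from integrating the Lorentzian noise spectrum through the system's transfer function, which makes this a convenient sanity check for any PDF-equation closure in the colored-noise regime.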

  3. Image processing methods to obtain symmetrical distribution from projection image.

    Science.gov (United States)

    Asano, H; Takenaka, N; Fujii, T; Nakamatsu, E; Tagami, Y; Takeshima, K

    2004-10-01

    Flow visualization and measurement of cross-sectional liquid distribution is very effective to clarify the effects of obstacles in a conduit on heat transfer and flow characteristics of gas-liquid two-phase flow. In this study, two methods to obtain cross-sectional distribution of void fraction are applied to vertical upward air-water two-phase flow. These methods need projection image only from one direction. Radial distributions of void fraction in a circular tube and a circular-tube annuli with a spacer were calculated by Abel transform based on the assumption of axial symmetry. On the other hand, cross-sectional distributions of void fraction in a circular tube with a wire coil whose conduit configuration rotates about the tube central axis periodically were measured by CT method based on the assumption that the relative distributions of liquid phase against the wire were kept along the flow direction.
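The axisymmetric reconstruction described above can be illustrated with a discrete Abel inversion ("onion peeling"): each projection ray sees known chord lengths through concentric rings, giving a triangular system that is solved from the outermost ring inward. The radial void-fraction profile below is made up, and the projection is generated by forward projection so the recovery can be checked exactly.

```python
import math

def path_lengths(n, dr=1.0):
    # L[i][j]: chord length of ray i (lateral offset y_i = (i + 0.5) * dr)
    # inside annular ring j, which spans radii [j*dr, (j+1)*dr].
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        y = (i + 0.5) * dr
        for j in range(i, n):
            r_out, r_in = (j + 1) * dr, j * dr
            outer = math.sqrt(r_out * r_out - y * y)
            inner = math.sqrt(max(r_in * r_in - y * y, 0.0))
            L[i][j] = 2.0 * (outer - inner)
    return L

def onion_peel(projection, L):
    # Back-substitute the triangular system ring by ring, outermost first.
    n = len(projection)
    alpha = [0.0] * n
    for i in range(n - 1, -1, -1):
        tail = sum(L[i][j] * alpha[j] for j in range(i + 1, n))
        alpha[i] = (projection[i] - tail) / L[i][i]
    return alpha

true_alpha = [0.8, 0.5, 0.2, 0.1]       # void fraction per ring (made up)
L = path_lengths(len(true_alpha))
projection = [sum(L[i][j] * true_alpha[j] for j in range(len(true_alpha)))
              for i in range(len(true_alpha))]
recovered = onion_peel(projection, L)
```

This is the discrete counterpart of the continuous Abel transform: one projection direction suffices precisely because axial symmetry is assumed.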

  4. Image Magnification Method Using Joint Diffusion

    Institute of Scientific and Technical Information of China (English)

    Zhong-Xuan Liu; Hong-Jian Wang; Si-Long Peng

    2004-01-01

    In this paper a new algorithm for image magnification is presented. Because linear magnification/interpolation techniques diminish the contrast and produce sawtooth effects, in recent years, many nonlinear interpolation methods, especially nonlinear diffusion based approaches, have been proposed to solve these problems. Two recently proposed techniques for interpolation by diffusion, forward and backward diffusion (FAB) and level-set reconstruction (LSR), cannot enhance the contrast and smooth edges simultaneously. In this article, a novel Partial Differential Equations (PDE) based approach is presented. The contributions of the paper include:firstly, a unified form of diffusion joining FAB and LSR is constructed to have all of their virtues; secondly, to eliminate artifacts of the joint diffusion, soft constraint takes the place of hard constraint presented by LSR;thirdly, the determination of joint coefficients, criterion for stopping time and color image processing are also discussed. The results demonstrate that the method is visually and quantitatively better than Bicubic, FAB and LSR.

  5. Image Deblurring with Krylov Subspace Methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2011-01-01

    Image deblurring, i.e., reconstruction of a sharper image from a blurred and noisy one, involves the solution of a large and very ill-conditioned system of linear equations, and regularization is needed in order to compute a stable solution. Krylov subspace methods are often ideally suited for this task: their iterative nature is a natural way to handle such large-scale problems, and the underlying Krylov subspace provides a convenient mechanism to regularize the problem by projecting it onto a low-dimensional "signal subspace" adapted to the particular problem. In this talk we consider the three Krylov subspace methods CGLS, MINRES, and GMRES. We describe their regularizing properties, and we discuss some computational aspects such as preconditioning and stopping criteria.
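A minimal CGLS sketch on a toy 1-D blurring problem (the operator and signal below are made up, and far smaller and better conditioned than a real deblurring problem). CGLS runs conjugate gradients on the normal equations AᵀAx = Aᵀb while touching only A and Aᵀ, and in the regularizing regime the talk describes, the iteration count itself acts as the regularization parameter.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def cgls(A, b, iters):
    # CGLS: conjugate gradients on A^T A x = A^T b without forming A^T A;
    # stopping after `iters` steps projects onto a low-dimensional
    # Krylov "signal subspace" (early stopping = regularization).
    At = transpose(A)
    x = [0.0] * len(A[0])
    r = list(b)                       # residual b - A x, with x = 0
    s = matvec(At, r)
    p = list(s)
    gamma = sum(v * v for v in s)
    for _ in range(iters):
        q = matvec(A, p)
        denom = sum(v * v for v in q)
        if gamma == 0.0 or denom == 0.0:   # converged to machine precision
            break
        alpha = gamma / denom
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec(At, r)
        gamma_new = sum(v * v for v in s)
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x

# A toy 1-D blur: each output sample is a weighted average of three neighbors.
n = 8
A = [[0.5 if i == j else 0.25 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
x_true = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]
b = matvec(A, x_true)
x_rec = cgls(A, b, iters=20)
```

On this small noise-free problem CGLS recovers the sharp signal essentially exactly; with noisy data one would instead stop early, before the small singular values start amplifying the noise.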

  6. Medical imaging using a laser-wakefield driven x-ray source

    Science.gov (United States)

    Cole, Jason; Wood, Jonathan; Lopes, Nelson; Poder, Kristjan; Kamperidis, Christos; Alatabi, Saleh; Bryant, Jonathan; Kneip, Stefan; Mecseki, Katalin; Norris, Dominic; Teboul, Lydia; Westerburg, Henrik; Abel, Richard; Jin, Andi; Symes, Dan; Mangles, Stuart; Najmudin, Zulfikar

    2016-10-01

    Laser-wakefield accelerators driven by high-intensity laser pulses are a proven centimetre-scale source of GeV electron beams. One of the proposed uses for these accelerators is the driving of compact hard x-ray synchrotron light sources. Such sources have been shown to be bright, have small source size and high photon energy, and are therefore interesting for imaging applications. By doubling the focal length at the Astra-Gemini laser facility of the Rutherford Appleton Laboratory, UK, we have significantly improved the average betatron x-ray flux compared to previous experiments. This fact, coupled to the stability of the radiation source, facilitated the acquisition of full 3D tomograms of hard bone tissue and soft mouse neonates, the latter requiring the recording of over 500 successive radiographs. Such multimodal performance is unprecedented in the betatron field and indicates the usefulness of these sources in clinical imaging applications, scalable to very high photon flux without compromising source size or photon energy.

  7. A Transformative Imaging Capability Using Laser Driven Multi MeV Photon Sources

    Science.gov (United States)

    Gautier, Donald; Espy, Michelle; Palaniyappan, Sasi; Mendez, Jacob; Nelson, Ronald; Hunter, James; Fernandez, Juan; los alamos national laboratory Team

    2016-10-01

    Recent results from the LANL Trident Laser demonstrate the practical use of a laser of this class (70 J, 600 fs) as a multi MeV photon source. The utilization of novel targets operating in the relativistic transparency regime of laser-plasmas has enabled this development. The electron population made from these targets, when coupled to a suitable high-Z converter foil placed near the laser target, produces an intense >1 MeV photon source with a small source size compared to conventional sources. When coupled with efficient imaging detectors, this laser-driven hard x-ray source provides new capabilities to address current non-destructive and dynamic testing problems that require a quantum jump in resolution. ``Flash'' (picosecond-pulse) photon imaging, micro-focus resolution enhancement, good object penetration, and magnification (4x) with sufficient dose (>10 Rad/sr) for practical application have all been demonstrated at the LANL Trident Laser, as summarized in this presentation.

  8. Enhancing the (MSLDIP) image steganographic method (ESLDIP method)

    Science.gov (United States)

    Seddik Saad, Al-hussien

    2011-10-01

    Message transmissions over the Internet still face data security problems. Therefore, secure and secret communication methods are needed for transmitting messages over the Internet. Cryptography scrambles the message so that it cannot be understood; however, it makes the message suspicious enough to attract an eavesdropper's attention. Steganography hides the secret message within other innocuous-looking cover files (i.e. images, music and video files) so that it cannot be observed [1]. The term steganography originates from the Greek root words "steganos'' and "graphein'', which literally mean "covered writing''. It is defined as the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, text and video files [3]. Steganographic techniques allow one party to communicate information to another without a third party even knowing that the communication is occurring. The ways to deliver these "secret messages" vary greatly [3]. Our proposed method is called Enhanced SLDIP (ESLDIP). The maximum hiding capacity (MHC) of the proposed ESLDIP method is higher than that of the previously proposed MSLDIP method, and its PSNR values are also higher than those of MSLDIP, which means that the image quality of the ESLDIP method is better than that of MSLDIP while the maximum hiding capacity (MHC) is also improved. The rest of this paper is organized as follows. In Section 2, steganography is discussed: lingo, carriers and types. In Section 3, related works are introduced. In Section 4, the proposed method is discussed in detail. In Section 5, the simulation results are given, and Section 6 concludes the paper.
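    The abstract does not specify the ESLDIP embedding rule. As a hedged baseline, a generic least-significant-bit (LSB) scheme illustrates the hiding-capacity versus image-quality (PSNR/MSE) trade-off that the MHC and PSNR comparisons refer to; this is NOT the paper's method:

    ```python
    import numpy as np

    def embed(cover, message_bits):
        """Write each message bit into the least significant bit of a pixel."""
        stego = cover.copy()
        flat = stego.ravel()  # view: writes propagate back into stego
        for i, bit in enumerate(message_bits):
            flat[i] = (flat[i] & 0xFE) | bit  # clear LSB, then set it to the bit
        return stego

    def extract(stego, n_bits):
        """Read the message back from the pixel LSBs."""
        return [int(v & 1) for v in stego.ravel()[:n_bits]]

    cover = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 cover image
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed(cover, bits)
    recovered = extract(stego, len(bits))

    # distortion stays at most 1 grey level per embedded bit
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    ```

    Schemes like MSLDIP/ESLDIP aim to improve on exactly these two numbers: more bits hidden per image (MHC) at lower distortion (higher PSNR).
    
    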

  9. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly compared with standard JPEG and JPEG-2000 while keeping competitive compression ratios.
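    The 4x4-block DCT-energy classification step can be sketched as follows. The orthonormal DCT construction is standard; the AC-energy measure and the threshold value are illustrative assumptions (the paper's actual threshold is not given in the abstract):

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis matrix (rows = frequencies)."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2 / n)

    def block_ac_energy(block):
        """2-D DCT of a 4x4 block; AC energy = total energy minus the DC term."""
        C = dct_matrix(4)
        coeffs = C @ block @ C.T
        return (coeffs ** 2).sum() - coeffs[0, 0] ** 2

    flat = np.full((4, 4), 100.0)                # smooth "picture/background" block
    text = np.zeros((4, 4)); text[::2] = 255.0   # high-contrast "text" block
    THRESH = 1000.0                              # illustrative decision threshold
    label_flat = "picture" if block_ac_energy(flat) < THRESH else "text"
    label_text = "picture" if block_ac_energy(text) < THRESH else "text"
    ```

    High-contrast text blocks concentrate large AC energy, so a simple threshold separates the two classes in one pass over the blocks.
    
    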

  10. Enhanced damage characterization using wavefield imaging methods

    Science.gov (United States)

    Blackshire, James L.

    2017-02-01

    Wavefield imaging methods are becoming a popular tool for characterizing and studying elastic field interactions in a wide variety of material systems. By using a scanning laser vibrometry detection system, the transient displacement fields generated by an ultrasonic source can be visualized and studied in detail. As a tool for quantitative nondestructive evaluation, the visualization of elastic waves provides a unique opportunity for understanding the scattering of elastic waves from incipient damage, where the detection and characterization of damage features using ultrasound can be enhanced in many instances. In the present effort, the detection and direct imaging of fatigue cracks in metals, and delaminations in composites, is described. An examination of the transient displacement fields near the scattering sites shows additional details related to the local damage morphology, which can be difficult to account for using traditional far-field NDE sensing methods. A combination of forward models and experimental wavefield imaging methods was used to explore enhancement opportunities for the full 3-dimensional characterization of surface-breaking cracks and delaminations.

  11. An a posteriori-driven adaptive Mixed High-Order method with application to electrostatics

    Science.gov (United States)

    Di Pietro, Daniele A.; Specogna, Ruben

    2016-12-01

    In this work we propose an adaptive version of the recently introduced Mixed High-Order method and showcase its performance on a comprehensive set of academic and industrial problems in computational electromagnetism. The latter include, in particular, the numerical modeling of comb-drive and MEMS devices. Mesh adaptation is driven by newly derived, residual-based error estimators. The resulting method has several advantageous features: It supports fairly general meshes, it enables arbitrary approximation orders, and has a moderate computational cost thanks to hybridization and static condensation. The a posteriori-driven mesh refinement is shown to significantly enhance the performance on problems featuring singular solutions, allowing to fully exploit the high-order of approximation.

  12. Magnetic Resonance Imaging Methods in Soil Science

    Science.gov (United States)

    Pohlmeier, A.; van Dusschoten, D.; Blümler, P.

    2009-04-01

    Magnetic Resonance Imaging (MRI) is a powerful technique to study water content, dynamics and transport in natural porous media. However, MRI systems and protocols have been developed mainly for medical purposes, i.e. for media with comparably high water contents and long relaxation times. In contrast, natural porous media like soils and rocks are characterized by much lower water contents, typically 0 benefit. Three strategies can be applied for the monitoring of water contents and dynamics in natural porous media: i) Dedicated high-field scanners (with vertical bore) allowing stronger gradients and faster switching so that shorter echo times can be realized. ii) Special measurement sequences using ultrashort rf- and gradient-pulses like single point imaging derivatives (SPI, SPRITE)(1) and multi-echo methods, which monitor series of echoes and allow for extrapolation to zero time(2). Hence, the loss of signal during the first echo period may be compensated to determine the initial magnetization (= water content) as well as relaxation time maps simultaneously. iii) Finally low field( strategies will be given. References: 1) Pohlmeier et al., Vadose Zone J. 7, 1010-1017 (2008) 2) Edzes et al., Magn. Res. Imag. 16, 185-196 (1998) 3) Raich H. and Blümler P., Concepts in Magn. Reson. B 23B, 16-25 (2004) 4) Pohlmeier et al., Magn. Res. Imag. doi:10.1016/j.mri.2008.06.007 (2008)

  13. Neural tree network method for image segmentation

    Science.gov (United States)

    Samaddar, Sumitro; Mammone, Richard J.

    1994-02-01

    We present an extension of the neural tree network (NTN) architecture that lets it solve multi-class classification problems with only binary fan-out. We then demonstrate its effectiveness by applying it in a method for image segmentation. Each node of the NTN is a multi-layer perceptron and has one output for each segment class. These outputs are treated as probabilities to compute a confidence value for the segmentation of each pixel. Segmentation results with high confidence values are deemed to be correct and not processed further, while those with moderate and low confidence values are deemed to be outliers by this node and passed down the tree to children nodes. These tend to be pixels on the boundaries of different regions. We have used a realistic case study of segmenting the pole, coil and painted coil regions of light bulb filaments (LBF). The input to the network is a set of the maximum, minimum and average intensities in radial slices of a circular window around a pixel, taken from a front-lit and a back-lit image of an LBF. Training is done with a composite image drawn from images of many LBFs. The results compare favorably with a traditional segmentation technique applied to the LBF test case.
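    The confidence-gated routing described above can be sketched with toy linear "nodes" in place of the paper's multi-layer perceptrons; the node weights and the confidence threshold below are assumptions for illustration only:

    ```python
    import numpy as np

    def node_predict(features, W):
        """One NTN node: class scores -> softmax probabilities."""
        z = features @ W
        p = np.exp(z - z.max())
        return p / p.sum()

    def ntn_classify(features, nodes, conf_thresh=0.8):
        """Accept a pixel's label at the first node whose max probability
        is confident enough; otherwise defer down the tree."""
        p = None
        for W in nodes:  # root first, then children
            p = node_predict(features, W)
            if p.max() >= conf_thresh:
                return int(p.argmax())
        return int(p.argmax())  # leaf decision is final

    root = np.zeros((2, 3))                      # uncertain root: uniform output
    child = np.array([[5.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])          # confident child node
    label = ntn_classify(np.array([1.0, 0.0]), [root, child])
    ```

    Boundary pixels, with ambiguous features, fall below the threshold at the root and are resolved deeper in the tree, which is the mechanism the abstract describes.
    
    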

  14. A flux monitoring method for easy and accurate flow rate measurement in pressure-driven flows.

    Science.gov (United States)

    Siria, Alessandro; Biance, Anne-Laure; Ybert, Christophe; Bocquet, Lydéric

    2012-03-07

    We propose a low-cost and versatile method to measure flow rate in microfluidic channels under pressure-driven flows, thereby providing a simple characterization of the hydrodynamic permeability of the system. The technique is inspired by the current monitoring method usually employed to characterize electro-osmotic flows, and makes use of the measurement of the time-dependent electric resistance inside the channel associated with a moving salt front. We have successfully tested the method in a micrometer-size channel, as well as in a complex microfluidic channel with a varying cross-section, demonstrating its ability to detect internal shape variations.
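    The principle can be sketched as a series-resistance model: the channel resistance is the sum of the salt-invaded segment (conductivity s1) and the remaining segment (s2), so the front position follows from a resistance reading and the flow rate from its time derivative. All geometry and conductivity numbers below are made-up illustrations, not values from the paper:

    ```python
    # channel length (m), cross-section (m^2), and the two conductivities (S/m)
    L, A = 1e-2, 1e-9
    s1, s2 = 1.0, 0.1

    def resistance(x):
        """Series resistance for a salt front at position x along the channel."""
        return x / (s1 * A) + (L - x) / (s2 * A)

    def front_position(R):
        """Invert R(x), which is linear in x."""
        return (R - L / (s2 * A)) / (1.0 / (s1 * A) - 1.0 / (s2 * A))

    # two resistance readings dt apart give the front velocity, hence Q = A * v
    dt = 1.0
    x1, x2 = 2e-3, 3e-3
    v = (front_position(resistance(x2)) - front_position(resistance(x1))) / dt
    Q = A * v
    ```

    For a varying cross-section, R(x) is no longer linear and its shape itself encodes the internal geometry, which is how the method detects shape variations.
    
    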

  15. Method and apparatus for atomic imaging

    Science.gov (United States)

    Saldin, Dilano K.; de Andres Rodriquez, Pedro L.

    1993-01-01

    A method and apparatus for three dimensional imaging of the atomic environment of disordered adsorbate atoms are disclosed. The method includes detecting and measuring the intensity of a diffuse low energy electron diffraction pattern formed by directing a beam of low energy electrons against the surface of a crystal. Data corresponding to reconstructed amplitudes of a wave form is generated by operating on the intensity data. The data corresponding to the reconstructed amplitudes is capable of being displayed as a three dimensional image of an adsorbate atom. The apparatus includes a source of a beam of low energy electrons and a detector for detecting the intensity distribution of a DLEED pattern formed at the detector when the beam of low energy electrons is directed onto the surface of a crystal. A device responsive to the intensity distribution generates a signal corresponding to the distribution which represents a reconstructed amplitude of a wave form and is capable of being converted into a three dimensional image of the atomic environment of an adsorbate atom on the crystal surface.

  16. A nuclear method to authenticate Buddha images

    Science.gov (United States)

    Khaweerat, S.; Ratanatongchai, W.; Channuie, J.; Wonglee, S.; Picha, R.; Promping, J.; Silva, K.; Liamsuwan, T.

    2015-05-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people use their individual skills to make the justification, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal respectively the structural and elemental profiles of small Buddha images. For NR, a thermal neutron flux of 105 n cm-2s-1 was applied. NAAR needed a higher neutron flux of 1012 n cm-2 s-1 to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples. Similarity of the profiles played a key role in the classification of the samples. The results provided visual evidence to enhance the reliability of authenticity approval. The method can be further developed for routine practice, which could impact thousands of customers in Thailand.

  17. Data-driven fault detection for industrial processes canonical correlation analysis and projection based methods

    CERN Document Server

    Chen, Zhiwen

    2017-01-01

    Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With the ever-increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in the past decades, its application to large-scale industrial processes is limited because it is difficult to build accurate models. Furthermore, motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed from the perspectives of process input and output data, less engineering effort and wide application scope. For the performance evaluation of FD methods, a new index is also developed. Contents: A New Index for Performance Evaluation of FD Methods; CCA-based FD Method for the Monitoring of Stationary Processes; Projection-based FD Method for the Monitoring of Dynamic Processes; Benchmark Study and Real-Time Implementat...

  18. Research on pavement crack recognition methods based on image processing

    Science.gov (United States)

    Cai, Yingchun; Zhang, Yamin

    2011-06-01

    In order to briefly review and analyze pavement crack recognition methods and identify the current problems in pavement crack image processing, popular crack image processing methods such as the neural network method, the morphology method, the fuzzy logic method and traditional image processing are discussed, and some effective solutions to those problems are presented.

  19. Corner-Space Renormalization Method for Driven-Dissipative Two-Dimensional Correlated Systems.

    Science.gov (United States)

    Finazzi, S; Le Boité, A; Storme, F; Baksic, A; Ciuti, C

    2015-08-21

    We present a theoretical method to study driven-dissipative correlated quantum systems on lattices with two spatial dimensions (2D). The steady-state density matrix of the lattice is obtained by solving the master equation in a corner of the Hilbert space. The states spanning the corner space are determined through an iterative procedure, using eigenvectors of the density matrix of smaller lattice systems, merging in real space two lattices at each iteration and selecting M pairs of states by maximizing their joint probability. The accuracy of the results is then improved by increasing the dimension M of the corner space until convergence is reached. We demonstrate the efficiency of such an approach by applying it to the driven-dissipative 2D Bose-Hubbard model, describing lattices of coupled cavities with quantum optical nonlinearities.

  20. A Simulation Method for High-Cycle Fatigue-Driven Delamination using a Cohesive Zone Model

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Turon, A.; Lindgaard, Esben;

    2016-01-01

    A novel computational method for simulating fatigue-driven mixed-mode delamination cracks in laminated structures under cyclic loading is presented. The proposed fatigue method is based on linking a cohesive zone model for quasi-static crack growth with a Paris' law-like model, and it does not rely on parameter fitting of any kind. The method has been implemented as a zero-thickness eight-node interface element for Abaqus and as a spring element for a simple finite element model in MATLAB. It has been validated in simulations of mode I, mode II, and mixed-mode crack loading for both self-similar and non-self-similar crack propagation. The method produces highly accurate results compared with currently available methods and is capable of simulating general mixed-mode non-self-similar crack growth problems.
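    The Paris' law-like ingredient can be sketched as a cycle-by-cycle crack-growth integration driven by the energy-release-rate range. The coefficients and load level below are illustrative assumptions, not values fitted or derived in the paper:

    ```python
    # Paris'-law-style growth: da/dN = C * (dG / G_c)^m
    C_paris = 1e-4   # growth coefficient (mm/cycle), illustrative
    m = 3.0          # Paris exponent, illustrative
    G_c = 0.5        # critical energy release rate (N/mm), illustrative
    dG = 0.2         # applied energy-release-rate range (N/mm), illustrative

    a = 1.0                          # initial crack length (mm)
    for _ in range(10_000):          # integrate over 10^4 load cycles
        a += C_paris * (dG / G_c) ** m
    growth = a - 1.0                 # total fatigue crack extension (mm)
    ```

    In the paper's scheme, the cohesive zone model supplies the mixed-mode energy release rate that feeds such a growth law at each interface point.
    
    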

  1. Energy-Driven Kinetic Monte Carlo Method and Its Application in Fullerene Coalescence.

    Science.gov (United States)

    Ding, Feng; Yakobson, Boris I

    2014-09-04

    Mimicking the conventional barrier-based kinetic Monte Carlo simulation, an energy-driven kinetic Monte Carlo (EDKMC) method was developed to study the structural transformation of carbon nanomaterials. The new method is many orders of magnitude faster than standard molecular dynamics or Monte Carlo (MC) simulations and thus allows us to explore rare events within a reasonable computational time. As an example, the temperature dependence of fullerene coalescence was studied. The simulation, for the first time, revealed that short capped single-walled carbon nanotubes (SWNTs) appear as low-energy metastable structures during the structural evolution.
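    An energy-driven acceptance rule of the Metropolis type can be sketched as follows; this illustrates the general idea of accepting moves by energy change rather than by barrier, and does not reproduce the paper's actual carbon-bond move set or energetics:

    ```python
    import math
    import random

    def accept(dE, kT, rng):
        """Accept a trial move with probability min(1, exp(-dE/kT))."""
        if dE <= 0:
            return True
        return rng.random() < math.exp(-dE / kT)

    rng = random.Random(42)
    kT = 0.1  # illustrative temperature in eV

    # downhill moves are always taken; uphill moves become rare at low kT
    n_accept_low = sum(accept(-0.5, kT, rng) for _ in range(1000))
    n_accept_high = sum(accept(+0.5, kT, rng) for _ in range(1000))
    ```

    The temperature dependence seen in the fullerene-coalescence study enters through exactly this kT factor in the uphill acceptance probability.
    
    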

  2. Segmentation of thalamus from MR images via task-driven dictionary learning

    Science.gov (United States)

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D.; Prince, Jerry L.

    2016-03-01

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  3. A numerical method for shock driven multiphase flow with evaporating particles

    Science.gov (United States)

    Dahal, Jeevan; McFarland, Jacob A.

    2017-09-01

    A numerical method for predicting the interaction of active, phase-changing particles in a shock-driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single particle simulation with the analytical solution. Furthermore, simple single-particle-parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particle sizes the vorticity deposition is decreased compared to the single-particle-size case at the mean diameter.
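    The single-particle validation mentioned above can be sketched for the velocity equation: under a Stokes-drag model du_p/dt = (u_g - u_p)/tau, the analytical solution u_p(t) = u_g (1 - exp(-t/tau)) provides the reference for a numerical integration. The response time, gas velocity, and time step are illustrative assumptions:

    ```python
    import math

    tau = 1e-3    # particle velocity response time (s), illustrative
    u_g = 10.0    # post-shock gas velocity (m/s), illustrative
    dt = 1e-5
    n_steps = 100  # integrate to t = 1 ms = one response time

    # explicit integration of du_p/dt = (u_g - u_p) / tau
    u_p = 0.0
    for _ in range(n_steps):
        u_p += dt * (u_g - u_p) / tau

    # analytical solution for constant gas velocity
    u_exact = u_g * (1.0 - math.exp(-n_steps * dt / tau))
    rel_err = abs(u_p - u_exact) / u_exact
    ```

    The same relaxation structure (with its own time scale) governs the particle temperature history, so both validations follow this pattern.
    
    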

  4. Biomedical image understanding methods and applications

    CERN Document Server

    Lim, Joo-Hwee; Xiong, Wei

    2015-01-01

    A comprehensive guide to understanding and interpreting digital images in medical and functional applications Biomedical Image Understanding focuses on image understanding and semantic interpretation, with clear introductions to related concepts, in-depth theoretical analysis, and detailed descriptions of important biomedical applications. It covers image processing, image filtering, enhancement, de-noising, restoration, and reconstruction; image segmentation and feature extraction; registration; clustering, pattern classification, and data fusion. With contributions from ex

  5. Digital Watermarking Method Warranting the Lower Limit of Image Quality of Watermarked Images

    Directory of Open Access Journals (Sweden)

    Iwata Motoi

    2010-01-01

    Full Text Available We propose a digital watermarking method warranting the lower limit of the image quality of watermarked images. The proposed method controls the degradation of a watermarked image by using a lower limit image. The lower limit image is the image of the worst quality that users can permit. The proposed method accepts any lower limit image and does not require it at extraction. Therefore lower limit images can be decided flexibly. In this paper, we introduce a 2-dimensional human visual MTF model as an example of obtaining lower limit images. We also use JPEG-compressed images of quality 75% and 50% as lower limit images. We investigate the performance of the proposed method through experiments. Moreover, we compare the proposed method using three types of lower limit images with the existing method in view of the tradeoff between PSNR and the robustness against JPEG compression.
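    One hedged way to realize the quality warranty is to clamp each watermarked pixel's deviation from the cover so it never exceeds the deviation of the lower limit image; this is an illustrative interpretation of the control mechanism, not the paper's actual embedding algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (16, 16)).astype(float)
    # lower limit image: the worst quality the user permits (toy stand-in)
    lower = cover + rng.normal(0, 8, cover.shape)
    # toy additive watermark signal
    mark = 12.0 * np.sign(rng.standard_normal(cover.shape))

    watermarked = cover + mark
    # clamp the per-pixel deviation to the lower-limit deviation
    limit = np.abs(lower - cover)
    dev = np.clip(watermarked - cover, -limit, limit)
    watermarked = cover + dev

    max_dev = np.abs(watermarked - cover).max()
    ```

    By construction the watermarked image can never be degraded beyond the lower limit image, pixel by pixel, which is the warranty property the title refers to.
    
    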

  6. An Effective Method of Image Retrieval using Image Mining Techniques

    CERN Document Server

    Kannan, A; Anbazhagan, N; 10.5121/ijma.2010.2402

    2010-01-01

    Research scholars all over the world currently have a keen interest in the area of data mining. In particular, [13] mining image data is one of the essential tasks in the present scenario, since image data plays a vital role in every aspect of the system, such as business for marketing, hospital for surgery, engineering for construction, Web for publication and so on. The other area in the image mining system is Content-Based Image Retrieval (CBIR), which performs retrieval based on the similarity defined in terms of extracted features with more objectiveness. The drawback of CBIR is that the features of the query image alone are considered. Hence, a new technique called image retrieval based on optimum clusters is proposed for improving user interaction with image retrieval systems by fully exploiting the similarity information. The index is created by describing the images according to their color characteristics, with compact feature vectors, that represent typical co...

  7. Combining knowledge- and data-driven methods for de-identification of clinical narratives.

    Science.gov (United States)

    Dehghan, Azad; Kovacevic, Aleksandar; Karystianis, George; Keane, John A; Nenadic, Goran

    2015-12-01

    A recent promise to access unstructured clinical data from electronic health records at large scale has revitalized the interest in automated de-identification of clinical notes, which includes the identification of mentions of Protected Health Information (PHI). We describe the methods developed and evaluated as part of the i2b2/UTHealth 2014 challenge to identify PHI defined by 25 entity types in longitudinal clinical narratives. Our approach combines knowledge-driven (dictionaries and rules) and data-driven (machine learning) methods with a large range of features to address de-identification of specific named entities. In addition, we have devised a two-pass recognition approach that creates a patient-specific run-time dictionary from the PHI entities identified in the first step with high confidence, which is then used in the second pass to identify mentions that lack specific clues. The proposed method achieved overall micro F1-measures of 91% on strict and 95% on token-level evaluation on the test dataset (514 narratives). Whilst most PHI entities can be reliably identified, particularly challenging were mentions of Organizations and Professions. Still, the overall results suggest that automated text mining methods can be used to reliably process clinical notes to identify personal information, thus providing a crucial step in large-scale de-identification of unstructured data for further clinical and epidemiological studies.
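    The two-pass idea can be sketched in a few lines: a high-confidence rule seeds a patient-specific run-time dictionary in pass one, and pass two flags bare mentions of those entries even without contextual clues. The rule and the toy note below are illustrative, not the challenge system's actual patterns:

    ```python
    import re

    note = "Pt John Smith seen 2014-01-02. Smith reports improvement."

    # pass 1: a high-confidence contextual pattern ("Pt <Name>") seeds
    # the patient-specific run-time dictionary
    first_pass = re.findall(r"Pt ([A-Z][a-z]+ [A-Z][a-z]+)", note)
    run_time_dict = set()
    for name in first_pass:
        run_time_dict.update(name.split())

    # pass 2: flag any token matching a dictionary entry, even without clues
    phi_spans = [tok for tok in re.findall(r"[A-Za-z]+", note)
                 if tok in run_time_dict]
    ```

    The second "Smith", which lacks the "Pt" clue, is caught only because of the run-time dictionary built in the first pass.
    
    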

  8. Second-order accurate finite volume method for well-driven flows

    CERN Document Server

    Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan

    2013-01-01

    We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still not even first order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
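    The logarithmic near-well behaviour that motivates both corrections can be sketched with a generic Peaceman-type well-flux formula, which relates the cell-centred head to the well-bore head through an equivalent radius. The numbers below are illustrative and this is the classical correction, not the paper's second-order scheme:

    ```python
    import math

    def well_flux(T, h_cell, h_well, h_grid, r_well):
        """Peaceman-type well flux: the cell-centred head h_cell is taken to
        equal the radial solution at the equivalent radius r_eq ~ 0.2*h."""
        r_eq = 0.2 * h_grid  # Peaceman equivalent radius for a square cell
        return 2 * math.pi * T * (h_cell - h_well) / math.log(r_eq / r_well)

    # illustrative transmissivity, heads, grid spacing, and well radius
    q = well_flux(T=1e-3, h_cell=10.0, h_well=8.0, h_grid=10.0, r_well=0.1)
    ```

    Without such a logarithmic correction, a linear scheme evaluated at the cell centre badly overestimates the near-well head and corrupts the total well flux, which is the failure mode the abstract describes.
    
    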

  9. Characterisation of deuterium spectra from laser driven multi-species sources by employing differentially filtered image plate detectors in Thomson spectrometers

    CERN Document Server

    Alejo, A; Ahmed, H; Krygier, A G; Doria, D; Clarke, R; Fernandez, J; Freeman, R R; Fuchs, J; Green, A; Green, J S; Jung, D; Kleinschmidt, A; Lewis, C L S; Morrison, J T; Najmudin, Z; Nakamura, H; Nersisyan, G; Norreys, P; Notley, M; Oliver, M; Roth, M; Ruiz, J A; Vassura, L; Zepf, M; Borghesi, M

    2014-01-01

    A novel method for characterising the full spectrum of deuteron ions emitted by laser driven multi-species ion sources is discussed. The procedure is based on using differential filtering over the detector of a Thomson parabola ion spectrometer, which enables discrimination of deuterium ions from heavier ion species with the same charge-to-mass ratio (such as C6+, O8+, etc.). Commonly used Fuji image plates were used as detectors in the spectrometer, whose absolute response to deuterium ions over a wide range of energies was calibrated by using slotted CR-39 nuclear track detectors. A typical deuterium ion spectrum diagnosed in a recent experimental campaign is presented.

  10. Beam transient analyses of Accelerator Driven Subcritical Reactors based on neutron transport method

    Energy Technology Data Exchange (ETDEWEB)

    He, Mingtao; Wu, Hongchun [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Wang, Kunpeng [Nuclear and Radiation Safety Center, PO Box 8088, Beijing 100082 (China); Li, Xunzhao; Zhou, Shengcheng [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China)

    2015-12-15

    Highlights: • A transport-based kinetics code for Accelerator Driven Subcritical Reactors is developed. • The performance of different kinetics methods adapted to the ADSR is investigated. • The impacts of neutronic parameters deteriorating with fuel depletion are investigated. - Abstract: The Accelerator Driven Subcritical Reactor (ADSR) is almost external source dominated since there is no additional reactivity control mechanism in most designs. This paper focuses on beam-induced transients with an in-house developed dynamic analysis code. The performance of different kinetics methods adapted to the ADSR is investigated, including the point kinetics approximation and space–time kinetics methods. Then, the transient responses to beam trip and beam overpower are calculated and analyzed for an ADSR design dedicated to minor actinides transmutation. The impacts of some safety-related neutronics parameters deteriorating with fuel depletion are also investigated. The results show that the power distribution varying with burnup leads to large differences in temperature responses during transients, while the impacts of kinetic parameters and feedback coefficients are not very obvious. Classification: Core physics.
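    The point kinetics approximation of a beam-trip transient can be sketched with one delayed-neutron group and an external source term: the subcritical power is held up by the source and collapses towards the delayed-neutron level when the beam trips. All parameter values below are illustrative, not from the paper's ADSR design:

    ```python
    rho = -0.03    # reactivity (subcritical), illustrative
    beta = 0.0065  # delayed-neutron fraction
    Lam = 1e-5     # prompt-neutron generation time (s)
    lam = 0.08     # effective precursor decay constant (1/s)
    S0 = 1e5       # external (beam) source strength, arbitrary units

    def step(n, C, S, dt):
        """Explicit step of one-group point kinetics with external source S."""
        dn = ((rho - beta) / Lam) * n + lam * C + S
        dC = (beta / Lam) * n - lam * C
        return n + dt * dn, C + dt * dC

    # source-driven equilibrium: n_eq = S*Lambda/(-rho), precursors in balance
    n = S0 * Lam / (-rho)
    C = beta * n / (lam * Lam)

    dt = 1e-6
    for _ in range(200_000):      # 0.2 s of transient with the beam off (S = 0)
        n, C = step(n, C, 0.0, dt)

    power_fraction = n / (S0 * Lam / (-rho))  # power relative to pre-trip level
    ```

    The prompt drop to roughly beta/(beta - rho) of nominal power, followed by a slow delayed-neutron decay, is the generic beam-trip signature such kinetics codes resolve.
    
    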

  11. Change Detection in Synthetic Aperture Radar Images Using a Multiscale-Driven Approach

    Directory of Open Access Journals (Sweden)

    Olaniyi A. Ajadi

    2016-06-01

    Full Text Available Despite the significant progress that was achieved throughout the recent years, to this day, automatic change detection and classification from synthetic aperture radar (SAR images remains a difficult task. This is, in large part, due to (a the high level of speckle noise that is inherent to SAR data; (b the complex scattering response of SAR even for rather homogeneous targets; (c the low temporal sampling that is often achieved with SAR systems, since sequential images do not always have the same radar geometry (incident angle, orbit path, etc.; and (d the typically limited performance of SAR in delineating the exact boundary of changed regions. With this paper we present a promising change detection method that utilizes SAR images and provides solutions for these previously mentioned difficulties. We will show that the presented approach enables automatic and high-performance change detection across a wide range of spatial scales (resolution levels. The developed method follows a three-step approach of (i initial pre-processing; (ii data enhancement/filtering; and (iii wavelet-based, multi-scale change detection. The stand-alone property of our approach is the high flexibility in applying the change detection approach to a wide range of change detection problems. The performance of the developed approach is demonstrated using synthetic data as well as a real-data application to wildfire progression near Fairbanks, Alaska.
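    The multiscale detection idea can be sketched with a log-ratio operator smoothed at dyadic scales; the Haar-style 2x2 block averaging below stands in for the paper's wavelet decomposition, and the synthetic speckled images and threshold are assumptions:

    ```python
    import numpy as np

    def block_average(img):
        """One dyadic scale: average each non-overlapping 2x2 block."""
        return 0.25 * (img[::2, ::2] + img[1::2, ::2]
                       + img[::2, 1::2] + img[1::2, 1::2])

    rng = np.random.default_rng(1)
    img1 = rng.gamma(4.0, 1.0, (32, 32))                       # speckled "before"
    img2 = img1 * np.exp(0.1 * rng.standard_normal((32, 32)))  # "after": noise only
    img2[8:16, 8:16] *= 5.0                                    # one truly changed patch

    # log-ratio suppresses multiplicative speckle into an additive quantity
    log_ratio = np.abs(np.log(img2 / img1))
    coarse = block_average(block_average(log_ratio))  # two scales -> 8x8 map
    change_map = coarse > 0.8                         # illustrative threshold
    ```

    Smoothing across scales averages out the speckle-induced log-ratio fluctuations while the genuine change survives, which is the core reason multiscale approaches work well on SAR data.
    
    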

  12. New Methods for Lossless Image Compression Using Arithmetic Coding.

    Science.gov (United States)

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…
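    Three of the four components (pixel sequence, prediction, error modeling) can be sketched in a few lines: a raster scan with a left-neighbour predictor yields small, Laplace-like residuals, which are exactly what an arithmetic coder would then compress. The predictor choice is an illustrative baseline, not the article's progressive method:

    ```python
    import numpy as np

    # one raster row of pixel intensities (toy data)
    row = np.array([100, 102, 104, 104, 90], dtype=int)

    # prediction: each pixel predicted by its left neighbour (first pixel by 0)
    pred = np.concatenate(([0], row[:-1]))
    resid = row - pred          # prediction errors: small, Laplace-like values

    # the decoder inverts the prediction exactly, so the scheme is lossless
    decoded = np.cumsum(resid)
    lossless = np.array_equal(decoded, row)
    ```

    The compression gain comes from the residuals' concentrated (Laplace-shaped) distribution, which an arithmetic coder exploits far better than the raw pixel distribution.
    
    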

  13. An Effective Method of Image Retrieval using Image Mining Techniques

    OpenAIRE

    Kannan, A.; Dr.V.Mohan; Dr.N.Anbazhagan

    2010-01-01

    Research scholars all over the world are taking a keen interest in the area of data mining. In particular, mining image data [13] is one of the essential topics in the present scenario, since image data plays a vital role in every aspect of the system, such as business for marketing, hospital for surgery, engineering for construction, Web for publication, and so on. The other area in the image mining system is Content-Based Image Retrieval (CB...

  14. A data-driven prediction method for fast-slow systems

    Science.gov (United States)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

    In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism, and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
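    The SSA pre-filtering step can be sketched in a few lines under the usual conventions: embed the series in a trajectory (Hankel) matrix, take its SVD, and reconstruct each leading component by diagonal averaging. This is a generic SSA illustration, not the EMR-SSA code used in the talk.

    ```python
    import numpy as np

    def ssa_decompose(x, window, n_components):
        """Basic singular spectrum analysis: embed, SVD, and reconstruct the
        leading components by diagonal averaging (Hankelization)."""
        n = len(x)
        k = n - window + 1
        # trajectory matrix: lagged copies of the series as columns
        X = np.column_stack([x[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for j in range(n_components):
            Xj = s[j] * np.outer(U[:, j], Vt[j])           # rank-1 piece
            # diagonal averaging maps the rank-1 matrix back to a time series
            comp = np.array([np.mean(Xj[::-1].diagonal(i - window + 1))
                             for i in range(n)])
            comps.append(comp)
        return np.array(comps)
    ```

    For a signal mixing a strong slow oscillation with a weak fast one, the leading pair of components recovers the slow scale, which is the separation the abstract relies on.
    
    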

  15. A comparative study on medical image segmentation methods

    OpenAIRE

    Praylin Selva Blessy SELVARAJ ASSLEY; Helen Sulochana CHELLAKKON

    2014-01-01

    Image segmentation plays an important role in medical images. It has been a relevant research area in computer vision and image analysis. Many segmentation algorithms have been proposed for medical images. This paper makes a review on segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based and atlas based. The five different categories with their principle ideas, advantages and disa...

  16. Histological image segmentation using fast mean shift clustering method

    OpenAIRE

    Wu, Geming; Zhao, Xinyan; Luo, Shuqian; Shi, Hongli

    2015-01-01

    Background Colour image segmentation is fundamental and critical for quantitative histological image analysis. The complexity of the microstructure and the way histological images are acquired result in variable staining and illumination variations. Moreover, the ultra-high resolution of histological images makes it hard for image segmentation methods to achieve high-quality segmentation results and low computation cost at the same time. Methods Mean Shift clustering approach is employed for histol...

  17. A discontinuous Galerkin method for gravity-driven viscous fingering instabilities in porous media

    Science.gov (United States)

    Scovazzi, G.; Gerstenberger, A.; Collis, S. S.

    2013-01-01

    We present a new approach to the simulation of gravity-driven viscous fingering instabilities in porous media flow. These instabilities play a very important role during carbon sequestration processes in brine aquifers. Our approach is based on a nonlinear implementation of the discontinuous Galerkin method, and possesses a number of key features. First, the method developed is inherently high order, and is therefore well suited to study unstable flow mechanisms. Secondly, it maintains high-order accuracy on completely unstructured meshes. The combination of these two features makes it a very appealing strategy in simulating the challenging flow patterns and very complex geometries of actual reservoirs and aquifers. This article includes an extensive set of verification studies on the stability and accuracy of the method, and also features a number of computations with unstructured grids and non-standard geometries.

  18. Dual wavelength imaging of a scrape-off layer in an advanced beam-driven field-reversed configuration

    Science.gov (United States)

    Osin, D.; Schindler, T.

    2016-11-01

    A dual wavelength imaging system has been developed and installed on C-2U to capture 2D images of a He jet in the Scrape-Off Layer (SOL) of an advanced beam-driven Field-Reversed Configuration (FRC) plasma. The system was designed to optically split two identical images and pass them through 1 nm FWHM filters. Dual wavelength images are focused adjacent on a large format CCD chip and recorded simultaneously with a time resolution down to 10 μs using a gated micro-channel plate. The relatively compact optical system images a 10 cm plasma region with a spatial resolution of 0.2 cm and can be used in a harsh environment with high electro-magnetic noise and high magnetic field. The dual wavelength imaging system provides 2D images of either electron density or temperature by observing spectral line pairs emitted by He jet atoms in the SOL. A large field of view, combined with good space and time resolution of the imaging system, allows visualization of macro-flows in the SOL. First 2D images of the electron density and temperature observed in the SOL of the C-2U FRC are presented.

  19. Dual wavelength imaging of a scrape-off layer in an advanced beam-driven field-reversed configuration

    Energy Technology Data Exchange (ETDEWEB)

    Osin, D.; Schindler, T., E-mail: dosin@trialphaenergy.com [Tri Alpha Energy, Inc., P.O. Box 7010, Rancho Santa Margarita, California 92688-7010 (United States)

    2016-11-15

    A dual wavelength imaging system has been developed and installed on C-2U to capture 2D images of a He jet in the Scrape-Off Layer (SOL) of an advanced beam-driven Field-Reversed Configuration (FRC) plasma. The system was designed to optically split two identical images and pass them through 1 nm FWHM filters. Dual wavelength images are focused adjacent on a large format CCD chip and recorded simultaneously with a time resolution down to 10 μs using a gated micro-channel plate. The relatively compact optical system images a 10 cm plasma region with a spatial resolution of 0.2 cm and can be used in a harsh environment with high electro-magnetic noise and high magnetic field. The dual wavelength imaging system provides 2D images of either electron density or temperature by observing spectral line pairs emitted by He jet atoms in the SOL. A large field of view, combined with good space and time resolution of the imaging system, allows visualization of macro-flows in the SOL. First 2D images of the electron density and temperature observed in the SOL of the C-2U FRC are presented.

  20. Physiological Imaging-Defined, Response-Driven Subvolumes of a Tumor

    Energy Technology Data Exchange (ETDEWEB)

    Farjam, Reza [Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan (United States); Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Tsien, Christina I.; Feng, Felix Y. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Gomez-Hassan, Diana [Department of Radiology, University of Michigan, Ann Arbor, Michigan (United States); Hayman, James A.; Lawrence, Theodore S. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Cao, Yue, E-mail: yuecao@umich.edu [Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan (United States); Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Department of Radiology, University of Michigan, Ann Arbor, Michigan (United States)

    2013-04-01

    Purpose: To develop an image analysis framework to delineate the physiological imaging-defined subvolumes of a tumor in relating to treatment response and outcome. Methods and Materials: Our proposed approach delineates the subvolumes of a tumor based on its heterogeneous distributions of physiological imaging parameters. The method assigns each voxel a probabilistic membership function belonging to the physiological parameter classes defined in a sample of tumors, and then calculates the related subvolumes in each tumor. We applied our approach to regional cerebral blood volume (rCBV) and Gd-DTPA transfer constant (K{sup trans}) images of patients who had brain metastases and were treated by whole-brain radiation therapy (WBRT). A total of 45 lesions were included in the analysis. Changes in the rCBV (or K{sup trans})–defined subvolumes of the tumors from pre-RT to 2 weeks after the start of WBRT (2W) were evaluated for differentiation of responsive, stable, and progressive tumors using the Mann-Whitney U test. Performance of the newly developed metrics for predicting tumor response to WBRT was evaluated by receiver operating characteristic (ROC) curve analysis. Results: The percentage decrease in the high-CBV-defined subvolumes of the tumors from pre-RT to 2W was significantly greater in the group of responsive tumors than in the group of stable and progressive tumors (P<.007). The change in the high-CBV-defined subvolumes of the tumors from pre-RT to 2W was a predictor for post-RT response significantly better than change in the gross tumor volume observed during the same time interval (P=.012), suggesting that the physiological change occurs before the volumetric change. Also, K{sup trans} did not add significant discriminatory information for assessing response with respect to rCBV. 
Conclusion: The physiological imaging-defined subvolumes of the tumors delineated by our method could be candidates for boost target, for which further development and evaluation

  1. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods

    OpenAIRE

    NeuroData; Paninski, L

    2015-01-01

    Vogelstein JT, Paninski L. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods. Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Sequential Monte Carlo Methods, 2008

  2. Morphology-driven automatic segmentation of MR images of the neonatal brain.

    Science.gov (United States)

    Gui, Laura; Lisowski, Radoslaw; Faundez, Tamara; Hüppi, Petra S; Lazeyras, François; Kocher, Michel

    2012-12-01

    The segmentation of MR images of the neonatal brain is an essential step in the study and evaluation of infant brain development. State-of-the-art methods for adult brain MRI segmentation are not applicable to the neonatal brain, due to large differences in structure and tissue properties between newborn and adult brains. Existing newborn brain MRI segmentation methods either rely on manual interaction or require the use of atlases or templates, which unavoidably introduces a bias of the results towards the population that was used to derive the atlases. We propose a different approach for the segmentation of neonatal brain MRI, based on the infusion of high-level brain morphology knowledge, regarding relative tissue location, connectivity and structure. Our method does not require manual interaction, or the use of an atlas, and the generality of its priors makes it applicable to different neonatal populations, while avoiding atlas-related bias. The proposed algorithm segments the brain both globally (intracranial cavity, cerebellum, brainstem and the two hemispheres) and at tissue level (cortical and subcortical gray matter, myelinated and unmyelinated white matter, and cerebrospinal fluid). We validate our algorithm through visual inspection by medical experts, as well as by quantitative comparisons that demonstrate good agreement with expert manual segmentations. The algorithm's robustness is verified by testing on variable quality images acquired on different machines, and on subjects with variable anatomy (enlarged ventricles, preterm- vs. term-born).

  3. Deformable prostate registration from MR and TRUS images using surface error driven FEM models

    Science.gov (United States)

    Taquee, Farheen; Goksel, Orcun; Mahdavi, S. Sara; Keyes, Mira; Morris, W. James; Spadinger, Ingrid; Salcudean, Septimiu

    2012-02-01

    The fusion of TransRectal Ultrasound (TRUS) and Magnetic Resonance (MR) images of the prostate can aid diagnosis and treatment planning for prostate cancer. Surface segmentations of the prostate are available in both modalities. Our goal is to develop a 3D deformable registration method based on these segmentations and a biomechanical model. The segmented source volume is meshed and a linear finite element model is created for it. This volume is deformed to the target image volume by applying surface forces computed by assuming a negative relative pressure between the non-overlapping regions of the volumes and the overlapping ones. This pressure drives the model to increase the volume overlap until the surfaces are aligned. We tested our algorithm on prostate surfaces extracted from post-operative MR and TRUS images for 14 patients, using a model with elasticity parameters in the range reported in the literature for the prostate. We used three evaluation metrics for validating our technique: the Dice Similarity Coefficient (DSC) (ideally equal to 1.0), which is a measure of volume alignment, the volume change in the source surface during registration, which is a measure of volume preservation, and the distance between the urethras to assess the anatomical correctness of the method. We obtained a DSC of 0.96+/-0.02 and a mean distance between the urethras of 1.5+/-1.4 mm. The change in the volume of the source surface was 1.5+/-1.4%. Our results show that this method is a promising tool for physically-based deformable surface registration.
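    The Dice Similarity Coefficient used for validation has a direct implementation; the following is the standard definition (not code from the study):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|).
        Equals 1.0 for identical masks and 0.0 for disjoint ones."""
        a, b = a.astype(bool), b.astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())
    ```

    For volumetric registration results such as the abstract's, the same formula applies voxel-wise to the two segmented volumes.
    
    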

  4. Fusion Method for Remote Sensing Image Based on Fuzzy Integral

    Directory of Open Access Journals (Sweden)

    Hui Zhou

    2014-01-01

    Full Text Available This paper presents an image fusion method based on fuzzy integral, integrating spectral information and two single-factor indexes of spatial resolution, in order to retain both spectral information and spatial resolution information in the fusion of multispectral and high-resolution remote sensing images. Firstly, wavelet decomposition is carried out on the two images, respectively, to obtain their wavelet decomposition coefficients, keeping the low-frequency coefficients of the multispectral image; optimized fusion is then carried out on the high-frequency parts of the two images based on weighting coefficients to generate the new fused image. Finally, the fused image is evaluated with evaluation indexes including correlation coefficient, image mean value, standard deviation, distortion degree, information entropy, and so forth. The test results show that this method integrates multispectral information and high spatial resolution information in a better way, and that it is an effective fusion method for remote sensing images.
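    The fusion rule described (keep the multispectral image's low-frequency band, fuse the high-frequency detail by a weighting rule) can be sketched with a single-level Haar transform. The max-magnitude detail rule below is a common simplification standing in for the paper's fuzzy-integral weighting; all names are illustrative.

    ```python
    import numpy as np

    def haar2(img):
        """Single-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
        a, b = img[0::2, 0::2], img[0::2, 1::2]
        c, d = img[1::2, 0::2], img[1::2, 1::2]
        return ((a + b + c + d) / 4, (a + b - c - d) / 4,
                (a - b + c - d) / 4, (a - b - c + d) / 4)

    def ihaar2(LL, LH, HL, HH):
        """Exact inverse of haar2."""
        h, w = LL.shape
        out = np.empty((2 * h, 2 * w))
        out[0::2, 0::2] = LL + LH + HL + HH
        out[0::2, 1::2] = LL + LH - HL - HH
        out[1::2, 0::2] = LL - LH + HL - HH
        out[1::2, 1::2] = LL - LH - HL + HH
        return out

    def fuse(ms, pan):
        """Keep the multispectral low-frequency band; at each position take the
        detail coefficient with the larger magnitude (a simple weighting rule)."""
        LLm, *Dm = haar2(ms)
        _LLp, *Dp = haar2(pan)
        details = [np.where(np.abs(dm) >= np.abs(dp), dm, dp)
                   for dm, dp in zip(Dm, Dp)]
        return ihaar2(LLm, *details)
    ```

    Because the low-frequency subband is taken unchanged from the multispectral input, local spectral content (2x2 block means) is preserved by construction.
    
    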

  5. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding in the system of underwater image transmission is introduced, including the rapid digital frequency synthesizer in multiple frequency shift keying, the image data generator, an image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.

  6. Method for Surface Scanning in Medical Imaging and Related Apparatus

    DEFF Research Database (Denmark)

    2015-01-01

    A method and apparatus for surface scanning in medical imaging is provided. The surface scanning apparatus comprises an image source, a first optical fiber bundle comprising first optical fibers having proximal ends and distal ends, and a first optical coupler for coupling an image from the image...

  7. Data-driven methods to improve baseflow prediction of a regional groundwater model

    Science.gov (United States)

    Xu, Tianfang; Valocchi, Albert J.

    2015-12-01

    Physically-based models of groundwater flow are powerful tools for water resources assessment under varying hydrologic, climate and human development conditions. One of the most important topics of investigation is how these conditions will affect the discharge of groundwater to rivers and streams (i.e. baseflow). Groundwater flow models are based upon discretized solution of mass balance equations, and contain important hydrogeological parameters that vary in space and cannot be measured. Common practice is to use least squares regression to estimate parameters and to infer prediction and associated uncertainty. Nevertheless, the unavoidable uncertainty associated with physically-based groundwater models often results in both aleatoric and epistemic model calibration errors, thus violating a key assumption for regression-based parameter estimation and uncertainty quantification. We present a complementary data-driven modeling and uncertainty quantification (DDM-UQ) framework to improve predictive accuracy of physically-based groundwater models and to provide more robust prediction intervals. First, we develop data-driven models (DDMs) based on statistical learning techniques to correct the bias of the calibrated groundwater model. Second, we characterize the aleatoric component of groundwater model residual using both parametric and non-parametric distribution estimation methods. We test the complementary data-driven framework on a real-world case study of the Republican River Basin, where a regional groundwater flow model was developed to assess the impact of groundwater pumping for irrigation. Compared to using only the flow model, DDM-UQ provides more accurate monthly baseflow predictions. In addition, DDM-UQ yields prediction intervals with coverage probability consistent with validation data. The DDM-UQ framework is computationally efficient and is expected to be applicable to many geoscience models for which model structural error is not negligible.
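    The complementary correction can be illustrated with a deliberately simple stand-in: a low-order polynomial regression corrects the physical model's residual bias, and empirical quantiles of the remaining residuals supply the prediction interval. The real DDM-UQ framework uses richer statistical-learning models; everything named here is an illustrative assumption.

    ```python
    import numpy as np

    def fit_bias_model(X, resid, deg=1):
        """Fit a simple polynomial bias model resid ≈ f(X), a stand-in for the
        statistical-learning correction (any regressor could be substituted)."""
        return np.polyfit(X, resid, deg)

    def predict_with_interval(model_pred, X, coeffs, resid_sample, alpha=0.1):
        """Bias-correct the physical model's prediction and attach an empirical
        (1 - alpha) interval from quantiles of the remaining residuals."""
        corr = model_pred + np.polyval(coeffs, X)
        lo, hi = np.quantile(resid_sample, [alpha / 2, 1 - alpha / 2])
        return corr, corr + lo, corr + hi
    ```

    The non-parametric quantile step mirrors the abstract's point that intervals should reflect the residuals actually observed rather than an assumed error distribution.
    
    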

  8. Task-Driven Dictionary Learning for Hyperspectral Image Classification with Structured Sparsity Constraints

    Science.gov (United States)

    2015-02-03

    Approved for public release; distribution is unlimited. [Abstract fragment] ...dictionary atoms. As a generative model, it requires the dictionary to be highly redundant in order to ensure both a stable high sparsity level and a low... Keywords: sparse representation, supervised dictionary learning, task-driven dictionary learning, joint sparsity.

  9. A New Numerical Method for Solving Radiation Driven Winds from Hot Stars

    CERN Document Server

    Cure, M; Cure, Michel

    2006-01-01

    We present a general method for solving the non-linear differential equation of monotonically increasing steady-state radiation driven winds. We graphically identify all the singular points before transforming the momentum equation to a system of differential equations with all the gradients explicitly given. This permits a topological classification of all singular points and the calculation of the maximum and minimum mass-loss of the wind. We use our method to analyse for the first time the topology of the non-rotating frozen-in ionisation m-CAK wind, with the inclusion of the finite disk correction factor, and find up to 4 singular points: three of the x-type and one attractor-type. The only singular point (and the solution passing through it) that satisfies the boundary condition at the stellar surface is the standard m-CAK singular point.

  10. Toward data-driven methods in geophysics: the Analog Data Assimilation

    Science.gov (United States)

    Lguensat, Redouane; Tandeo, Pierre; Ailliot, Pierre; Pulido, Manuel; Fablet, Ronan

    2017-04-01

    The Analog Data Assimilation (AnDA) is a recently introduced data-driven method for data assimilation in which the dynamical model is learned from data, contrary to classical data assimilation, where a physical model of the dynamics is needed. AnDA relies on replacing the physical dynamical model with a statistical emulator of the dynamics built using analog forecasting methods. The analog dynamical model is then incorporated in ensemble-based data assimilation algorithms (Ensemble Kalman Filter and Smoother, or Particle Filter). The relevance of the proposed AnDA is demonstrated for the Lorenz-63 and Lorenz-96 chaotic dynamics. Applications in meteorology and oceanography, as well as potential perspectives worthy of investigation, are further discussed. We expect that the directions of research we suggest will help bring more interest in applied machine learning to the geophysical sciences.
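    The core idea of replacing the physical model with analog forecasting can be sketched as a nearest-neighbor emulator: find the catalog states closest to the current state and average their observed successors. It is shown here on a Lorenz-63 trajectory crudely integrated with forward Euler; the parameters are the standard ones, and the rest is an illustrative sketch, not the AnDA code.

    ```python
    import numpy as np

    def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz-63 system (crude but sufficient here)."""
        x, y, z = s
        return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def analog_forecast(catalog, state, n_analogs=10):
        """Emulate one model step: average the successors of the catalog states
        nearest to the current state (the 'analogs')."""
        d = np.linalg.norm(catalog[:-1] - state, axis=1)
        idx = np.argsort(d)[:n_analogs]
        return catalog[idx + 1].mean(axis=0)
    ```

    In AnDA proper, this statistical emulator takes the place of the forward model inside an ensemble Kalman or particle filter.
    
    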

  11. AN IMAGE RETRIEVAL METHOD BASED ON SPATIAL DISTRIBUTION OF COLOR

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Color histograms are now widely used in image retrieval. Color histogram-based image retrieval methods are simple and efficient but do not consider the spatial distribution of color. To overcome this shortcoming of conventional color histogram-based image retrieval methods, an image retrieval method based on the Radon Transform (RT) is proposed. In order to reduce the computational complexity, wavelet decomposition is used to compress the image data. Firstly, images are decomposed by the Mallat algorithm. The low-frequency components are then projected by RT to generate the spatial color feature. Finally the moment feature matrices, which are saved along with the original images, are obtained. Experimental results show that RT-based retrieval is more accurate and efficient than the traditional color histogram-based method when there are obvious objects in the images. Furthermore, RT-based retrieval runs significantly faster than the traditional color histogram methods.

  12. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating deformation synchronized to the beating of the heart. If no special techniques are used in taking the CT images, there are discontinuities among neighboring CT images due to the beating of the heart. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed into the optimal CT images that best fit the standard heart. Since correct transformation of images is required, an area-oriented interpolation method that we previously proposed is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image without discontinuity by a series of such operations is shown. Additionally, applying the same geometrical transformation method to the original projection images is proposed as a more advanced method.

  13. A CT Image Segmentation Algorithm Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    QU Jing-yi; SHI Hao-shan

    2006-01-01

    Level Set methods are robust and efficient numerical tools for resolving curve evolution in image segmentation. This paper proposes a new image segmentation algorithm based on the Mumford-Shah model. The method is applied to CT images, and the experimental results demonstrate its efficiency and accuracy.

  14. Project-Driven Learning-by-Doing Method for Teaching Software Engineering using Virtualization Technology

    Directory of Open Access Journals (Sweden)

    Kun Ma

    2014-10-01

    Full Text Available Many universities now offer software engineering at the undergraduate level with an emphasis on knowledge points. However, some enterprise managers have observed that this education neglects hands-on training, and claimed that there is an isolation between teaching and practice. This paper presents the design of a Software Engineering course (sixth semester) in network engineering at the University of Jinan for undergraduate software engineering students, which uses virtualization technology to teach a project-driven, learning-by-doing software development process. We present our motivation, challenges encountered, pedagogical goals and approaches, and findings (both positive experiences and negative lessons). Our motivation was to teach project-driven software engineering using virtualization technology. The course also aims to develop the entrepreneurial skills that software engineering graduates need, to better prepare them for the software industry. Billing models of virtualization help students and instructors estimate the cost of the experiments. In a pay-as-you-go manner, two labs and three step-by-step projects (single project, pair project, and team project) are designed to help the students complete the assignments with enthusiasm. We conducted detailed surveys and present the results of student responses. The assessment process designed for this course is illustrated. The paper also shows that the learning-by-doing method correlates with the characteristics of the different projects, which has resulted in a successful experience as reported by students in an end-of-semester survey.

  15. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    Science.gov (United States)

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
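    The resampling scheme can be sketched directly from the description: each bootstrap replicate resamples the seeds with replacement, then resamples each node's recruits with replacement, recursing down the recruitment tree. The data structures (a parent-to-recruits dict) are illustrative, not the authors' code.

    ```python
    import random

    def resample_tree(node, children):
        """Resample one recruitment subtree: at each node, draw its recruits
        with replacement, then recurse into each (possibly repeated) recruit."""
        kids = children.get(node, [])
        picked = [random.choice(kids) for _ in kids] if kids else []
        return {"id": node, "children": [resample_tree(k, children) for k in picked]}

    def tree_bootstrap(seeds, children, n_boot=500):
        """Generate bootstrap replicates: resample seeds with replacement,
        then resample each selected seed's recruitment tree."""
        reps = []
        for _ in range(n_boot):
            boot_seeds = [random.choice(seeds) for _ in seeds]
            reps.append([resample_tree(s, children) for s in boot_seeds])
        return reps
    ```

    As the abstract notes, the procedure depends only on tree structure, so any respondent attribute can then be re-estimated on each replicate to obtain its sampling variability.
    
    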

  16. Performance-based parameter tuning method of model-driven PID control systems.

    Science.gov (United States)

    Zhao, Y M; Xie, W F; Tu, X W

    2012-05-01

    In this paper, a performance-based parameter tuning method for the model-driven Two-Degree-of-Freedom PID (MD TDOF PID) control system is proposed to enhance the control performance of a process. Known for its ability to stabilize unstable processes, track set-point changes quickly, and reject disturbances, the MD TDOF PID has gained research interest recently. The tuning methods reported for the MD TDOF PID are based on the internal model control (IMC) method rather than on optimizing performance indices. In this paper, an Integral of Time Absolute Error (ITAE) zero-position-error optimal tuning and noise-effect-minimizing method is proposed for tuning the two parameters in the MD TDOF PID control system to achieve the desired regulating and disturbance rejection performance. Comparison with the Two-Degree-of-Freedom control scheme by modified Smith predictor (TDOF CS MSP) and the designed MD TDOF PID tuned by the IMC tuning method demonstrates the effectiveness of the proposed tuning method.
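    The ITAE index being optimized is straightforward to evaluate numerically. As a hedged illustration (not the paper's tuning procedure), it is computed below for the unit-step response of a first-order lag K/(tau*s + 1), for which the exact value is tau^2.

    ```python
    import numpy as np

    def itae(t, y, setpoint):
        """Integral of Time-weighted Absolute Error, via the trapezoidal rule."""
        w = t * np.abs(setpoint - y)
        return float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(t)))

    def first_order_step(K=1.0, tau=2.0, T=20.0, dt=0.001):
        """Unit-step response of K/(tau*s + 1), integrated with forward Euler."""
        t = np.arange(0.0, T, dt)
        y = np.zeros_like(t)
        for i in range(1, t.size):
            y[i] = y[i - 1] + dt * (K - y[i - 1]) / tau
        return t, y
    ```

    A tuning procedure in the spirit of the abstract would minimize this index over the two controller parameters by simulating the closed loop for each candidate pair.
    
    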

  17. Perceptual digital imaging methods and applications

    CERN Document Server

    Lukac, Rastislav

    2012-01-01

    Visual perception is a complex process requiring interaction between the receptors in the eye that sense the stimulus and the neural system and the brain that are responsible for communicating and interpreting the sensed visual information. This process involves several physical, neural, and cognitive phenomena whose understanding is essential to design effective and computationally efficient imaging solutions. Building on advances in computer vision, image and video processing, neuroscience, and information engineering, perceptual digital imaging greatly enhances the capabilities of tradition

  18. Image mosaic method based on SIFT features of line segment.

    Science.gov (United States)

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) feature of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. This method firstly uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with the SIFT feature, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.
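    The outlier-rejection step can be sketched with RANSAC on a simplified motion model (a pure 2D translation rather than the full mosaic transform); the names and tolerances are illustrative assumptions.

    ```python
    import numpy as np

    def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
        """Estimate a 2D translation from putative matches while rejecting
        outliers: each iteration hypothesizes the shift implied by one match
        and keeps the hypothesis with the largest consensus set."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(n_iter):
            i = rng.integers(len(src))
            shift = dst[i] - src[i]
            inliers = np.linalg.norm(dst - (src + shift), axis=1) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # refit on the consensus set for the final estimate
        shift = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
        return shift, best_inliers
    ```

    A full mosaic pipeline would fit a homography from four matches per iteration instead of a one-match translation, but the consensus logic is the same.
    
    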

  19. A study for watermark methods appropriate to medical images.

    Science.gov (United States)

    Cho, Y; Ahn, B; Kim, J S; Kim, I Y; Kim, S I

    2001-06-01

    The network system, including the picture archiving and communication system (PACS), is essential in hospital and medical imaging fields these days. Many medical images are accessed and processed on the web, as well as in PACS. Therefore, any possible accidents caused by the illegal modification of medical images must be prevented. Digital image watermark techniques have been proposed as a method to protect against illegal copying or modification of copyrighted material. Invisible signatures made by a digital image watermarking technique can be a solution to these problems. However, medical images have some different characteristics from normal digital images in that one must not corrupt the information contained in the original medical images. In this study, we suggest modified watermark methods appropriate for medical image processing and communication system that prevent clinically important data contained in original images from being corrupted.

  20. Tilt correction method of text image based on wavelet pyramid

    Science.gov (United States)

    Yu, Mingyang; Zhu, Qiguo

    2017-04-01

Text images captured by a camera may be tilted and distorted, which is unfavorable for document character recognition. Therefore, a tilt correction method for text images based on a wavelet pyramid is proposed in this paper. The first step converts the captured text image to a binary image. After binarization, the image is layered by the wavelet transform to achieve noise reduction, enhancement, and compression. Edges are then detected with the Canny operator, and straight lines are extracted with the Radon transform. In the final step, the method calculates the intersections of the straight lines and obtains the corrected text image from the intersection points via a perspective transformation. Experimental results show that this method corrects text images accurately.
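The final correction step, mapping four line intersections onto an upright rectangle, amounts to solving for a perspective transform. A self-contained sketch of that standard direct linear solution (function names and sample coordinates are illustrative):

```python
import numpy as np

def fit_perspective(src, dst):
    """3x3 perspective transform mapping 4 src points onto 4 dst points,
    via the standard direct linear solution with h33 fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the homogeneous transform to one point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# corners of a tilted text block, mapped back to an upright rectangle
src = [(10, 12), (95, 18), (90, 80), (5, 74)]
dst = [(0, 0), (100, 0), (100, 60), (0, 60)]
H = fit_perspective(src, dst)
```

In a full pipeline, H would then be applied (inverted) to resample every pixel of the corrected image from the tilted input.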

  1. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    Science.gov (United States)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, whether associated with healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
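The merging idea can be sketched as follows. This toy version greedily sums adjacent IMFs whose spectra are close, using a spectral-centroid distance as a simplified stand-in for the paper's dissimilarity criterion on spectral probability density functions; the tolerance and the synthetic "IMFs" (pure tones) are illustrative only.

```python
import numpy as np

def spectral_centroid(x):
    """Centroid of the magnitude spectrum: a one-number scale summary."""
    s = np.abs(np.fft.rfft(x))
    return float((np.arange(len(s)) * s).sum() / s.sum())

def merge_imfs(imfs, rel_tol=0.2):
    """Greedily sum adjacent IMFs into Combined Mode Functions (CMFs) when
    their spectral centroids are close in relative terms."""
    cmfs = [imfs[0].copy()]
    for imf in imfs[1:]:
        c1, c2 = spectral_centroid(cmfs[-1]), spectral_centroid(imf)
        if abs(c1 - c2) / max(c1, c2) < rel_tol:
            cmfs[-1] = cmfs[-1] + imf        # same time-frequency scale
        else:
            cmfs.append(imf.copy())          # distinct scale: start a new CMF
    return cmfs

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
imfs = [np.sin(2 * np.pi * 100 * t),         # two modes close in frequency...
        0.8 * np.sin(2 * np.pi * 95 * t),    # ...that belong in one CMF
        np.sin(2 * np.pi * 5 * t)]           # one well-separated slow mode
cmfs = merge_imfs(imfs)
```

On this example the two high-frequency modes collapse into one CMF while the slow mode stays separate, mirroring the "minimal number of relevant modes" goal.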

  2. Harmonic Spatial Coherence Imaging: An Ultrasonic Imaging Method Based on Backscatter Coherence

    OpenAIRE

    DAHL, JEREMY J.; Jakovljevic, Marko; Pinton, Gianmarco F.; Trahey, Gregg E.

    2012-01-01

HSCI and SLSC imaging are less sensitive to clutter because clutter has low spatial coherence. The method is based on the coherence of the second-harmonic backscatter. Because the same signals that are used to construct harmonic B-mode images are also used to construct HSCI images, the benefits obtained with harmonic imaging also apply to HSCI. Harmonic imaging has been the primary tool for suppressing clutter in diagnostic ultrasound imaging; however, second harmonic echoes are not necessaril...

  3. ISAR imaging using the instantaneous range instantaneous Doppler method

    CSIR Research Space (South Africa)

    Wazna, TM

    2015-10-01

    Full Text Available In Inverse Synthetic Aperture Radar (ISAR) imaging, the Range Instantaneous Doppler (RID) method is used to compensate for the nonuniform rotational motion of the target that degrades the Doppler resolution of the ISAR image. The Instantaneous Range...

  4. A new assessment method for image fusion quality

    Science.gov (United States)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

Image fusion quality assessment plays a critically important role in the field of medical imaging, and many assessment methods have been proposed to evaluate image fusion quality effectively. Examples include mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). These image fusion assessment methods do not reflect human visual inspection effectively. To address this problem, we propose a novel image fusion assessment method that combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are used to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to its outstanding image assessment performance. Experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms assessment methods such as MI and the UIQI-based measure in evaluating image fusion quality and provides results consistent with human visual assessment.
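The building block of the proposed measure, mutual information estimated from a joint histogram, can be sketched in a few lines. Here it is computed on raw intensities rather than on regional NSCT coefficients, and the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) of two images, estimated from their
    joint gray-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
mi_self = mutual_information(img, img)                                 # maximal
mi_noisy = mutual_information(img, img + rng.normal(0, 40, img.shape))
mi_indep = mutual_information(img, rng.normal(128, 40, img.shape))     # near 0
```

The ordering mi_self > mi_noisy > mi_indep is what makes MI usable as a fusion-quality ingredient: more shared structure yields a higher score.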

  5. Multiphase Image Segmentation Using the Deformable Simplicial Complex Method

    DEFF Research Database (Denmark)

    Dahl, Vedrana Andersen; Christiansen, Asger Nyman; Bærentzen, Jakob Andreas

    2014-01-01

The deformable simplicial complex method is a generic method for tracking deformable interfaces. It provides explicit interface representation, topological adaptivity, and multiphase support. As such, the deformable simplicial complex method can readily be used for representing active contours in image segmentation based on deformable models. We show the benefits of using the deformable simplicial complex method for image segmentation by segmenting an image into a known number of segments characterized by distinct mean pixel intensities.

  6. Integration of data-driven and physically-based methods to assess shallow landslides susceptibility

    Science.gov (United States)

    Lajas, Sara; Oliveira, Sérgio C.; Zêzere, José Luis

    2016-04-01

Approaches used to assess shallow landslides susceptibility at the basin scale are conceptually different depending on the use of statistical or deterministic methods. The data-driven methods rest on the assumption that the same causes are likely to produce the same effects, and for that reason a present/past landslide inventory and a dataset of factors assumed as predisposing factors are crucial for the landslide susceptibility assessment. The physically-based methods are based on a system controlled by physical laws and soil mechanics, where the forces that tend to promote movement are compared with the forces that tend to resist movement. In this case, the evaluation of susceptibility is supported by the calculation of the Factor of Safety (FoS) and depends on the availability of detailed data on the slope geometry and the hydrological and geotechnical properties of the soils and rocks. Within this framework, this work aims to test two hypotheses: (i) although conceptually distinct and based on contrasting procedures, statistical and deterministic methods generate similar shallow landslides susceptibility results regarding predictive capacity and spatial agreement; and (ii) the integration of the shallow landslides susceptibility maps obtained with data-driven and physically-based methods, for the same study area, generates a more reliable susceptibility model for shallow landslides occurrence. To evaluate these two hypotheses, we select the Information Value data-driven method and the physically-based Infinite Slope model to evaluate shallow landslides in the study area of the Monfalim and Louriceira basins (13.9 km2), located in the north of the Lisbon region (Portugal). The landslide inventory is composed of 111 shallow landslides and was divided into two independent groups based on temporal criteria (age ≤ 1983 and age > 1983): (i) the modelling group (51 cases) was used to define the weights for each predisposing factor
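The Information Value method mentioned above assigns each predisposing-factor class the logarithm of the ratio between the landslide density inside the class and the overall density. A toy sketch (the rasters and numbers are invented for illustration, not taken from the study area):

```python
import numpy as np

def information_value(landslide, factor_class):
    """Information Value weight of one predisposing-factor class:
    ln(landslide density inside the class / overall landslide density).
    Positive weights mark classes where landslides are over-represented."""
    p_class = landslide[factor_class].mean()   # landslide density in the class
    p_total = landslide.mean()                 # prior density over the area
    return np.log(p_class / p_total)

# toy raster (flattened): 100 cells, 20 landslide cells, mostly on steep slopes
landslide = np.zeros(100, dtype=bool)
landslide[:18] = True          # 18 slides on steep terrain
landslide[30:32] = True        # 2 slides elsewhere
steep = np.zeros(100, dtype=bool)
steep[:25] = True              # 25 steep cells
iv_steep = information_value(landslide, steep)
iv_gentle = information_value(landslide, ~steep)
```

Summing these class weights over all predisposing factors at each cell gives the final susceptibility score of the data-driven model.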

  7. A survey of infrared and visual image fusion methods

    Science.gov (United States)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian

    2017-09-01

Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable, and complementary descriptions of the scene in fused images make these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and the progress of image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we make a survey to report the algorithmic developments of IR and VI image fusion. In this paper, we first characterize the applications of IR and VI image fusion to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently-used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and make the corresponding analysis. At last, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in different applications of IR and VI image fusion.

  8. Comparative analysis of different methods for image enhancement

    Institute of Scientific and Technical Information of China (English)

    吴笑峰; 胡仕刚; 赵瑾; 李志明; 李劲; 唐志军; 席在芳

    2014-01-01

Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to implement image enhancement for gray-scale images using different techniques. After the fundamental methods of image enhancement processing are demonstrated, image enhancement algorithms based on the spatial and frequency domains are systematically investigated and compared, and the advantages and defects of the above-mentioned algorithms are analyzed. The algorithms of wavelet-based image enhancement are also deduced and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is well suited to image enhancement. The image techniques are compared using the mean (μ), standard deviation (s), mean square error (MSE) and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that the image enhancement algorithm based on the wavelet transform is effective for image de-noising and enhancement, and the wavelet transform modulus maxima method is one of the best methods for image enhancement.
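Two of the comparison metrics named above, MSE and PSNR, can be computed directly; the peak value of 255 assumes 8-bit data, and the toy images are for illustration.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak=255 assumes 8-bit data);
    higher means the processed image is closer to the reference."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((8, 8))
degraded = ref.copy()
degraded[0, 0] = 16.0          # a single corrupted pixel
```

For identical images the MSE is zero and the PSNR is infinite, which is why PSNR is reported only for non-trivially degraded results.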

  9. Skeletonization methods for image and volume inpainting

    NARCIS (Netherlands)

    Sobiecki, Andre

    2016-01-01

Image and shape restoration techniques are increasingly important in computer graphics. Many types of restoration techniques have been proposed in 2D image processing and, to our knowledge, only one for volumetric data. Well-known examples of such techniques include digital inpainting,

  10. Statistical Smoothing Methods and Image Analysis

    Science.gov (United States)

    1988-12-01

83-111. Rosenfeld, A. and Kak, A.C. (1982). Digital Picture Processing. Academic Press, Orlando. Serra, J. (1982). Image Analysis and Mathematical ... hypothesis testing. IEEE Trans. Med. Imaging, MI-6, 313-319. Wicksell, S.D. (1925). The corpuscle problem. A mathematical study of a biometric problem.

  11. Image segmentation with a finite element method

    DEFF Research Database (Denmark)

    Bourdin, Blaise

    1999-01-01

The Mumford-Shah functional for image segmentation is an original approach to the image segmentation problem, based on a minimal energy criterion. Its minimization can be seen as a free discontinuity problem and is based on the theories of \Gamma-convergence and functions of bounded variation. Some new regu...

  13. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Science.gov (United States)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
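The pre-beam motion model can be sketched as a principal component analysis of flattened DVF snapshots. The sketch below uses an SVD on toy data with a single respiratory mode, and reduces the live fitting against the 2D cine slice to recovering one coefficient by projection; sizes, names, and the single-mode assumption are illustrative simplifications.

```python
import numpy as np

# Pre-beam phase: snapshots of the 3D deformation vector field (DVF) from the
# sorted 4D-MRI, flattened into the columns of X. Toy sizes; a real DVF has
# millions of components and more than one motion mode.
rng = np.random.default_rng(0)
mode = rng.normal(size=(300, 1))                  # one dominant respiratory mode
phase = np.sin(np.linspace(0.0, 2 * np.pi, 10))  # breathing phase per snapshot
X = mode @ phase[None, :]

mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
pc = U[:, :1]                                     # first principal component

def reconstruct_dvf(coeff):
    """Beam-on phase: one scalar coefficient (in practice fitted so the model
    matches the live 2D cine slice) yields a full 3D DVF estimate."""
    return mean + pc * coeff

# the coefficient of snapshot 2, recovered by projection, reproduces its DVF
c2 = (pc.T @ (X[:, 2:3] - mean)).item()
```

Because the toy data is rank one, the reconstruction is exact; with real data the model captures only the dominant respiratory components, which is the source of the residual error quoted in the abstract.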

  14. A Method for Image Decontamination Based on Partial Differential Equation

    Directory of Open Access Journals (Sweden)

    Hou Junping

    2015-01-01

Full Text Available This paper introduces the application of partial differential equations (PDEs) to the decontamination processing of images. It establishes continuous PDE mathematical models for image information and uses specific solution methods to perform decontamination tasks such as noise reduction, denoising, and segmentation in the course of solving the equations. The paper studies the uniqueness of the solution of the PDEs and the monotonicity that the functional constraint imposes on the multipliers by analyzing the ROF model within the PDE framework.
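As an illustration of PDE-based decontamination, here is a gradient-descent sketch of the ROF (total variation) model; the step size, smoothing parameter eps, fidelity weight, and periodic boundary handling are illustrative choices, not values from the paper.

```python
import numpy as np

def rof_denoise(f, lam=0.1, tau=0.2, eps=1e-2, n_iter=200):
    """Explicit gradient descent on the smoothed ROF energy
        E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum (u - f)^2,
    with periodic boundaries via np.roll."""
    u = np.asarray(f, float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += tau * (div - lam * (u - f))                 # descend the energy
    return u

rng = np.random.default_rng(2)
noisy = rng.normal(0.0, 1.0, size=(32, 32))
smooth = rof_denoise(noisy)
```

The TV term smooths oscillations while the fidelity term keeps the result tied to the observed image; a constant image is a fixed point of the iteration.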

  15. A comparative study on medical image segmentation methods

    Directory of Open Access Journals (Sweden)

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

Full Text Available Image segmentation plays an important role in medical imaging and has been a relevant research area in computer vision and image analysis; many segmentation algorithms have been proposed for medical images. This paper reviews segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based, and atlas based. The five categories are discussed with their principal ideas, advantages, and disadvantages in segmenting different medical images.

  16. System and method for image mapping and visual attention

    Science.gov (United States)

    Peters, II, Richard A. (Inventor)

    2011-01-01

    A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing all attentional locations at each node.

  17. Methods of fetal MR: beyond T2-weighted imaging

    Energy Technology Data Exchange (ETDEWEB)

    Brugger, Peter C. [Center of Anatomy and Cell Biology, Integrative Morphology Group, Medical University of Vienna, Waehringerstrasse 13, 1090 Vienna (Austria)]. E-mail: peter.brugger@meduniwien.ac.at; Stuhr, Fritz [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Lindner, Christian [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Prayer, Daniela [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria)

    2006-02-15

The present work reviews the basic methods of performing fetal magnetic resonance imaging (MRI). Since fetal MRI differs in many respects from a postnatal study, several factors have to be taken into account to achieve satisfying image quality. Image quality depends on adequate positioning of the pregnant woman in the magnet, use of appropriate coils, and the selection of sequences. Ultrafast T2-weighted sequences are regarded as the mainstay of fetal MR imaging. However, additional sequences, such as T1-weighted imaging, diffusion-weighted imaging, and echoplanar imaging, may provide further information, especially in regions of the fetal body outside the central nervous system.

  18. Morphology-based fusion method of hyperspectral image

    Science.gov (United States)

    Yue, Song; Zhang, Zhijie; Ren, Tingting; Wang, Chensheng; Yu, Hui

    2014-11-01

Hyperspectral image analysis is widely used in applications including agricultural identification, forest investigation, and atmospheric pollution monitoring. To analyze hyperspectral images accurately and stably, the spectral and spatial information provided by the hyperspectral data must be considered together. Hyperspectral images are characterized by a large number of bands and a large amount of information. Matching these characteristics, a fast fusion method that fuses hyperspectral images with high fidelity is studied and proposed in this paper. First, the hyperspectral image is preprocessed before the morphological close operation. The close operation extracts band characteristics to reduce the dimensionality of the hyperspectral image, and the spectral data are smoothed at the same time to avoid discontinuities, combining spatial and spectral information. On this basis, the mean-shift method is adopted to register key frames. Finally, the selected key frames are fused into one image by the pyramid fusion method. Experimental results show that this method fuses hyperspectral images with high quality: the attributes of the fused image are better than those of the original spectral images, achieving the objective of fusion.
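The morphological close operation used in the preprocessing is a dilation followed by an erosion. A NumPy sketch with a flat 3×3 structuring element (the element size and the toy band are arbitrary choices for illustration):

```python
import numpy as np

def dilate(img, k=3):
    """Grayscale dilation (local max) with a flat k-by-k structuring element."""
    p = k // 2
    padded = np.pad(np.asarray(img, float), p, mode='edge')
    out = np.full(img.shape, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def erode(img, k=3):
    """Grayscale erosion (local min), via duality with dilation."""
    return -dilate(-np.asarray(img, float), k)

def closing(img, k=3):
    """Morphological close: dilation then erosion; fills small dark gaps."""
    return erode(dilate(img, k), k)

band = np.ones((7, 7))
band[3, 3] = 0.0               # a small dark gap in one spectral band
closed = closing(band)
```

The close operation never darkens a pixel (closing(img) >= img), which is why it smooths small dips in a band without disturbing the larger structures.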

  19. Method for Ultrasonic Imaging and Device for Performing the Method

    Science.gov (United States)

    Madaras, Eric I. (Inventor)

    1997-01-01

A method for ultrasonic imaging of interior structures and flaws in a test specimen with smooth or irregular contact surfaces, in which an ultrasonic transducer is coupled acoustically to the contact surface via a plurality of ultrasonic wave guides with equal delay times. The wave guides are thin and bendable, so they adapt to variations in the distance between the transducer and different parts of the contact surface by bending more or less. All parts of the irregular contact surface accordingly receive sound waves that are in phase, even when the contact surface is irregular, so a coherent sound wave is infused into the test specimen. The wave guides can be arranged in the form of an ultrasonic brush, with a flat head for coupling to a flat transducer, and free bristles that can be pressed against the test specimen. By bevelling the bristle ends at a suitable angle, shear mode waves can be infused into the test specimen from a longitudinal mode transducer.

  20. Numerical study of impeller-driven von Karman flows via a volume penalization method

    CERN Document Server

    Kreuzahler, Sebastian; Homann, Holger; Ponty, Yannick; Grauer, Rainer

    2013-01-01

Simulations of impeller-driven flows in cylindrical geometry are performed via direct numerical simulations (DNS) and compared to flows obtained in the von Karman flow experiments. The geometry of rotating impellers assembled from several basic geometric objects is modeled via a penalization method and implemented in a massively parallel pseudo-spectral Navier-Stokes solver. We performed simulations of impellers with different blade curvatures, in particular one resembling the so-called TM28 configuration used in water experiments. The decomposition into poloidal and toroidal components and the mean velocity fields from our simulations are quantitatively in agreement with experimental results. We analyzed the flow structure close to the impeller blades and found different vortex topologies.

  1. Automated segmentation of middle hepatic vein in non-contrast x-ray CT images based on an atlas-driven approach

    Science.gov (United States)

    Kitagawa, Teruhiko; Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kondo, Hiroshi; Kanematsu, Masayuki; Hoshi, Hiroaki

    2008-03-01

In order to support the diagnosis of hepatic diseases, understanding the anatomical structures of hepatic lobes and hepatic vessels is necessary. Although viewing and understanding the hepatic vessels in contrast-media-enhanced CT images is easy, observing the hepatic vessels in non-contrast X-ray CT images, which are widely used for screening purposes, is difficult. We are developing a computer-aided diagnosis (CAD) system to support liver diagnosis based on non-contrast X-ray CT images. This paper proposes a new approach to segment the middle hepatic vein (MHV), a key structure (landmark) for separating the liver region into left and right lobes. Extraction and classification of hepatic vessels are difficult in non-contrast X-ray CT images because the contrast between hepatic vessels and other liver tissues is low. Our approach uses an atlas-driven method with the following three stages. (1) Construction of liver atlases of left and right hepatic lobes using a learning dataset. (2) Fully-automated enhancement and extraction of hepatic vessels in liver regions. (3) Extraction of the MHV based on the results of (1) and (2). The proposed approach was applied to 22 normal liver cases of non-contrast X-ray CT images. The preliminary results show that the proposed approach succeeds in MHV extraction in 14 of the 22 cases.

  2. Blind Image Deblurring Driven by Nonlinear Processing in the Edge Domain

    Directory of Open Access Journals (Sweden)

    Stefania Colonnese

    2004-12-01

    Full Text Available This work addresses the problem of blind image deblurring, that is, of recovering an original image observed through one or more unknown linear channels and corrupted by additive noise. We resort to an iterative algorithm, belonging to the class of Bussgang algorithms, based on alternating a linear and a nonlinear image estimation stage. In detail, we investigate the design of a novel nonlinear processing acting on the Radon transform of the image edges. This choice is motivated by the fact that the Radon transform of the image edges well describes the structural image features and the effect of blur, thus simplifying the nonlinearity design. The effect of the nonlinear processing is to thin the blurred image edges and to drive the overall blind restoration algorithm to a sharp, focused image. The performance of the algorithm is assessed by experimental results pertaining to restoration of blurred natural images.

  3. Medical Image Compression using Wavelet Decomposition for Prediction Method

    CERN Document Server

    Ramesh, S M

    2010-01-01

This paper offers a simple, lossless method for compression of medical images. The method is based on wavelet decomposition of the medical images followed by correlation analysis of the coefficients; the correlation analysis forms the basis of a prediction equation for each sub-band. Predictor variable selection is performed through a coefficient graphic method to avoid the multicollinearity problem and to achieve high prediction accuracy and compression rate. The method is applied to MRI and CT images. Results show that the proposed approach gives a high compression rate for MRI and CT images compared with state-of-the-art methods.
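One level of the (unnormalized, averaging) Haar wavelet decomposition that such schemes start from can be written directly. On smooth image regions the three detail bands come out near zero, which is what makes the subsequent prediction and coding effective; this is a generic sketch, not the paper's specific wavelet.

```python
import numpy as np

def haar_1level(img):
    """One level of the 2D (unnormalized, averaging) Haar wavelet transform.
    Returns the approximation band LL and detail bands LH, HL, HH."""
    a = np.asarray(img, float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0     # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0     # row-wise difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

flat = np.full((8, 8), 10.0)                 # a perfectly smooth region
ll, lh, hl, hh = haar_1level(flat)
```

Each band has half the resolution of the input in both directions, so the four bands together preserve the pixel count and the transform is invertible (hence lossless).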

  4. Comparison of interpolating methods for image resampling.

    Science.gov (United States)

    Parker, J; Kenyon, R V; Troxel, D E

    1983-01-01

When resampling an image to a new set of coordinates (for example, when rotating an image), there is often a noticeable loss in image quality. To preserve image quality, the interpolating function used for the resampling should be an ideal low-pass filter. To determine which limited-extent convolving functions would provide the best interpolation, five functions were compared: A) nearest neighbor, B) linear, C) cubic B-spline, D) high-resolution cubic spline with edge enhancement (a = -1), and E) high-resolution cubic spline (a = -0.5). The functions which extend over four picture elements (C, D, E) were shown to have a better frequency response than those which extend over one (A) or two (B) pixels. The nearest neighbor function shifted the image up to one-half a pixel. Linear and cubic B-spline interpolation tended to smooth the image. The best response was obtained with the high-resolution cubic spline functions. The location of the resampled points with respect to the initial coordinate system has a dramatic effect on the response of the sampled interpolating function: the data are exactly reproduced when the points are aligned, and the response has the most smoothing when the resampled points are equidistant from the original coordinate points. Thus, at the expense of some increase in computing time, image quality can be improved by resampling with the high-resolution cubic spline function rather than the nearest neighbor, linear, or cubic B-spline functions.
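The two high-resolution cubic spline functions compared above are instances of Keys' cubic convolution kernel with parameter a. A sketch of the kernel and of 1D resampling with it (the index clipping at the borders is a simplification):

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; a = -0.5 gives the high-resolution
    cubic spline, a = -1 the edge-enhancing variant compared in the paper."""
    x = np.abs(np.asarray(x, float))
    out = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
    out[m2] = a * x[m2] ** 3 - 5 * a * x[m2] ** 2 + 8 * a * x[m2] - 4 * a
    return out

def resample_1d(samples, t, a=-0.5):
    """Value at fractional position t from the four nearest samples."""
    i = int(np.floor(t))
    offsets = np.arange(i - 1, i + 3)
    idx = np.clip(offsets, 0, len(samples) - 1)   # crude border handling
    return float(np.dot(samples[idx], cubic_kernel(t - offsets, a)))

samples = np.arange(10, dtype=float)              # a linear ramp
```

The kernel is 1 at x = 0 and 0 at integer offsets, so aligned points are reproduced exactly, matching the "data are exactly reproduced when the points are aligned" observation.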

  5. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It makes it possible to perform fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and face. The full source code of the developed application is also provided as an attachment. The main window of the program during dynamic analysis of the foot thermal image.

  6. A corrected method of distorted printed circuit board image

    Institute of Scientific and Technical Information of China (English)

    Qiao Nao-Sheng; Ye Yu-Tang; Huang Yong-Lin

    2011-01-01

This paper proposes a method for correcting distorted images based on adaptive control. First, the adaptive control relationship between pixel positions in the distorted image and its corrected image is given by polynomial fitting, so that control point pairs between the two images are found. Second, the image distortion centre and the polynomial coefficients are obtained with the least squares method, from which the relationship of each control point pair is deduced. During the distorted-image processing, the gray values of the corrected image are converted to integers using bilinear interpolation. Finally, experiments are performed to correct two distorted printed circuit board images. The results are good, and the mean square errors of the residuals are small.
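The gray-value step can be illustrated with a plain bilinear interpolation routine of the kind described (the paper additionally rounds the result to an integer gray level); the border handling and toy image are illustrative.

```python
import numpy as np

def bilinear(img, y, x):
    """Gray value at a non-integer position (y, x), weighted from the four
    surrounding pixels (indices are clipped at the lower-right border)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
```

Each corrected pixel is produced by mapping its coordinates back into the distorted image with the fitted polynomial and sampling there with this routine.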

  7. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.
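The common filtering framework described above can be sketched on a single gray channel: at each pixel, neighbours deviating from the 3×3 median by more than a threshold are excluded, and the remaining neighbours are averaged. The threshold and window size are illustrative; the paper operates on luminance and chrominance channels and couples the chrominance filtering strength to image brightness.

```python
import numpy as np

def robust_mean_filter(img, thresh=30.0):
    """At each pixel: take the 3x3 neighbourhood, drop neighbours deviating
    from the neighbourhood median by more than `thresh`, average the rest."""
    p = np.pad(np.asarray(img, float), 1, mode='edge')
    out = np.empty(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = p[y:y + 3, x:x + 3].ravel()
            med = np.median(win)
            keep = win[np.abs(win - med) <= thresh]   # never empty: median stays
            out[y, x] = keep.mean()
    return out

noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0            # one granular noise impulse
clean = robust_mean_filter(noisy)
```

A single impulse is removed completely because it always deviates from the window median, while flat regions pass through unchanged.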

  8. Separation method of heavy-ion particle image from gamma-ray mixed images using an imaging plate

    CERN Document Server

    Yamadera, A; Ohuchi, H; Nakamura, T; Fukumura, A

    1999-01-01

We have developed a method for separating alpha-ray and gamma-ray images recorded with an imaging plate (IP). The IP, from which the first image was read out by an image reader, was annealed at 50 deg. C for 2 h in a drying oven, and a second image was then read out. It was found that the annealing ratio, k, defined as the ratio of the photo-stimulated luminescence (PSL) density at the first measurement to that at the second measurement, differs between alpha rays and gamma rays. By subtracting the second image multiplied by a factor of k from the first image, the alpha-ray image was separated from the mixed alpha- and gamma-ray images. This method was applied to identify the images of high-energy helium, carbon and neon particles using the heavy-ion medical accelerator, HIMAC. (author)
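The subtraction step can be written out directly. Since a pure gamma exposure scales by exactly k between the two readouts, subtracting k times the second image cancels the gamma component and leaves the alpha image (scaled by 1 − k_gamma/k_alpha, because the alpha signal fades faster). The PSL values and annealing ratios below are invented for illustration.

```python
import numpy as np

def separate_alpha(first_psl, second_psl, k_gamma):
    """Remove the gamma component from a mixed first-readout PSL image.
    k_gamma: annealing ratio (first/second readout) measured for pure gamma
    rays. The gamma part scales by exactly k_gamma between the readouts, so
    it cancels; the alpha part survives, scaled by (1 - k_gamma/k_alpha)."""
    return first_psl - k_gamma * second_psl

# invented PSL images: alpha fades faster (k_alpha = 4) than gamma (k_gamma = 2)
alpha1 = np.array([[8.0, 0.0], [0.0, 0.0]])    # alpha track in one corner
gamma1 = np.full((2, 2), 6.0)                  # uniform gamma background
first = alpha1 + gamma1                        # first readout
second = alpha1 / 4.0 + gamma1 / 2.0           # second readout, after annealing
alpha_only = separate_alpha(first, second, k_gamma=2.0)
```

Here the uniform gamma background vanishes entirely and the alpha track survives at half its original amplitude (1 − 2/4 = 0.5).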

  9. Mathematical and statistical methods for multistatic imaging

    CERN Document Server

    Ammari, Habib; Jing, Wenjia; Kang, Hyeonbae; Lim, Mikyoung; Sølna, Knut; Wang, Han

    2013-01-01

    This book covers recent mathematical, numerical, and statistical approaches for multistatic imaging of targets with waves at single or multiple frequencies. The waves can be acoustic, elastic or electromagnetic. They are generated by point sources on a transmitter array and measured on a receiver array. An important problem in multistatic imaging is to quantify and understand the trade-offs between data size, computational complexity, signal-to-noise ratio, and resolution. Another fundamental problem is to have a shape representation well suited to solving target imaging problems from multistatic data. In this book the trade-off between resolution and stability when the data are noisy is addressed. Efficient imaging algorithms are provided, and their resolution and stability with respect to measurement noise are analyzed. It also shows that high-order polarization tensors provide an accurate representation of the target. Moreover, a dictionary-matching technique based on new invariants for the generalized ...

  10. Experimental and Other Breast Imaging Methods

    Science.gov (United States)

    ... optical imaging with other tests like MRI or 3D mammography to help diagnose breast cancer. Molecular breast ... radioactive particle to detect cancer cells. The PEM scanner is approved by the Food and Drug Administration ( ...

  11. Quantum dynamic imaging theoretical and numerical methods

    CERN Document Server

    Ivanov, Misha

    2011-01-01

    Studying and using light or "photons" to image and then to control and transmit molecular information is among the most challenging and significant research fields to emerge in recent years. One of the fastest growing areas involves research in the temporal imaging of quantum phenomena, ranging from molecular dynamics in the femto (10^-15 s) time regime for atomic motion to the atto (10^-18 s) time scale of electron motion. In fact, the attosecond "revolution" is now recognized as one of the most important recent breakthroughs and innovations in the science of the 21st century. A major participant in the development of ultrafast femto and attosecond temporal imaging of molecular quantum phenomena has been theory and numerical simulation of the nonlinear, non-perturbative response of atoms and molecules to ultrashort laser pulses. Therefore, imaging quantum dynamics is a new frontier of science requiring advanced mathematical approaches for analyzing and solving spatial and temporal multidimensional partial differ...

  12. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Science.gov (United States)

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2016-12-12

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods and their benefits and challenges, followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regard to age of user, degree of error and cost.

  13. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, the fuzzy regions containing edges are detected, and the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments verify that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  14. A New Method for Human Microcirculation Image Enhancement

    Institute of Scientific and Technical Information of China (English)

    CHEN Yuan; ZHAO Zhi-min; LIU Lei; LI Peng

    2008-01-01

    Microcirculation images often have uneven illumination and low contrast from the acquisition process, which affect image recognition and subsequent processing. This paper presents a new method for microcirculatory image illumination correction and contrast enhancement based on the Contourlet transform. Initially, the image illumination model is extracted by the Contourlet transform and the uneven illumination is corrected. Next, in order to restrain noise and enhance image contrast, a probability function associated with noise coefficients and edge coefficients is established and applied to all Contourlet coefficients. Then, a nonlinear enhancement function is applied to the modified Contourlet coefficients to adaptively enhance image contrast. Finally, the enhanced image is obtained by the inverse Contourlet transform. We compare this approach with other contrast enhancement methods; results show that our method performs better, which may be helpful for clinical diagnostics of microcirculation.

  15. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size, and image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods and, as a result of our analysis, try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performed on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which interpolation method is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images.
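
    The evaluation protocol used in the survey (scale the test image down, rescale to the original size with the same algorithm, and compare against the original) can be sketched for a single method; nearest-neighbour resizing and mean squared error below stand in for the nine interpolation methods and the various quality metrics actually compared.

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    return [[img[min(h - 1, int(y * h / new_h))][min(w - 1, int(x * w / new_w))]
             for x in range(new_w)] for y in range(new_h)]

def mse(a, b):
    """Mean squared error between two equal-sized images."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

def round_trip_error(img, factor=2):
    """Downscale by `factor`, rescale to the original size with the
    same algorithm, and report the reconstruction error."""
    h, w = len(img), len(img[0])
    small = resize_nearest(img, h // factor, w // factor)
    restored = resize_nearest(small, h, w)
    return mse(img, restored)
```

    A constant image survives the round trip exactly; any image with fine detail accumulates a positive error, and the survey ranks methods by such errors (among other aspects).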

  16. A New Robust Image Matching Method Based on Distance Reciprocal

    Institute of Scientific and Technical Information of China (English)

    赵春江; 施文康; 邓勇

    2004-01-01

    Object matching between two-dimensional images is an important problem in computer vision. The purpose of object matching is to decide the similarity between two objects. A new robust image matching method based on the distance reciprocal, which is grounded in human visual perception, is presented. The method is simple and effective, and it is robust against noise. Experiments show that this method outperforms the Hausdorff distance when recognizing images corrupted by noise.

  17. Advanced methods in synthetic aperture radar imaging

    Science.gov (United States)

    Kragh, Thomas

    2012-02-01

    For over 50 years our world has been mapped and measured with synthetic aperture radar (SAR). A SAR system operates by transmitting a series of wideband radio-frequency pulses towards the ground and recording the resulting backscattered electromagnetic waves as the system travels along some one-dimensional trajectory. By coherently processing the recorded backscatter over this extended aperture, one can form a high-resolution 2D intensity map of the ground reflectivity, which we call a SAR image. The trajectory, or synthetic aperture, is achieved by mounting the radar on an aircraft, spacecraft, or even on the roof of a car traveling down the road, and allows for a diverse set of applications and measurement techniques for remote sensing applications. It is quite remarkable that the sub-centimeter positioning precision and sub-nanosecond timing precision required to make this work properly can in fact be achieved under such real-world, often turbulent, vibrationally intensive conditions. Although the basic principles behind SAR imaging and interferometry have been known for decades, in recent years an explosion of data exploitation techniques enabled by ever-faster computational horsepower have enabled some remarkable advances. Although SAR images are often viewed as simple intensity maps of ground reflectivity, SAR is also an exquisitely sensitive coherent imaging modality with a wealth of information buried within the phase information in the image. Some of the examples featured in this presentation will include: (1) Interferometric SAR, where by comparing the difference in phase between two SAR images one can measure subtle changes in ground topography at the wavelength scale. (2) Change detection, in which carefully geolocated images formed from two different passes are compared. (3) Multi-pass 3D SAR tomography, where multiple trajectories can be used to form 3D images. (4) Moving Target Indication (MTI), in which Doppler effects allow one to detect and

  18. Data-Driven Methods for the Detection of Causal Structures in Process Technology

    Directory of Open Access Journals (Sweden)

    Christian Kühnert

    2014-11-01

    Full Text Available In modern industrial plants, process units are strongly cross-linked with each other, and disturbances occurring in one unit potentially become plant-wide. This can lead to a flood of alarms at the supervisory control and data acquisition system, hiding the original fault causing the disturbance. Hence, one major aim in fault diagnosis is to backtrack the propagation path of the disturbance and to localize the root cause of the fault. Since detecting correlation in the data is not sufficient to describe the direction of the propagation path, cause-effect dependencies among process variables need to be detected. Process variables that show a strong causal impact on other variables in the process come into consideration as being the root cause. In this paper, different data-driven methods are proposed, compared and combined that can detect causal relationships in data while relying solely on process data. The information on causal dependencies is used for localization of the root cause of a fault. All proposed methods consist of a statistical part, which determines whether the disturbance traveling from one process variable to a second is significant, and a quantitative part, which calculates the causal information the first process variable has about the second. The methods are tested on simulated data from a chemical stirred-tank reactor and on a laboratory plant.
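
    The two-part structure of the proposed methods (a significance test plus a quantitative measure of causal information) is in the spirit of Granger causality: x is a causal candidate for y if the past of x reduces the error of predicting y from its own past. A minimal lag-1, intercept-free sketch, not the authors' estimators:

```python
import random

def lag1_granger_score(x, y):
    """Relative reduction in residual sum of squares when y is predicted
    from (y[t-1], x[t-1]) instead of y[t-1] alone.  A score near 1
    suggests the past of x carries strong information about y; the
    statistical-significance part of the pipeline is omitted here."""
    y_t, y_p, x_p = y[1:], y[:-1], x[:-1]

    def rss(pred):
        return sum((t - p) ** 2 for t, p in zip(y_t, pred))

    # Restricted model: y[t] = a * y[t-1]  (least squares, no intercept)
    a_r = sum(p * t for p, t in zip(y_p, y_t)) / sum(p * p for p in y_p)
    rss_r = rss([a_r * p for p in y_p])

    # Full model: y[t] = a * y[t-1] + b * x[t-1]  (2x2 normal equations)
    s_yy = sum(p * p for p in y_p)
    s_xx = sum(q * q for q in x_p)
    s_xy = sum(p * q for p, q in zip(y_p, x_p))
    t_y = sum(p * t for p, t in zip(y_p, y_t))
    t_x = sum(q * t for q, t in zip(x_p, y_t))
    det = s_yy * s_xx - s_xy * s_xy
    a_f = (t_y * s_xx - t_x * s_xy) / det
    b_f = (t_x * s_yy - t_y * s_xy) / det
    rss_f = rss([a_f * p + b_f * q for p, q in zip(y_p, x_p)])

    return (rss_r - rss_f) / rss_r if rss_r else 0.0

# Synthetic demo data: y is driven by the past of x, not vice versa.
random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(300)]
y = [0.0] + [0.8 * x[i - 1] + 0.05 * random.gauss(0.0, 1.0)
             for i in range(1, 300)]
```

    On this data the forward score is close to 1 while the reverse score stays near 0, reproducing the asymmetry that lets such methods orient a propagation path.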

  19. Standard test method to determine the performance of tiled roofs to wind-driven rain

    Directory of Open Access Journals (Sweden)

    Sánchez de Rojas, M. I.

    2008-09-01

    Full Text Available The extent to which roof coverings can resist water penetration from the combination of wind and rain, commonly referred to as wind-driven rain, is important for the design of roofs. A new draft European Standard, prEN 15601 (1), specifies a method of test to determine the performance of the roof covering against wind-driven rain. The combined action of wind and rain varies considerably with the geographical location of a building and the associated differences in rain and wind climate. Three wind-rain conditions and one deluge condition, covering Northern Europe Coastal, Central Europe and Southern Europe, are specified in the draft standard, each subdivided into four wind speeds and rainfall rates to be applied in the test. The draft does not contain information on the level of acceptable performance.

  20. Characterization of oscillatory instability in lid driven cavity flows using lattice Boltzmann method

    Science.gov (United States)

    Anupindi, Kameswararao; Lai, Weichen; Frankel, Steven

    2014-01-01

    In the present work, the lattice Boltzmann method (LBM) is applied to simulate flow in three-dimensional lid-driven cubic and deep cavities. The developed code is first validated by simulating flow in a cubic lid-driven cavity at Reynolds numbers of 1000 and 12000, after which we study the effect of cavity depth on the steady-oscillatory transition Reynolds number in cavities with depth aspect ratios of 1, 2 and 3. Turbulence modeling is performed through large eddy simulation (LES) using the classical Smagorinsky sub-grid scale model to arrive at an optimum mesh size for all the simulations. The simulation results indicate that the first Hopf bifurcation Reynolds number correlates negatively with cavity depth, which is consistent with observations from two-dimensional deep cavity flow data available in the literature. The cubic cavity displays a steady flow field up to a Reynolds number of 2100 and a delayed anti-symmetry-breaking oscillatory field at a Reynolds number of 2300, which is restored to a symmetry-preserving oscillatory flow field at 2350. Deep cavities, on the other hand, only transition from a steady to an anti-symmetry-breaking flow field upon increase of the Reynolds number in the range explored. As the present work involved a set of time-dependent calculations for several Reynolds numbers and cavity depths, the parallel performance of the code was evaluated a priori by running it on up to 4096 cores. The computational time for these runs shows close to linear speed-up over a wide range of processor counts depending on problem size, which establishes the feasibility of a thorough search process such as the one presently undertaken. PMID:24587561

  1. Data-driven and hybrid coastal morphological prediction methods for mesoscale forecasting

    Science.gov (United States)

    Reeve, Dominic E.; Karunarathna, Harshinie; Pan, Shunqi; Horrillo-Caraballo, Jose M.; Różyński, Grzegorz; Ranasinghe, Roshanka

    2016-03-01

    It is now common for coastal planning to anticipate changes anywhere from 70 to 100 years into the future. The process models developed for scheme design or for large-scale oceanography are currently inadequate for this task, which has prompted the development of a plethora of alternative methods. Some, such as reduced-complexity or hybrid models, simplify the governing equations while retaining the processes considered to govern observed morphological behaviour. The computational cost of these models is low and they have proven effective in exploring morphodynamic trends and improving our understanding of mesoscale behaviour. One drawback is that there is no generally agreed set of principles on which to base the simplifying assumptions, and predictions can vary considerably between models. An alternative approach is data-driven techniques based entirely on the analysis and extrapolation of observations. Here, we discuss the application of some of the better-known and emerging methods in this category and argue that, with the increasing availability of observations from coastal monitoring programmes and the development of more sophisticated statistical analysis techniques, data-driven models provide a valuable addition to the armoury of methods available for mesoscale prediction. The continuation of established monitoring programmes is paramount, and those that provide contemporaneous records of the driving forces and the shoreline response are the most valuable in this regard. In the second part of the paper we discuss recent research that combines some of the hybrid techniques with data analysis methods to synthesise a more consistent means of predicting mesoscale coastal morphological evolution. While encouraging in certain applications, a universally applicable approach has yet to be found. The route to linking different model types is highlighted as a major challenge requiring further research to establish its viability.

  2. Efficient hybrid method for time reversal superresolution imaging

    Institute of Scientific and Technical Information of China (English)

    Xiaohua Wang; Wei Gao; Bingzhong Wang

    2015-01-01

    An efficient hybrid time reversal (TR) imaging method based on the signal subspace and the noise subspace is proposed for electromagnetic superresolution detection and imaging. First, the locations of targets are estimated by the transmitting-mode decomposition of the TR operator (DORT) method, which employs the signal subspace. Then, the TR multiple signal classification (TR-MUSIC) method, which employs the noise subspace, is used in the estimated target area to obtain superresolution imaging of the targets. Two examples, with homogeneous and inhomogeneous background media respectively, are considered. The results show that the proposed hybrid method has advantages in CPU time and memory cost because of the combination of coarse and fine imaging.
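
    The DORT stage can be illustrated with a real-valued toy: the dominant eigenvector of the time-reversal operator K^T K aligns with the steering vector of the strongest scatterer, so correlating it with steering vectors over a search grid localizes the target. The amplitude-only Green's function, antenna layout and grid below are all assumptions, and the TR-MUSIC refinement stage is omitted.

```python
def steering(antennas, point):
    """Toy amplitude-only Green's function: 1/distance from each
    antenna to `point` (a stand-in for the physical propagator)."""
    return [1.0 / (1e-9 + ((ax - point[0]) ** 2 + (ay - point[1]) ** 2) ** 0.5)
            for ax, ay in antennas]

def dort_image(K, antennas, grid, iters=50):
    """DORT sketch: power iteration on the time-reversal operator K^T K
    yields the dominant eigenvector, which for a single strong scatterer
    is proportional to its steering vector; correlating it with steering
    vectors over the grid gives a localization map."""
    n = len(K)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]  # K v
        v = [sum(K[j][i] * w[j] for j in range(n)) for i in range(n)]  # K^T (K v)
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    image = {}
    for p in grid:
        g = steering(antennas, p)
        gnorm = sum(c * c for c in g) ** 0.5
        image[p] = abs(sum(a * b for a, b in zip(v, g))) / gnorm
    return image
```

    With a single-scatterer multistatic matrix K = g g^T the map peaks exactly at the scatterer position, which is the coarse estimate TR-MUSIC would then refine.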

  3. DEVELOPMENT OF IMAGE SELECTION METHOD USING GRAPH CUTS

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2016-06-01

    Full Text Available 3D models have come into wide use with the spread of freely available software, and the enormous numbers of images that can now be easily acquired are increasingly used to create them. Creating 3D models from a huge number of images, however, takes a lot of time and effort, so efficiency in 3D measurement is required, as is measurement accuracy. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph consisting of nodes and edges: the nodes correspond to the images to be used, and the edges connecting nodes represent relationships between images, with costs given by the accuracies of the orientation elements. For efficiency, the image connectivity graph should be constructed with a small number of edges. Once the graph is built, image selection can be treated as a combinatorial optimization problem, and the graph cuts technique can be applied. In the process of 3D reconstruction, low-quality images and near-duplicate images are also detected and removed. Experiments confirm the significance of the proposed method and its potential for efficient and accurate 3D measurement.

  4. Development of Image Selection Method Using Graph Cuts

    Science.gov (United States)

    Fuse, T.; Harada, R.

    2016-06-01

    3D models have come into wide use with the spread of freely available software, and the enormous numbers of images that can now be easily acquired are increasingly used to create them. Creating 3D models from a huge number of images, however, takes a lot of time and effort, so efficiency in 3D measurement is required, as is measurement accuracy. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph consisting of nodes and edges: the nodes correspond to the images to be used, and the edges connecting nodes represent relationships between images, with costs given by the accuracies of the orientation elements. For efficiency, the image connectivity graph should be constructed with a small number of edges. Once the graph is built, image selection can be treated as a combinatorial optimization problem, and the graph cuts technique can be applied. In the process of 3D reconstruction, low-quality images and near-duplicate images are also detected and removed. Experiments confirm the significance of the proposed method and its potential for efficient and accurate 3D measurement.

  5. Activity-Based Costing (ABC) and Time-Driven Activity-Based Costing (TDABC): Applicable Methods for University Libraries?

    National Research Council Canada - National Science Library

    Kate-Riin Kont; Signe Jantson

    2011-01-01

    ..., such as “activity-based costing” (ABC) and “time-driven activity-based costing” (TDABC), focusing on the strengths and weaknesses of both methods to determine which of these two is suitable for application in university libraries. Methods ...

  6. A fast level set method for synthetic aperture radar ocean image segmentation.

    Science.gov (United States)

    Huang, Xiaoxia; Huang, Bo; Li, Hongga

    2009-01-01

    Segmentation of high-noise imagery like Synthetic Aperture Radar (SAR) images is still one of the most challenging tasks in image processing. While the level set approach, based on the analysis of the motion of an interface, can be used to address this challenge, its cell-based iterations may make image segmentation remarkably slow, especially for large images. For this reason fast level set algorithms such as narrow band and fast marching have been proposed. Built upon these, this paper presents an improved fast level set method for SAR ocean image segmentation. The method depends on both an intensity-driven speed and curvature flow, which result in a stable and smooth boundary. Notably, it is optimized to track moving interfaces, keeping up with the point-wise boundary propagation using a single list and a fast up-wind scheme iteration. The list facilitates efficient insertion and deletion of pixels on the propagation front, while the local up-wind scheme updates the motion of the curvature front instead of solving partial differential equations. Experiments on the extraction of surface slick features from ERS-2 SAR images substantiate the efficacy of the proposed fast level set method.

  7. Methods and systems for producing compounded ultrasound images

    DEFF Research Database (Denmark)

    2012-01-01

    Disclosed is a method for producing compounded ultrasound images by beamforming a first and a second low-resolution image using data from a first ultrasound emission, beamforming a third and a fourth low-resolution image using data from a second ultrasound emission, summing said first and said...

  8. An innovative lossless compression method for discrete-color images.

    Science.gov (United States)

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, GIS images, and binary images. The method comprises two main components. The first is a fixed-size codebook encompassing 8×8-bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images and are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method achieves about 90% compression in most cases, outperforming JBIG-2 by 5%-20% for binary images and by 2%-6.3% on average for discrete-color images.
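
    The two-component structure (a codebook hit emits a Huffman code, a miss falls back to a secondary coder) can be sketched as below. The escape marker and raw-bit fallback are only a stand-in for the paper's row-column reduction coding, and a real coder must choose an escape sequence that cannot be confused with any Huffman code.

```python
def encode_blocks(img, codebook, escape="1111111"):
    """Encode a binary image in 8x8 blocks: a block found in the fixed
    codebook emits its Huffman code; otherwise an escape marker plus the
    raw 64 bits is emitted (a stand-in for row-column reduction coding).
    Assumes image dimensions are multiples of 8 and that `escape` is not
    a prefix of any code in `codebook`."""
    bits = []
    for y in range(0, len(img), 8):
        for x in range(0, len(img[0]), 8):
            block = "".join(str(img[y + dy][x + dx])
                            for dy in range(8) for dx in range(8))
            bits.append(codebook.get(block, escape + block))
    return "".join(bits)
```

    A frequent block (for example an all-white block) thus costs one short code, while rare blocks pay the fallback price; the compression ratios quoted above come from how heavily real map and binary images are dominated by codebook hits.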

  9. A system and method for imaging body areas

    NARCIS (Netherlands)

    Goethals, F.P.C.

    2013-01-01

    The invention relates to a system for imaging one or more external human body areas comprising a photographic device configured to acquire, store and output an image or images of the one or more body areas. The invention also relates to a method for determining a probable disease state of an externa

  10. Emissivity corrected infrared method for imaging anomalous structural heat flows

    Science.gov (United States)

    Del Grande, Nancy K.; Durbin, Philip F.; Dolan, Kenneth W.; Perkins, Dwight E.

    1995-01-01

    A method for detecting flaws in structures using dual band infrared radiation. Heat is applied to the structure being evaluated. The structure is scanned for two different wavelengths and data obtained in the form of images. Images are used to remove clutter to form a corrected image. The existence and nature of a flaw is determined by investigating a variety of features.

  11. Methods of filtering the graph images of the functions

    Directory of Open Access Journals (Sweden)

    Олександр Григорович Бурса

    2017-06-01

    Full Text Available The theoretical aspects of cleaning raster images of scanned function graphs from digital, chromatic and luminance distortions using computer graphics techniques are considered. The basic types of distortion characteristic of graph images are described. To suppress them, several methods are suggested that provide high-quality resulting images while preserving their topological features. The paper describes techniques developed and improved by the authors: a method of cleaning distortions by iterative contrasting, based on step-by-step increases of the graph image contrast by 1%; a method of restoring distorted small entities, based on thinning the known contrast-increase filter matrix (the allowable dilution radii of the convolution kernel that preserve the graph lines have been established); and a technique integrating the contrast-based noise reduction method and the small-entity restoration method with the known σ-filter. Each method is theoretically substantiated. The developed methods treat graph images both as a whole (global processing) and by fragments (local processing). Metrics assessing the quality of the resulting image under global and local processing have been chosen and justified, and the corresponding formulas are given. The proposed set of methods for cleaning grayscale distortions from graph images is adaptive to the form of the image carrier and to the level and distribution of distortion in the image. Test results on a representative sample of images confirm its effectiveness.

  12. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The most widely used method is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage to extract parameters that are then used in the application stage. The sets used for training and testing, 13 and 5 images respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results show that the training-based methods prevail for the images and the range of noise levels considered.
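
    For reference, the MAD baseline the three methods are compared against is commonly implemented with Donoho's rule: on the finest diagonal wavelet subband, which is dominated by noise, sigma is estimated as median(|HH|)/0.6745. The one-level Haar transform below is an assumption (any orthogonal wavelet works), and this is the generic estimator rather than the paper's exact procedure.

```python
import random
from statistics import median

def haar_diagonal(img):
    """Finest-level diagonal (HH) Haar coefficients over
    non-overlapping 2x2 blocks of a grayscale image."""
    coeffs = []
    for y in range(0, len(img) - 1, 2):
        for x in range(0, len(img[0]) - 1, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            coeffs.append((a - b - c + d) / 2.0)
    return coeffs

def estimate_sigma(img):
    """Donoho's rule: sigma ~ median(|HH|) / 0.6745 for Gaussian noise."""
    return median(abs(c) for c in haar_diagonal(img)) / 0.6745

# Demo: a 64x64 pure-noise image with known sigma = 5.
random.seed(1)
noise = [[random.gauss(0.0, 5.0) for _ in range(64)] for _ in range(64)]
```

    Because the orthonormal HH coefficients of i.i.d. Gaussian noise keep the same standard deviation, the estimate on the demo image recovers a value close to 5.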

  13. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, in this study, we propose an imaging method based on the fusion of sub-images from frequency-diversity distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  14. An improved Bayesian matting method based on image statistic characteristics

    Science.gov (United States)

    Sun, Wei; Luo, Siwei; Wu, Lina

    2015-03-01

    Image matting is an important task in image and video editing and has been studied for more than 30 years. In this paper we propose an improved interactive matting method. Starting from a coarse user-guided trimap, we first perform a color estimation based on texture and color information and use the result to refine the original trimap. Then, with the new trimap, we apply a soft matting process, which is an improved Bayesian matting with smoothness constraints. Experimental results on natural images show that this method is useful, especially for images with similar texture features in the background or images for which it is hard to give a precise trimap.

  15. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.
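
    The Wallis filtering used here as a pre-processing step normalizes local mean and contrast. Below is a rough numpy-only sketch of the idea (my own simplification, using a plain box window computed via an integral image rather than the weighted windows often used in practice):

    ```python
    import numpy as np

    def box_mean(img, r):
        """Local mean over a (2r+1)x(2r+1) window via an integral image,
        with edge padding so the output size equals the input size."""
        p = np.pad(img, r, mode='edge')
        s = np.cumsum(np.cumsum(p, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        n = 2 * r + 1
        return (s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]) / (n * n)

    def wallis_filter(img, r=8, target_mean=127.0, target_std=40.0, eps=1e-6):
        """Map each pixel so the local mean/std approach the target values."""
        img = np.asarray(img, dtype=float)
        m = box_mean(img, r)
        v = np.maximum(box_mean(img * img, r) - m * m, 0.0)
        return (img - m) * (target_std / (np.sqrt(v) + eps)) + target_mean
    ```

    The window radius and the target statistics are tuning parameters; small local standard deviations are guarded by `eps` to avoid amplifying flat regions without bound.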

  16. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

    Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to achieve the best possible match between extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. Article in Lithuanian

  17. A SAR Image Registration Method Based on SIFT Algorithm

    Science.gov (United States)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.

  18. A novel image fusion method using WBCT and PCA

    Institute of Scientific and Technical Information of China (English)

    Qiguang Miao; Baoshu Wang

    2008-01-01

    A novel image fusion algorithm based on wavelet-based contourlet transform (WBCT) and principal component analysis (PCA) is proposed. The PCA method is adopted for the low-frequency components. Using the proposed algorithm to choose the greater of the active measures, the region consistency test is performed for the high-frequency components. Experiments show that the proposed method works better in preserving the edge and texture information than the wavelet transform method and the Laplacian pyramid (LP) method do in image fusion. Four indicators for the fused image are given to compare the proposed method with other methods.
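
    The PCA weighting of low-frequency components can be illustrated in a few lines. This is a generic sketch of PCA-based fusion weights (function names are mine), not the WBCT pipeline itself: the weights come from the principal eigenvector of the 2x2 covariance of the two source bands.

    ```python
    import numpy as np

    def pca_fusion_weights(a, b):
        """Weights from the principal eigenvector of the 2x2 covariance of
        the two source (low-frequency) images, normalized to sum to 1."""
        cov = np.cov(np.stack([a.ravel(), b.ravel()]))
        vals, vecs = np.linalg.eigh(cov)
        v = np.abs(vecs[:, np.argmax(vals)])
        return v / v.sum()

    def pca_fuse(a, b):
        """Weighted combination of the two bands using the PCA weights."""
        w = pca_fusion_weights(a, b)
        return w[0] * a + w[1] * b
    ```

    The source with the larger variance (more structure) receives the larger weight, which is the usual rationale for PCA fusion of approximation coefficients.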

  19. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
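
    The estimator described, N = M / P, and its sampling uncertainty can be sketched numerically. This is my own delta-method illustration of the quantities discussed (with a design effect inflating the binomial variance of the survey proportion), not the authors' published calculator:

    ```python
    import math

    def population_size_estimate(M, p_hat, n, design_effect=1.0, z=1.96):
        """Multiplier-method estimate N = M / P with a delta-method 95% CI.
        M: unique objects distributed (or service users); p_hat: survey
        proportion reporting receipt; n: survey sample size; design_effect
        inflates the binomial variance for respondent-driven sampling."""
        N = M / p_hat
        var_p = design_effect * p_hat * (1.0 - p_hat) / n
        se_N = M * math.sqrt(var_p) / p_hat ** 2  # |dN/dP| = M / P^2
        return N, (N - z * se_N, N + z * se_N)
    ```

    As the abstract notes, the interval widens sharply for small P and large design effects, which is what motivates distributing more unique objects or lengthening the reference period.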

  20. Quantum Imaging: New Methods and Applications

    Science.gov (United States)

    2012-01-23


  1. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical researches, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. At first, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched with a short time and higher repeatability. Then, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Thirdly, the rough overlapping zones of the images preprocessed were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourthly, the features were corresponded by matching algorithm and the transformation parameters were estimated, then the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as stitching the microscope images in the field of virtual microscope for the purpose of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
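
    The phase-correlation step used above to find the rough overlap between two images can be sketched with plain numpy. This is the textbook normalized cross-power-spectrum technique, not the authors' exact code:

    ```python
    import numpy as np

    def phase_correlation(a, b):
        """Estimate the integer (dy, dx) shift such that
        np.roll(a, (dy, dx), axis=(0, 1)) matches b, via the peak of the
        normalized cross-power spectrum."""
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = np.conj(Fa) * Fb
        cross /= np.abs(cross) + 1e-12      # keep only the phase
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint correspond to negative shifts.
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return int(dy), int(dx)
    ```

    In a stitching pipeline the estimated offset restricts feature extraction (e.g. the improved SURF described above) to the overlapping zones, which is what makes the overall procedure fast.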

  2. Markov random field driven region-based active contour model (MaRACel): application to medical image segmentation.

    Science.gov (United States)

    Xu, Jun; Monaco, James P; Madabhushi, Anant

    2010-01-01

    In this paper we present a Markov random field (MRF) driven region-based active contour model (MaRACel) for medical image segmentation. State-of-the-art region-based active contour (RAC) models assume that every spatial location in the image is statistically independent of the others, thereby ignoring valuable contextual information. To address this shortcoming we incorporate a MRF prior into the AC model, further generalizing Chan & Vese's (CV) and Rousson and Deriche's (RD) AC models. This incorporation requires a Markov prior that is consistent with the continuous variational framework characteristic of active contours; consequently, we introduce a continuous analogue to the discrete Potts model. To demonstrate the effectiveness of MaRACel, we compare its performance to those of the CV and RD AC models in the following scenarios: (1) the qualitative segmentation of a cancerous lesion in a breast DCE-MR image and (2) the qualitative and quantitative segmentations of prostatic acini (glands) in 200 histopathology images. Across the 200 prostate needle core biopsy histology images, MaRACel yielded an average sensitivity, specificity, and positive predictive value of 71%, 95%, 74% with respect to the segmented gland boundaries; the CV and RD models have corresponding values of 19%, 81%, 20% and 53%, 88%, 56%, respectively.

  3. Laser-driven 6-16 keV x-ray imaging and backlighting with spherical crystals

    Science.gov (United States)

    Schollmeier, M.; Rambo, P. K.; Schwarz, J.; Smith, I. C.; Porter, J. L.

    2014-10-01

    Laser-driven x-ray self-emission imaging or backlighting of High Energy Density Physics experiments requires brilliant sources with keV energies and x-ray crystal imagers with high spatial resolution of about 10 μm. Spherically curved crystals provide the required resolution when operated at near-normal incidence, which minimizes image aberrations due to astigmatism. However, this restriction dramatically limits the range of suitable crystal and spectral line combinations. We present a survey of crystals and spectral lines for x-ray backlighting and self-emission imaging with energies between 6 and 16 keV. Ray-tracing simulations including crystal rocking curves have been performed to predict image brightness and spatial resolution. Results have been benchmarked to experimental data using both Sandia's 4 kJ, ns Z-Beamlet and 200 J, ps Z-Petawatt laser systems. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. 2014-15552A.

  4. Study on an antagonist differentiated heated lid driven-cavity enclosing a tube: lattice Boltzmann method

    Science.gov (United States)

    Ma, Xiaoyan; Pellerin, Nicolas; Reggio, Marcelo; Bennacer, Rachid

    2017-05-01

    The lattice-Boltzmann multiple-relaxation-time (MRT) method is applied to study a conversion system combining forced and natural convection in a cavity. The top surface moves horizontally at a fixed speed, the two vertical walls are held at different constant temperatures, and both the bottom and top walls are assumed adiabatic. We consider a "non-cooperating" situation, in which dynamic and buoyancy forces counterbalance. The cavity contains a circular cylinder placed at various positions. Boundary conditions for velocity and temperature have been applied to handle the non-Cartesian boundary of the cylinder. In the lattice Boltzmann method we adopt the double-distribution model for calculating both the thermal and hydrodynamic fields. The D2Q5 and D2Q9 lattices are chosen to perform the simulations for a wide range of Reynolds and Rayleigh numbers. By calculating the average Nusselt number, we also investigated the influence of different obstacle positions on the characteristics of flow and heat transfer. The results show the influence of the obstacle position on the dimensionless numbers and thus on the heat transfer behavior inside the cavity. They also indicate that the governing parameters are related to the driving power of the sliding upper surface. Contribution to the topical issue "Materials for Energy harvesting, conversion and storage II (ICOME 2016)", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui

  5. Hypothesis-driven methods to augment human cognition by optimizing cortical oscillations

    Directory of Open Access Journals (Sweden)

    Jörn M. Horschig

    2014-06-01

    Full Text Available Cortical oscillations have been shown to represent fundamental functions of a working brain, e.g. communication, stimulus binding, error monitoring, and inhibition, and are directly linked to behavior. Recent studies intervening with these oscillations have demonstrated effective modulation of both the oscillations and behavior. In this review, we collect evidence in favor of how hypothesis-driven methods can be used to augment cognition by optimizing cortical oscillations. We elaborate their potential usefulness for three target groups: healthy elderly, patients with attention deficit/hyperactivity disorder, and healthy young adults. We discuss the relevance of neuronal oscillations in each group and show how each of them can benefit from the manipulation of functionally-related oscillations. Further, we describe methods for manipulation of neuronal oscillations including direct brain stimulation as well as indirect task alterations. We also discuss practical considerations about the proposed techniques. In conclusion, we propose that insights from neuroscience should guide techniques to augment human cognition, which in turn can provide a better understanding of how the human brain works.

  6. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, KL

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition, smooth

  7. Ultrasound Imaging Methods for Breast Cancer Detection

    NARCIS (Netherlands)

    Ozmen, N.

    2014-01-01

    The main focus of this thesis is on modeling acoustic wavefield propagation and implementing imaging algorithms for breast cancer detection using ultrasound. As a starting point, we use an integral equation formulation, which can be used to solve both the forward and inverse problems. This thesis c

  8. An Interactive Image Segmentation Method in Hand Gesture Recognition.

    Science.gov (United States)

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-27

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation Maximization algorithm learn the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform using the sparse representation algorithm, proving that the segmentation of hand gesture images helps to improve the recognition accuracy.
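
    The GMM/EM modelling step described above can be illustrated on one-dimensional intensities. The sketch below (my own, fitting just two components to stand in for foreground/background) shows the E- and M-steps that learn the mixture parameters before the graph-based energy minimization:

    ```python
    import numpy as np

    def em_gmm_1d(x, n_iter=50):
        """Two-component 1-D Gaussian mixture fitted by EM."""
        x = np.asarray(x, dtype=float)
        mu = np.percentile(x, [25, 75]).astype(float)  # quartile init
        sigma = np.array([x.std(), x.std()]) + 1e-6
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component per point.
            d = (x[:, None] - mu) / sigma
            log_p = -0.5 * d ** 2 - np.log(sigma) + np.log(pi)
            log_p -= log_p.max(axis=1, keepdims=True)
            r = np.exp(log_p)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and standard deviations.
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        return pi, mu, sigma
    ```

    In the full method the fitted likelihoods become the unary terms of the Gibbs energy; here they are only demonstrated on synthetic bimodal data.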

  9. Human body region enhancement method based on Kinect infrared imaging

    Science.gov (United States)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

    To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is used. Firstly, for the infrared images acquired by Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images and improve their overall contrast. Secondly, to better enhance the human body region, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, to further improve the human body region, Laplacian Pyramid decomposition is adopted to enhance the contour-improved human body region. Meanwhile, the background area without the human body region is processed by bilateral filtering to improve the overall effect. Theoretical analysis and experimental verification show that the proposed method can effectively enhance the human body region of such infrared images.
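
    The Laplacian-pyramid step can be illustrated with a deliberately simple scheme: 2x2 block averaging for downsampling and nearest-neighbour upsampling (my own simplification of the usual Gaussian-kernel pyramid; image dimensions are assumed divisible by 2^levels). Enhancement would scale the band-pass levels before reconstruction.

    ```python
    import numpy as np

    def build_pyramid(img, levels=3):
        """Laplacian pyramid: band-pass levels plus a low-pass residual.
        Reconstruction is exact by construction."""
        laps, cur = [], np.asarray(img, dtype=float)
        for _ in range(levels):
            h, w = cur.shape
            down = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            up = down.repeat(2, axis=0).repeat(2, axis=1)
            laps.append(cur - up)  # detail lost by the downsample
            cur = down
        return laps, cur

    def reconstruct(laps, low):
        """Invert build_pyramid by upsampling and adding back the details."""
        cur = low
        for lap in reversed(laps):
            cur = cur.repeat(2, axis=0).repeat(2, axis=1) + lap
        return cur
    ```

    Multiplying entries of `laps` by a gain greater than one before calling `reconstruct` boosts edges and fine structure, which is the role the decomposition plays in the pipeline above.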

  10. Spectral methods for spatial resolution improvement of digital images

    Institute of Scientific and Technical Information of China (English)

    郝鹏威; 徐冠华; 朱重光

    1999-01-01

    A general matrix formula is proposed for signal spectral aliasing at various or mutual resolutions, the concept of a spectral aliasing matrix is introduced, and some general spectral methods for spatial resolution improvement from multiple frames of undersampled digital images are discussed. A simplified iterative method of parallel row-action projection for spectral de-aliasing is also given. The method can be applied to multiframe images with various spatial resolutions, relative displacements, dissimilar point spread functions, different image radiances, and additive random noise. Experiments with a resolution test pattern and an image of a vertical fin confirmed the convergence and effectiveness of the algorithms.

  11. WaveSeq: a novel data-driven method of detecting histone modification enrichments using wavelets.

    Directory of Open Access Journals (Sweden)

    Apratim Mitra

    Full Text Available BACKGROUND: Chromatin immunoprecipitation followed by next-generation sequencing is a genome-wide analysis technique that can be used to detect various epigenetic phenomena such as, transcription factor binding sites and histone modifications. Histone modification profiles can be either punctate or diffuse which makes it difficult to distinguish regions of enrichment from background noise. With the discovery of histone marks having a wide variety of enrichment patterns, there is an urgent need for analysis methods that are robust to various data characteristics and capable of detecting a broad range of enrichment patterns. RESULTS: To address these challenges we propose WaveSeq, a novel data-driven method of detecting regions of significant enrichment in ChIP-Seq data. Our approach utilizes the wavelet transform, is free of distributional assumptions and is robust to diverse data characteristics such as low signal-to-noise ratios and broad enrichment patterns. Using publicly available datasets we showed that WaveSeq compares favorably with other published methods, exhibiting high sensitivity and precision for both punctate and diffuse enrichment regions even in the absence of a control data set. The application of our algorithm to a complex histone modification data set helped make novel functional discoveries which further underlined its utility in such an experimental setup. CONCLUSIONS: WaveSeq is a highly sensitive method capable of accurate identification of enriched regions in a broad range of data sets. WaveSeq can detect both narrow and broad peaks with a high degree of accuracy even in low signal-to-noise ratio data sets. WaveSeq is also suited for application in complex experimental scenarios, helping make biologically relevant functional discoveries.

  12. Kinetic modelling of [{sup 11}C]flumazenil using data-driven methods

    Energy Technology Data Exchange (ETDEWEB)

    Miederer, Isabelle; Ziegler, Sibylle I.; Liedtke, Christoph; Miederer, Matthias; Drzezga, Alexander [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Spilker, Mary E. [GE Global Research, Computational Biology and Biostatistics Laboratory, Niscayuna, NY (United States); Sprenger, Till [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Wagner, Klaus J. [Technische Universitaet Muenchen, Department of Anaesthesiology, Klinikum rechts der Isar, Munich (Germany); Boecker, Henning [Universitaet Bonn, Department of Radiology, Bonn (Germany)

    2009-04-15

    [{sup 11}C]Flumazenil (FMZ) is a benzodiazepine receptor antagonist that binds reversibly to central-type gamma-aminobutyric acid (GABA-A) sites. A validated approach for analysis of [{sup 11}C]FMZ is the invasive one-tissue (1T) compartmental model. However, it would be advantageous to analyse FMZ binding with whole-brain pixel-based methods that do not require a-priori hypotheses regarding preselected regions. Therefore, in this study we compared invasive and noninvasive data-driven methods (Logan graphical analysis, LGA; multilinear reference tissue model, MRTM2; spectral analysis, SA; basis pursuit denoising, BPD) with the 1T model. We focused on two aspects: (1) replacing the arterial input function analyses with a reference tissue method using the pons as the reference tissue, and (2) shortening the scan protocol from 90 min to 60 min. Dynamic PET scans were conducted in seven healthy volunteers with arterial blood sampling. Distribution volume ratios (DVRs) were selected as the common outcome measure. The SA, LGA with and without arterial input, and MRTM2 agreed best with the 1T model DVR values. The invasive and noninvasive BPD were slightly less well correlated. The full protocol of a 90-min emission data performed better than the 60-min protocol, but the 60-min protocol still delivered useful data, as assessed by the coefficient of variation, and the correlation and bias analyses. This study showed that the SA, LGA and MRTM2 are valid methods for the quantification of benzodiazepine receptor binding with [{sup 11}C]FMZ using an invasive or noninvasive protocol, and therefore have the potential to reduce the invasiveness of the procedure. (orig.)
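
    The Logan graphical analysis (LGA) referenced above fits a straight line to transformed late-time tissue curves; the slope approximates the DVR. A toy numpy sketch follows (my own; the degenerate synthetic input, a target curve exactly proportional to the reference, makes the fitted slope recover the known ratio, which would not hold for realistic kinetics):

    ```python
    import numpy as np

    def logan_reference_dvr(t, ct, cref, t_star=20.0):
        """Logan reference-tissue plot: for t >= t_star, fit
        y = int(CT)/CT against x = int(Cref)/CT; the slope ~ DVR."""
        # Cumulative trapezoidal integrals of both time-activity curves.
        int_ct = np.concatenate([[0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))])
        int_cref = np.concatenate([[0.0], np.cumsum(0.5 * (cref[1:] + cref[:-1]) * np.diff(t))])
        mask = t >= t_star
        y = int_ct[mask] / ct[mask]
        x = int_cref[mask] / ct[mask]
        slope, _ = np.polyfit(x, y, 1)
        return slope
    ```

    In the study above the pons serves as the reference region; here the "reference" is just a synthetic decaying curve used to exercise the regression.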

  13. Parameter estimation method for blurred cell images from fluorescence microscope

    Science.gov (United States)

    He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin

    2016-10-01

    Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, resulting in a low signal-to-noise ratio (SNR) and poor image quality, which affects the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images, based on the power-law properties of the power spectrum of cell images, is proposed. The circular Radon transform (CRT) is used to identify the zero mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm, and the parameters are then optimized through the gradient descent method. Synthetic experiments confirmed that the proposed method effectively increases the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results on actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods in terms of qualitative visual appearance as well as quantitative gradient and PSNR measures.

  14. Architecture-Driven Level Set Optimization: From Clustering to Subpixel Image Segmentation.

    Science.gov (United States)

    Balla-Arabe, Souleymane; Gao, Xinbo; Ginhac, Dominique; Brost, Vincent; Yang, Fan

    2016-12-01

    Thanks to their effectiveness, active contour models (ACMs) are of great interest for computer vision scientists. The level set methods (LSMs) refer to the class of geometric active contours. Compared with other ACMs, in addition to subpixel accuracy, they have the intrinsic ability to automatically handle topological changes. Nevertheless, the LSMs are computationally expensive. A solution to their time consumption problem is hardware acceleration using massively parallel devices such as graphics processing units (GPUs). But the question is: which accuracy can we reach while still maintaining an algorithm adequate for massively parallel architectures? In this paper, we attempt to push the compromise between speed and accuracy, and between efficiency and effectiveness, to a higher level compared with state-of-the-art methods. To this end, we designed a novel architecture-aware hybrid central processing unit (CPU)-GPU LSM for image segmentation. The initialization step, using the well-known k-means algorithm, is fast although executed on a CPU, while the evolution equation of the active contour is inherently local and therefore suitable for GPU-based acceleration. The incorporation of local statistics in the level set evolution allows our model to detect new boundaries which are not extracted by the clustering algorithm. Compared with some cutting-edge LSMs, the introduced model is faster, more accurate, less prone to local minima, and therefore suitable for automatic systems. Furthermore, it allows two-phase clustering algorithms to benefit from the numerous LSM advantages, such as the ability to achieve robust and subpixel-accurate segmentation results with smooth and closed contours. Intensive experiments demonstrate, objectively and subjectively, the good performance of the introduced framework both in terms of speed and accuracy.
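
    The CPU-side k-means initialization mentioned above can be sketched in a few lines. This is generic Lloyd iteration on pixel intensities (the deterministic percentile initialization is my own choice), not the paper's GPU code:

    ```python
    import numpy as np

    def kmeans_1d(x, k=2, n_iter=30):
        """Lloyd's algorithm on 1-D intensities; returns sorted cluster
        centers and per-sample labels relative to the sorted centers."""
        x = np.asarray(x, dtype=float)
        # Deterministic initialization from evenly spaced percentiles.
        centers = np.percentile(x, np.linspace(25, 75, k))
        for _ in range(n_iter):
            labels = np.argmin(np.abs(x[:, None] - centers), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        centers = np.sort(centers)
        labels = np.argmin(np.abs(x[:, None] - centers), axis=1)
        return centers, labels
    ```

    The resulting two-phase labeling would seed the level-set contour, which the GPU-side evolution then refines to subpixel accuracy.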

  15. Comparison of Two Distance Based Alignment Method in Medical Imaging

    Science.gov (United States)

    2001-10-25

    ...very helpful to register large datasets of contours or surfaces, commonly encountered in medical imaging. They do not require special ordering or... G. Bulan and C. Ozturk, Institute of Biomedical Engineering, Bogazici University.

  16. A New Method of CT Medical Images Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    SUN Feng-rong; LIU Wei; WANG Chang-yu; MEI Liang-mo

    2004-01-01

    A new method of contrast enhancement using the multiscale edge representation of images is proposed in this paper and applied to CT medical image processing. Compared to the traditional window technique, our method is adaptive and better meets the demands of radiology clinics. The clinical experiment results show the practicality and potential applied value of our method in the field of CT medical image contrast enhancement.

  17. Simultaneous multi-headed imager geometry calibration method

    Science.gov (United States)

    Tran, Vi-Hoa; Meikle, Steven Richard; Smith, Mark Frederick

    2008-02-19

    A method for calibrating multi-headed high sensitivity and high spatial resolution dynamic imaging systems, especially those useful in the acquisition of tomographic images of small animals. The method of the present invention comprises: simultaneously calibrating two or more detectors to the same coordinate system; and functionally correcting for unwanted detector movement due to gantry flexing.

  18. Generalized Newton Method for Energy Formulation in Image Processing

    Science.gov (United States)

    2008-04-01

    Fig. 5.2: Deblurring of the clown image with different Newton-like methods: (a) blurred, (b) Newton with LH, (c) standard Newton, (d) Newton with Ls. ...In the proposed method, the inner product can be adapted to the problem at hand. In the second example, Figure 5.2, the 330 × 291 clown image was additionally

  19. Data analysis for mass spectrometry imaging : methods and applications

    NARCIS (Netherlands)

    Abdelmoula, Walid Mohamed

    2017-01-01

    In this dissertation we developed a number of automatic methods for multi-modal data registration, mainly between mass spectrometry imaging, imaging microscopy, and the Allen Brain Atlas. We have shown the importance of these methods for performing large scale preclinical biomarker discovery

  20. Flight path-driven mitigation of wavefront curvature effects in SAR images

    Science.gov (United States)

    Doerry, Armin W.

    2009-06-23

    A wavefront curvature effect associated with a complex image produced by a synthetic aperture radar (SAR) can be mitigated based on which of a plurality of possible flight paths is taken by the SAR when capturing the image. The mitigation can be performed differently for different ones of the flight paths.

  1. An LSB Method Of Image Steganographic Techniques

    Directory of Open Access Journals (Sweden)

    Lalit Kumar Jain

    2015-04-01

    Full Text Available The art of information hiding has received much attention in recent years, as security of information has become a big concern in this internet era and sharing of sensitive information via a common communication channel has become inevitable. Steganography means hiding a secret message (the embedded message) within a larger one (the source cover) in such a way that an observer cannot detect the presence or contents of the hidden message [1]. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet [2]. This paper intends to give an overview of image Steganography, its uses and techniques. It also attempts to identify the requirements of a good Steganography algorithm and briefly reflects on which Steganography techniques are more suitable for which applications.
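
    The LSB technique surveyed above is simple enough to show directly. The following is a minimal sketch (not the paper's code) that writes one payload bit into the least significant bit of each pixel of an 8-bit grayscale cover:

    ```python
    import numpy as np

    def embed_lsb(cover, bits):
        """Write one payload bit into the LSB of each leading pixel."""
        stego = cover.copy().astype(np.uint8)
        flat = stego.ravel()
        if len(bits) > flat.size:
            raise ValueError("payload too large for cover image")
        # Clear each LSB with & 0xFE, then OR in the payload bit.
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
        return stego

    def extract_lsb(stego, n_bits):
        """Read the payload back from the leading pixels' LSBs."""
        return (stego.ravel()[:n_bits] & 1).astype(np.uint8)
    ```

    Since only the least significant bit changes, no pixel moves by more than one gray level, which is why LSB embedding is visually undetectable at low payloads.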

  2. The method of infrared image simulation based on the measured image

    Science.gov (United States)

    Lou, Shuli; Liu, Liang; Ren, Jiancun

    2015-10-01

    The development of infrared imaging guidance technology has promoted research into infrared imaging simulation, whose key element is the generation of IR images. IR image generation is valuable in both military and economic terms. In order to solve the problems of credibility and economy in infrared scene generation, a method based on measured images is proposed. Building on research into the optical properties of ship targets and sea backgrounds, ship-target images with various attitudes are extracted from recorded images using digital image processing technology. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target according to the field of view and the distance between the target and the sensor. The gray scale of the ship-target image is adjusted to simulate the change in target radiation according to the distance between the viewpoint and the target and the atmospheric transmission. Frames of recorded infrared images without the target are interpolated to simulate the high frame rate of the missile. The processed ship-target images and sea-background infrared images are synthesized to obtain infrared scenes for different viewpoints. Experiments show that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.

  3. Application of mathematical modelling methods for acoustic images reconstruction

    Science.gov (United States)

    Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.

    2016-04-01

    The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing signals received from an antenna array. We show that the multiplicative method gives better resolution. The study includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We show that the analytical estimation method decreases the image reconstruction time in the case of a linear antenna array implementation.
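    The additive/multiplicative contrast described above can be sketched in a few lines: additive SAFT averages the delayed element signals at each image point, while the multiplicative variant forms a sign-preserving product, which suppresses points where any element disagrees. This is a minimal illustration, not the authors' implementation; the function name, the geometric-mean normalization, and the toy signals are assumptions.

```python
import numpy as np

def saft_pixel(signals, delays, fs, multiplicative=False):
    """Focus one image point from an array of A-scans.
    signals: (n_elements, n_samples); delays: round-trip times (s) per element."""
    idx = np.round(np.asarray(delays) * fs).astype(int)
    samples = signals[np.arange(len(idx)), idx]
    if multiplicative:
        # Sign-preserving product, normalized by a geometric mean.
        prod = np.prod(samples)
        return np.sign(prod) * np.abs(prod) ** (1.0 / len(samples))
    return samples.sum() / len(samples)

# Toy case: all 3 elements see an echo of amplitude 2.0 at sample 3.
signals = np.zeros((3, 10))
signals[:, 3] = 2.0
add = saft_pixel(signals, [3.0, 3.0, 3.0], fs=1.0)
mul = saft_pixel(signals, [3.0, 3.0, 3.0], fs=1.0, multiplicative=True)

# If one element sees nothing, the product collapses to zero while the
# sum only shrinks, which is why the multiplicative image is sharper.
signals2 = signals.copy()
signals2[2, 3] = 0.0
add2 = saft_pixel(signals2, [3.0, 3.0, 3.0], fs=1.0)
mul2 = saft_pixel(signals2, [3.0, 3.0, 3.0], fs=1.0, multiplicative=True)
```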

  4. A Simple Image Encoding Method with Data Lossless Information Hiding

    OpenAIRE

    Zhi-Hui Wang; Chin-Chen Chang; Ming-Chu Li; Tzu-Chuen Lu

    2011-01-01

    In this paper, we propose a simple reversible data hiding method in the spatial domain for block truncation coding (BTC) compressed grayscale images. The BTC method compresses a block of a grayscale image to a bitmap and a pair of quantization numbers. The proposed method first embeds secret bits into a block by changing the order of those two quantization numbers. The compression rate is not enlarged by this embedding scheme. To further improve the hiding capacity, the proposed method embeds...
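    The order-swap embedding described above can be sketched compactly: BTC keeps, per block, a bitmap plus a low/high quantization pair, and storing the pair as (low, high) versus (high, low) carries one secret bit at zero cost in compression rate. This is a minimal sketch under assumed details (function names, rounding, and the convention that a bimodal block has distinct quantization levels); it is not the authors' full scheme.

```python
import numpy as np

def btc_compress(block):
    """Compress a grayscale block to (bitmap, low, high) via block truncation coding."""
    mean = block.mean()
    bitmap = block >= mean
    low = int(round(block[~bitmap].mean()))   # mean of the dark group
    high = int(round(block[bitmap].mean()))   # mean of the bright group
    return bitmap, low, high

def embed_bit(low, high, bit):
    """Hide one bit in the order of the quantization pair:
    (low, high) encodes 0, the swapped pair encodes 1 (assumes low < high)."""
    return (low, high) if bit == 0 else (high, low)

def extract_bit(a, b):
    """Recover the bit; the decoder also flips the bitmap when a > b."""
    return 0 if a <= b else 1

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 215],
                  [10, 14, 201, 212],
                  [12, 15, 204, 214]], dtype=float)
bitmap, low, high = btc_compress(block)
```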

  5. Imaging System and Method for Biomedical Analysis

    Science.gov (United States)

    2013-03-11

    fluorescent nanoparticles. Generally, Noiseux et al. teach injecting multiple fluorescent nanoparticle dyes into the food sample, imaging the sample a...example, AIDS, malaria, cholera, lymphoma, and typhoid. The present disclosure can be used to capture and count microscopic cells for application as...Base plate 214 is sealed against cover 204 by the adhesive 210. Base plate 214 can have a thickness 216 of, for example, about 100 µm. At least a

  6. Twin-image elimination apparatus and method

    OpenAIRE

    1996-01-01

    The twin-image elimination apparatus of the present invention comprises (a) a scanning light source for emitting a scanning light beam; (b) an interference device which converts the scanning light beam from the scanning light source into a spherical wave and a plane wave having temporal frequencies different from each other and combines the spherical and plane waves together; (c) a scanner for scanning an object with the combined light beam from the interference device; (d) a photodetector fo...

  7. Robust color image hiding method in DCT domain

    Institute of Scientific and Technical Information of China (English)

    LI Qing-zhong; YU Chen; CHU Dong-sheng

    2006-01-01

    This paper presents a robust color image hiding method based on the YCbCr color system in the discrete cosine transform (DCT) domain, which can hide a secret color image behind a public color cover image and is compatible with the international image compression standard JPEG. To overcome the grave distortion problem in the restored secret image, this paper proposes a new embedding scheme consisting of reasonable partition of a pixel value and sign embedding. Moreover, based on the human visual system (HVS) and fuzzy theory, this paper also presents a fuzzy classification method for DCT sub-blocks to realize adaptive selection of the embedding strength. The experimental results show that the maximum distortion error in pixel value for the extracted secret image is ±1, and the color cover image retains good quality after embedding a large amount of data.

  8. Application of a data-driven simulation method to the reconstruction of the coronal magnetic field

    Institute of Scientific and Technical Information of China (English)

    Yu-Liang Fan; Hua-Ning Wang; Han He; Xiao-Shuai Zhu

    2012-01-01

    Ever since the magnetohydrodynamic (MHD) method for extrapolation of the solar coronal magnetic field was first developed to study the dynamic evolution of twisted magnetic flux tubes, it has proven to be efficient in the reconstruction of the solar coronal magnetic field. A recent example is the so-called data-driven simulation method (DDSM), which has been demonstrated to be valid by application to analytic model solutions such as the force-free equilibrium given by Low and Lou. We use DDSM on observed magnetograms to reconstruct the magnetic field above an active region. To avoid unnecessary sensitivity to boundary conditions, we use a classical total variation diminishing Lax-Friedrichs formulation to iteratively compute the full MHD equations. In order to incorporate a magnetogram consistently and stably, the bottom boundary conditions are derived from the characteristic method. In our simulation, we change the tangential fields continually from an initial potential field to the vector magnetogram. In the relaxation, the initial potential field is changed to a nonlinear magnetic field until the MHD equilibrium state is reached. Such a stable equilibrium is expected to be able to represent the solar atmosphere at a specified time. By inputting the magnetograms before and after the X3.4 flare that occurred on 2006 December 13, we find a topological change after comparing the magnetic field before and after the flare. Some discussions are given regarding the change of magnetic configuration and current distribution. Furthermore, we compare the reconstructed field line configuration with the coronal loop observations by XRT onboard Hinode. The comparison shows a relatively good correlation.

  9. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available INTRODUCTION: Respondent-driven sampling (RDS is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. METHODS: Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods, and also of presentation for interview if offered a coupon by age and socioeconomic status group. RESULTS: Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared-errors by 19-29%, but had little effect for sexual activity or HIV status. CONCLUSIONS: Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. 
Further evaluation of
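    The weighting scheme described above (inverse probability of being offered a coupon times the probability of presenting for interview given a coupon) reduces to a standard inverse-probability-weighted proportion, which can be sketched as follows. The function name and the group labels, probabilities, and data are illustrative assumptions, not values from the study.

```python
def weighted_proportion(values, p_offer, p_present, target):
    """Estimate the population share of `target`, weighting each respondent
    by 1 / (P(offered a coupon) * P(presents for interview | offered))."""
    weights = [1.0 / (po * pp) for po, pp in zip(p_offer, p_present)]
    hit = sum(w for v, w in zip(values, weights) if v == target)
    return hit / sum(weights)

# Hypothetical recruitment probabilities by group: respondents who were
# less likely to be recruited stand in for more of the population.
groups    = ["low", "low", "high", "high", "high"]
p_offer   = [0.2,   0.2,   0.4,    0.4,    0.4]
p_present = [0.9,   0.9,   0.5,    0.5,    0.5]
est = weighted_proportion(groups, p_offer, p_present, "low")
```

With these made-up numbers the weighted estimate of the "low" share (20/47, about 0.43) differs from the naive sample proportion (2/5), which is the point of the correction.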

  10. Reduced-reference image quality assessment using moment method

    Science.gov (United States)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image through partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We claim that the first and second moments of the wavelet coefficients of natural images follow an approximately regular pattern that is disturbed by different types of distortion, and that this disturbance is relevant to human perception of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is easy to implement and has relatively low computational complexity. The experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.

  11. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results; detecting and clustering the duplicate images together facilitates users' viewing. A novel method is presented to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared to methods based on comparing hash values of visual words, this method is more robust to visual-feature-level alterations of the images. Experiments demonstrate the effectiveness of this method.

  12. High contrast optical imaging methods for image guided laser ablation of dental caries lesions

    OpenAIRE

    LaMantia, Nicole R.; Tom, Henry; Chan, Kenneth H.; Simon, Jacob C.; Darling, Cynthia L.; Fried, Daniel

    2014-01-01

    Laser based methods are well suited for automation and can be used to selectively remove dental caries to minimize the loss of healthy tissues and render the underlying enamel more resistant to acid dissolution. The purpose of this study was to determine which imaging methods are best suited for image-guided ablation of natural non-cavitated carious lesions on occlusal surfaces. Multiple caries imaging methods were compared including near-IR and visible reflectance and quantitative light fluo...

  13. Computer driven optical keratometer and method of evaluating the shape of the cornea

    Science.gov (United States)

    Baroth, Edmund C. (Inventor); Mouneimme, Samih A. (Inventor)

    1994-01-01

    An apparatus and method for measuring the shape of the cornea utilize only one reticle to generate a pattern of rings projected onto the surface of a subject's eye. The reflected pattern is focused onto an imaging device such as a video camera, and a computer compares the reflected pattern with a reference pattern stored in the computer's memory. The differences between the reflected and stored patterns are used to calculate the deformation of the cornea, which may be useful for pre- and post-operative evaluation of the eye by surgeons.

  14. Stepping Control Method of Linear Displacement Mechanism Driven by TRUM Based on PSoC

    Institute of Scientific and Technical Information of China (English)

    Wang Junping; Liu Weidong; Zhu Hua; Li Yijun; Li Jianjun

    2015-01-01

    A method based on a programmable system-on-chip (PSoC) is proposed to realize high-resolution stepping motion control of a linear displacement mechanism driven by traveling wave rotary ultrasonic motors (TRUM). The intelligent controller of the stepping ultrasonic motor consists of a PSoC microprocessor. A continuous square wave signal is sent out by the pulse width modulator (PWM) module inside the PSoC and converted by a power amplifier circuit into the sinusoidal signal essential to the motor's normal working. Subsequently, signal impulse transmission is realized by the counter control break, and the stepping motion of the linear displacement mechanism based on TRUM is achieved. The running status of the ultrasonic motor is controlled by an upper computer. Control commands are sent to the PSoC through an RS-232 serial communication circuit. The relevant program and control interface are written in LabView. Finally the mechanism is tested with an XL-80 laser interferometer. Test results show that the mechanism can provide stable motion and a fixed step pitch with a displacement resolution of 6 nm.

  15. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population survey methodologically because the response rate is low and members are not quite honest in their responses when probability sampling is used. The only alternative known to address the problems of previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the bias of RDS's chain-referral sampling tends to diminish as the sample gets bigger, and the process stabilizes as the waves progress. This shows that the final sample can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it should be utilized for various cases domestically as well.

  16. A DC electrophoresis method for determining electrophoretic mobility through the pressure driven negation of electro osmosis

    Science.gov (United States)

    Karam, Pascal; Pennathur, Sumita

    2016-11-01

    Characterization of the electrophoretic mobility and zeta potential of micro- and nanoparticles is important for assessing properties such as stability, charge and size. In electrophoretic techniques for such characterization, the bulk fluid motion due to the interaction between the fluid and the charged surface must be accounted for. Unlike current industrial systems, which rely on DLS and oscillating potentials to mitigate electroosmotic flow (EOF), we propose a simple alternative electrophoretic method for optically determining electrophoretic mobility using a DC electric field. Specifically, we create a system where an adverse pressure gradient counters the EOF, and design the geometry of the channel so that the flow profile of the pressure-driven flow matches that of the EOF in large regions of the channel (i.e., where we observe particle flow). Our specific COMSOL-optimized geometry is two large cross-sectional areas adjacent to a central, high-aspect-ratio channel. We show that this effectively removes EOF from a large region of the channel and allows for accurate optical characterization of electrophoretic particle mobility, regardless of the wall charge or particle size.

  17. Underwater color image segmentation method via RGB channel fusion

    Science.gov (United States)

    Xuan, Li; Mingjun, Zhang

    2017-02-01

    Aiming at the low segmentation accuracy and high computation time of existing segmentation methods applied to underwater color images, this paper proposes an underwater color image segmentation method via RGB color channel fusion. Built on thresholding methods to achieve fast segmentation, the proposed method relies on dynamic estimation of the optimal weights for RGB channel fusion to obtain a grayscale image with high foreground-background contrast, and thus reaches high segmentation accuracy. To verify the segmentation accuracy of the proposed method, the authors have conducted various underwater comparative experiments. The experimental results demonstrate that the proposed method is robust to illumination and superior to existing methods in terms of both segmentation accuracy and computation time. Moreover, a segmentation technique for image sequences is proposed for real-time autonomous underwater vehicle operations.
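    The fuse-then-threshold pipeline described above can be sketched as below: fuse R, G, B with a weight vector into one grayscale image, then apply a fast global threshold. Otsu's method is used here as a stand-in for the thresholding step, the paper's dynamic weight estimation is not reproduced, and the function names and fixed weights are assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nan_to_num(sigma_b).argmax())

def fuse_and_segment(rgb, weights):
    """Fuse the R, G, B channels with the given weights, then threshold."""
    w = np.asarray(weights, dtype=float)
    gray = (rgb.astype(float) * (w / w.sum())).sum(axis=2)
    return gray > otsu_threshold(gray)

# Toy scene: reddish foreground on a bluish background. Weighting the red
# channel heavily gives a high-contrast grayscale image before thresholding.
rgb = np.zeros((10, 10, 3), dtype=np.uint8)
rgb[:, :5] = [200, 50, 50]
rgb[:, 5:] = [50, 50, 200]
mask = fuse_and_segment(rgb, [1.0, 0.0, 0.0])
```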

  18. Study on direct measurement method of vorticity from particle images

    Institute of Scientific and Technical Information of China (English)

    RUAN Xiaodong; FU Xin; YANG Huayong

    2007-01-01

    To overcome the shortcomings of conventional methods for vorticity measurement, a new direct measurement of vorticity (DMV) method extracting vorticity from particle images was proposed. Based on the theory of fluid flow, two matched particle patterns were extracted from particle images in the DMV method. The pattern vorticity was determined from the average angular displacement of rotation between the two matched particle patterns. The method was applied to standard particle images and compared with the second- and third-order central finite difference methods. Results show that the accuracy of the DMV method is independent of the spatial resolution of the sampling, and that the uncertainty errors in the velocity measurement are not propagated into the vorticity. The method is applicable for measuring the vorticity of strongly rotational flows. The time interval of image sampling should be shortened to increase the measurement range for flows with higher shearing distortion.

  19. Adaptive Image Restoration and Segmentation Method Using Different Neighborhood Sizes

    Directory of Open Access Journals (Sweden)

    Chengcheng Li

    2003-04-01

    Full Text Available Image restoration methods based on the Bayesian framework and Markov random fields (MRF) have been widely used in the image-processing field. The basic idea of all these methods is to use the calculus of variations and mathematical statistics to average or estimate a pixel value from the values of its neighbors. After applying this averaging process to the whole image a number of times, the noisy pixels, which are abnormal values, are filtered out. Based on the Tea-trade model, which states that the closer the neighbor, the more contribution it makes, almost all of these methods use only the nearest four neighbors for calculation. In our previous research [1, 2], we extended the research on CLRS (image restoration and segmentation using a competitive learning algorithm) by enlarging the neighborhood size. The results showed that a longer neighborhood range could either improve or worsen the restoration results. We also found that the autocorrelation coefficient is an important factor in determining the proper neighborhood size. We further observed that the computational complexity increases dramatically as the neighborhood size is enlarged. This paper furthers the previous research and discusses the tradeoff between the computational complexity and the restoration improvement gained from a longer neighborhood range. We used a couple of methods to construct synthetic images with exactly the correlation coefficients we want and to determine the corresponding neighborhood size. We constructed an image with a range of correlation coefficients by blending synthetic images. Then an adaptive method to find the correlation coefficients of this image was constructed. We restored the image by applying the CLRS algorithm with different neighborhood sizes to different parts of the image according to its correlation coefficient. Finally, we applied this adaptive method to some real-world images to get improved restoration results than by using single

  20. Image Classification Workflow Using Machine Learning Methods

    Science.gov (United States)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python because it is relatively readable, has a large body of relevant third-party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, a Gaussian maximum likelihood supervised classification, and a Mahalanobis distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas, with a spatial resolution of 60 meters for the years 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software offers the ease of land use classification found in commercial packages without the expensive license.
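    Since the workflow is in Python, the K-means step above can be sketched directly on a (rows, cols, bands) array with NumPy alone. The authors use GDAL and Spectral Python for raster I/O; this standalone function, its deterministic brightness-based initialization, and the name kmeans_classify are illustrative assumptions.

```python
import numpy as np

def kmeans_classify(image, k, iters=20):
    """Cluster the pixels of a (rows, cols, bands) raster into k spectral
    classes; returns a (rows, cols) label map."""
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(float)
    # Deterministic init: k pixels evenly spaced along the brightness ordering.
    order = np.argsort(X.sum(axis=1))
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels.reshape(rows, cols)

# Tiny synthetic "raster": dark pixels on the left, bright on the right.
img = np.zeros((4, 4, 3))
img[:, :2] = 10.0
img[:, 2:] = 200.0
labels = kmeans_classify(img, 2)
```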

  1. A Fast Fractal Image Compression Coding Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. The search for the best-matched domain block is the most computationally intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme implemented on a personal computer, based on matching in a range block's neighbours, is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time and high in compression ratio, while the PSNR is almost the same as that of Barnsley's fractal block coding.

  2. Research on image matching method of big data image of three-dimensional reconstruction

    Science.gov (United States)

    Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong

    2015-12-01

    Image matching is the main step of three-dimensional reconstruction. With the development of computer processing technology, seeking the images to be matched from large image datasets acquired in different image formats, at different scales and at different locations places new demands on image matching. To establish three-dimensional reconstruction based on image matching over big data images, this paper puts forward a new, effective matching method based on the visual bag-of-words model. The main technologies include building the bag-of-words model and image matching. First, we extract SIFT feature points from the images in the database and cluster the feature points to generate the bag-of-words model. We establish inverted files based on the bag of words; the inverted files record, for each visual word, all images containing it. We perform image matching only among images filed under the same word, to improve the efficiency of matching. Finally, we build the three-dimensional model from the matched images. Experimental results indicate that this method improves matching efficiency and is suitable for the requirements of large-data reconstruction.
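    The inverted-file lookup described above can be sketched with plain dictionaries: each visual word maps to the set of images containing it, so a query only touches images that share at least one word with it. A minimal sketch; the function names and the toy word lists are assumptions.

```python
from collections import defaultdict

def build_inverted_file(image_words):
    """Map each visual word to the set of image ids containing it."""
    inv = defaultdict(set)
    for img_id, words in image_words.items():
        for w in words:
            inv[w].add(img_id)
    return inv

def candidate_matches(inv, query_words):
    """Vote over shared visual words; only images filed under the query's
    words are ever touched, which is what makes the lookup fast."""
    votes = defaultdict(int)
    for w in query_words:
        for img_id in inv.get(w, ()):
            votes[img_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

db = {"a": [1, 2, 3, 7], "b": [2, 3, 9], "c": [40, 41]}
inv = build_inverted_file(db)
matches = candidate_matches(inv, [2, 3, 7])  # image "c" is never visited
```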

  3. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presented a registration method based on the Fourier transform for multi-band images involving translation and small rotation. Although images from different bands differ a lot in intensity and features, they contain certain common information that can be exploited. A model was given in which the multi-band images have linear correlations in the least-squares sense. It is proved that the coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method were proposed. The experiments show that the model is reasonable and the results are satisfying.
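    For the pure-translation part, Fourier-based registration typically reduces to phase correlation: the normalized cross-power spectrum of two shifted images is a pure phase ramp whose inverse FFT peaks at the shift. A minimal NumPy sketch; the paper's multi-band linear-correlation model is not reproduced, and the function name and the cyclic-shift assumption are ours.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (dy, dx) translation that maps image a onto image b.
    Assumes a cyclic shift and a dominant shared pattern."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.maximum(np.abs(R), 1e-12)          # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# Demo: a random image cyclically shifted by (3, 5) is recovered exactly.
rng = np.random.default_rng(1)
a = rng.random((32, 32))
est = phase_correlation(a, np.roll(a, (3, 5), axis=(0, 1)))
```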

  4. Method of infrared image enhancement based on histogram

    Institute of Scientific and Technical Information of China (English)

    WANG Liang; YAN Jie

    2011-01-01

    Aiming at the problem of infrared image enhancement, a new histogram-based method is given. Using the gray characteristics of the target, the upper-bound threshold is selected adaptively and the histogram is processed with this threshold. After choosing the gray transform function based on the gray-level distribution of the image, the gray transformation is done during histogram equalization. Finally, the enhanced image is obtained. Compared with histogram equalization (HE), histogram double equalization (HDE) and plateau histogram equalization (PE), the simulation results demonstrate that the enhancement effect of this method is clearly superior. At the same time, its operation speed is fast and its real-time capability is excellent.
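    Of the baselines mentioned above, plateau histogram equalization (PE) is easy to sketch: clip the histogram at a plateau value before building the cumulative mapping, so a huge uniform background cannot monopolize the output gray levels. A minimal NumPy sketch; the function name and the fixed plateau are assumptions, and the paper's adaptive upper-bound selection is not reproduced.

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Histogram equalization with the histogram clipped at `plateau`."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = np.cumsum(np.minimum(hist, plateau)).astype(float)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

# A small bright target on a large flat background: plain HE would hand
# almost the whole output range to the background; clipping prevents that.
img = np.full((10, 10), 10, dtype=np.uint8)
img[0, 0] = 200
out = plateau_equalize(img, plateau=5)
```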

  5. A method of periodic pattern localization on document images

    Science.gov (United States)

    Chernov, Timofey S.; Nikolaev, Dmitry P.; Kliatskine, Vitali M.

    2015-12-01

    Periodic patterns are often present on document images as holograms, watermarks or guilloche elements, which are mostly used for fraud protection. Localization of such patterns lets an embedded OCR system vary its settings depending on pattern presence in particular image regions, and improves the precision of pattern removal so that as much useful data as possible is preserved. Many noise detection and removal methods for document images deal with unstructured noise or clutter on documents with simple backgrounds. In this paper we propose a method of periodic pattern localization on document images that uses the discrete Fourier transform and works well on documents with complex backgrounds.
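    The core DFT cue for periodicity can be sketched in a few lines: a repeating overlay concentrates spectral energy in a strong non-DC peak, while text and unstructured clutter spread it out. This is only an illustration of the cue, not the paper's localization procedure; the function name and the peak-to-mean ratio threshold are assumptions.

```python
import numpy as np

def has_periodic_pattern(region, ratio=10.0):
    """Flag a region whose 2-D spectrum has a strong non-DC peak,
    the signature of a repeating (periodic) overlay."""
    spec = np.abs(np.fft.fft2(region - region.mean()))
    spec[0, 0] = 0.0                        # ignore the DC term
    return spec.max() > ratio * (spec.mean() + 1e-12)

# A horizontal sinusoid (period 8) versus unstructured noise.
periodic = np.tile(np.sin(2 * np.pi * np.arange(32) / 8), (32, 1))
noise = np.random.default_rng(0).random((32, 32))
```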

  6. Split Bregman's optimization method for image construction in compressive sensing

    Science.gov (United States)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using the a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the l1 and l2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper will demonstrate the effectiveness of Split Bregman methods on sonar images.
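    After the splitting described above, the l1 subproblem in each Split Bregman sweep has a closed-form solution: componentwise soft-thresholding (shrinkage). A minimal sketch of that operator; the surrounding l2 solve and the Bregman variable update are omitted, and the function name is an assumption.

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: the exact minimizer of
    gamma*|d|_1 + 0.5*||d - x||^2, applied componentwise."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# Components within gamma of zero are zeroed; the rest move toward zero.
d = shrink(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0)
```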

  7. Towards a novel laser-driven method of exotic nuclei extraction-acceleration for fundamental physics and technology

    Science.gov (United States)

    Nishiuchi, M.; Sakaki, H.; Esirkepov, T. Zh.; Nishio, K.; Pikuz, T. A.; Faenov, A. Ya.; Skobelev, I. Yu.; Orlandi, R.; Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y.; Koura, H.; Kando, M.; Yamauchi, T.; Watanabe, Y.; Bulanov, S. V.; Kondo, K.; Imai, K.; Nagamiya, S.

    2016-04-01

    A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of the properties of exotic nuclei. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction-acceleration method proposed in [M. Nishiuchi et al., Phys. Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target highly charged, short-lived heavy exotic nuclei created in the target via nuclear reactions and accelerates them to a few GeV.

  8. AN IMPROVED RADIAL BASIS FUNCTION BASED METHOD FOR IMAGE WARPING

    Institute of Scientific and Technical Information of China (English)

    Nie Xuan; Zhao Rongchun; Zhang Cheng; Zhang Xiaoyan

    2005-01-01

    A new image warping method is proposed in this letter, which can warp a given image using manually defined features. Based on the radial basis interpolation function algorithm, the proposed method transforms the original optimization problem into a nonsingular linear problem by adding a one-order (affine) term and a differentiability condition. This linear system has a stable, unique solution when a suitable kernel function is chosen. Furthermore, the proposed method shows how to set up the radial basis functions in the target image so that backward re-sampling can be adopted, which yields a very smooth warped image. The experimental results show that the proposed method can implement smooth and gradual image warping with accurate interpolation of multiple anchor points.
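    The augmented linear system referred to above (radial kernel plus one-order/affine term) can be sketched as follows. The multiquadric kernel, the parameter r, and the function names are assumptions; the letter only requires a kernel choice that keeps the system nonsingular.

```python
import numpy as np

def rbf_warp_field(src_pts, dst_pts, r=1.0):
    """Fit an RBF + affine map sending src anchor points to dst anchor
    points; returns a function usable for backward re-sampling."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    n = len(src)
    phi = lambda d2: np.sqrt(d2 + r * r)          # multiquadric kernel
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((n, 1)), src])         # one-order (affine) term
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = phi(d2)
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    # Nonsingular for distinct, non-collinear anchor points.
    w = np.linalg.solve(A, rhs)

    def warp(pts):
        pts = np.asarray(pts, float)
        d2q = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return phi(d2q) @ w[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ w[n:]

    return warp

# Four anchors translated by (0.5, 0.2): the fitted field reproduces them.
src = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dst = [[0.5, 0.2], [1.5, 0.2], [0.5, 1.2], [1.5, 1.2]]
warp = rbf_warp_field(src, dst)
```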

  9. Spindle extraction method for ISAR image based on Radon transform

    Science.gov (United States)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method of spindle extraction of the target in inverse synthetic aperture radar (ISAR) images is proposed which depends on the Radon transform. Firstly, the Radon transform is used to detect all straight lines that are collinear with the line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of a line segment, and the longest of all line segments is the spindle of the target. To evaluate the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees respectively, were used in experiments; the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.

  10. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and similar variations between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong matches in order to accomplish the image mosaic. Experiments on four pairs of images show that the method is robust to changes in resolution, lighting, rotation, and scaling.

  11. A rigorous and simpler method of image charges

    Science.gov (United States)

    Ladera, C. L.; Donoso, G.

    2016-07-01

    The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted by that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which and where image charges are to be placed to solve the given electrostatics problem. Here we present an alternative image charges method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The aforementioned required uniqueness and, more importantly, guessing are therefore both altogether dispensed with. Our two new theoretical fundaments also allow the image charges method to be introduced in earlier physics courses for engineering and sciences students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
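The canonical example of the method, a point charge above a grounded conducting plane, can be checked numerically in a few lines; units and constants are normalized for illustration:

```python
import numpy as np

# Point charge q at height d above a grounded plane z = 0. The image
# charge -q at z = -d makes the plane an equipotential at V = 0, which
# is exactly the constant-potential condition the conductor imposes.
q, d = 1.0, 0.5
k = 1.0  # 1/(4*pi*eps0), absorbed into the units

def potential(x, y, z):
    r_real = np.sqrt(x ** 2 + y ** 2 + (z - d) ** 2)   # distance to the real charge
    r_img = np.sqrt(x ** 2 + y ** 2 + (z + d) ** 2)    # distance to the image charge
    return k * q / r_real - k * q / r_img

# Sample points on the conductor surface: the potential must vanish there.
xs, ys = np.meshgrid(np.linspace(-2, 2, 11), np.linspace(-2, 2, 11))
V_plane = potential(xs, ys, np.zeros_like(xs))
```

On the plane z = 0 the two distances are equal term by term, so the cancellation is exact; off the plane the superposition reproduces the physical field in the upper half-space.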

  12. DATA SYNTHESIS AND METHOD EVALUATION FOR BRAIN IMAGING GENETICS

    OpenAIRE

    Sheng, Jinhua; Kim, Sungeun; Yan, Jingwen; Moore, Jason; Saykin, Andrew; Shen, Li

    2014-01-01

    Brain imaging genetics is an emergent research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. We present initial efforts on evaluating a few SCCA methods for brain imaging genetics. This includes a data synthesis method to create...

  13. Whither RDS? An investigation of Respondent Driven Sampling as a method of recruiting mainstream marijuana users

    Directory of Open Access Journals (Sweden)

    Cousineau Marie-Marthe

    2010-07-01

Full Text Available Abstract Background An important challenge in conducting social research of specific relevance to harm reduction programs is locating hidden populations of consumers of substances like cannabis who typically report few adverse or unwanted consequences of their use. Much of the deviant, pathologized perception of drug users is historically derived from, and empirically supported by, a research emphasis on gaining ready access to users in drug treatment or in prison populations with higher incidence of problems of dependence and misuse. Because they are less visible, responsible recreational users of illicit drugs have been more difficult to study. Methods This article investigates Respondent Driven Sampling (RDS) as a method of recruiting experienced marijuana users representative of users in the general population. Based on sampling conducted in a multi-city study (Halifax, Montreal, Toronto, and Vancouver), and compared to samples gathered using other research methods, we assess the strengths and weaknesses of RDS recruitment as a means of gaining access to illicit substance users who experience few harmful consequences of their use. Demographic characteristics of the sample in Toronto are compared with those of users in a recent household survey and a pilot study of Toronto where the latter utilized nonrandom self-selection of respondents. Results A modified approach to RDS was necessary to attain the target sample size in all four cities (i.e., 40 'users' from each site). The final sample in Toronto was largely similar, however, to marijuana users in a random household survey that was carried out in the same city. Whereas well-educated, married, whites and females in the survey were all somewhat overrepresented, the two samples, overall, were more alike than different with respect to economic status and employment. Furthermore, comparison with a self-selected sample suggests that (even modified) RDS recruitment is a cost-effective way of

  14. Method for measuring anterior chamber volume by image analysis

    Science.gov (United States)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume from JPEG-formatted image files that have been converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.

  15. Image analysis benchmarking methods for high-content screen design.

    Science.gov (United States)

    Fuller, C J; Straight, A F

    2010-05-01

    The recent development of complex chemical and small interfering RNA (siRNA) collections has enabled large-scale cell-based phenotypic screening. High-content and high-throughput imaging are widely used methods to record phenotypic data after chemical and small interfering RNA treatment, and numerous image processing and analysis methods have been used to quantify these phenotypes. Currently, there are no standardized methods for evaluating the effectiveness of new and existing image processing and analysis tools for an arbitrary screening problem. We generated a series of benchmarking images that represent commonly encountered variation in high-throughput screening data and used these image standards to evaluate the robustness of five different image analysis methods to changes in signal-to-noise ratio, focal plane, cell density and phenotype strength. The analysis methods that were most reliable, in the presence of experimental variation, required few cells to accurately distinguish phenotypic changes between control and experimental data sets. We conclude that by applying these simple benchmarking principles an a priori estimate of the image acquisition requirements for phenotypic analysis can be made before initiating an image-based screen. Application of this benchmarking methodology provides a mechanism to significantly reduce data acquisition and analysis burdens and to improve data quality and information content.

  16. A Novel Visual Cryptographic Method for Color Images

    Directory of Open Access Journals (Sweden)

    Amarjot Singh

    2013-05-01

Full Text Available Visual cryptography is considered to be a vital technique for hiding visual data from intruders. Because of its importance, it finds applications in various sectors such as e-voting systems, financial documents, and copyright protection. A number of methods have been proposed in the past for encrypting color images, such as color decomposition, contrast manipulation, polynomial methods, and the use of differences in color intensity values within a color image. The major flaws in most of the earlier proposed methods are the complexity encountered when implementing them on a wide-scale basis, the problem of random pixelation, and the insertion of noise into the encrypted images. This paper presents a simple and highly resistant algorithm for visual cryptography on color images. The main advantages of the proposed cryptographic algorithm are robustness and low computational cost with structural simplicity. The proposed algorithm outperformed conventional methods when tested over sample images, as shown by key analysis, SSIM, and histogram analysis tests. In addition, the proposed method yields a much better signal-to-noise ratio for the encrypted image than the standard method. The paper also presents a worst-case analysis of the SNR values for both methods.

  17. Pairwise-Distance-Analysis-Driven Dimensionality Reduction Model with Double Mappings for Hyperspectral Image Visualization

    Directory of Open Access Journals (Sweden)

    Yi Long

    2015-06-01

Full Text Available This paper describes a novel strategy for the visualization of hyperspectral imagery based on the analysis of image pixel pairwise distances. The goal of this approach is to generate a final color image with excellent interpretability and high contrast at the cost of distorting a few pairwise distances. Specifically, the principle of equal variance is introduced to divide all hyperspectral bands into three subgroups and to ensure the energy is distributed uniformly between them, as in natural color images. Then, after detecting both normal and outlier pixels, these three subgroups are mapped into three color components of the output visualization using two different mapping (i.e., dimensionality reduction) schemes for the two types of pixels. The widely-used multidimensional scaling (MDS) is used for normal pixels, and a new objective function, taking into account the weighting of pairwise distances, is presented for the outlier pixels. The pairwise distance weighting is designed such that small pairwise distances between the outliers and their respective neighbors are emphasized and large deviations are suppressed. This produces an image with high contrast and good interpretability while retaining the detailed information content. The proposed algorithm is compared with several state-of-the-art visualization techniques and evaluated on the well-known AVIRIS hyperspectral images. The effectiveness of the proposed strategy is substantiated both visually and quantitatively.
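Classical (Torgerson) MDS, the distance-preserving mapping used here for the normal pixels, reduces to double-centering the squared distances and taking an eigendecomposition; the toy data below stands in for pixel spectra and is not from the paper:

```python
import numpy as np

def classical_mds(D, k=3):
    """Classical (Torgerson) MDS: embed n points in k dimensions so that
    their Euclidean distances reproduce the given distance matrix D as
    closely as possible. Here, one output coordinate per color channel."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # k largest eigenvalues
    w = np.clip(w[idx], 0.0, None)           # guard tiny negative eigenvalues
    return V[:, idx] * np.sqrt(w)

# Toy stand-in for pixel spectra: 20 points whose distances are exactly
# realizable in 3-D, so the embedding reproduces them to round-off.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=3)
D_embedded = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```

For real hyperspectral distances the embedding is only approximate, which is exactly the distortion the paper's weighting scheme then controls for outlier pixels.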

18. [An adaptive thresholding segmentation method for urinary sediment images].

    Science.gov (United States)

    Li, Yongming; Zeng, Xiaoping; Qin, Jian; Han, Liang

    2009-02-01

In this paper, a new method is proposed for segmenting complicated defocused urinary sediment images. The main points of the method are: (1) using wavelet transforms and morphology to remove the effect of defocusing and perform a first segmentation; (2) applying adaptive threshold processing to the subimages produced by the wavelet processing; and (3) using a 'peel off' algorithm to handle the segmentation of overlapping cells. The experimental results showed that the method was not affected by defocusing and made good use of many characteristics of the images, so it achieves very precise segmentation; it is effective for defocused urinary sediment image segmentation.
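The adaptive-threshold idea in step (2) can be sketched as a local-mean comparison; the block size, the offset and the synthetic test image below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(img, block=15, offset=0.0):
    """Threshold each pixel against the mean of its local neighborhood,
    so an uneven background (e.g. from defocus or illumination drift)
    does not need one global threshold. Returns a boolean mask."""
    local_mean = uniform_filter(img.astype(float), size=block)
    return img > local_mean + offset

# Synthetic sediment-like image: bright blobs on a background with a
# strong left-to-right illumination gradient that defeats any single
# global threshold (the dim blob is darker than the bright background).
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = xx / w * 0.8                        # background gradient
img[10:16, 10:16] += 0.3                  # blob in the dark region
img[40:46, 50:56] += 0.3                  # blob in the bright region
mask = adaptive_threshold(img, block=15, offset=0.1)
```

Both blobs are recovered while plain background pixels stay off, which is the property a global threshold cannot provide here.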

  19. Image Post-Processing Method for Visual Data Mining

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Visual data mining is one of the important approaches in data mining. Most visual data mining techniques are based on computer graphics, but few exploit image processing. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining; it is used to post-process the data mining result image and help users discover significant features and useful patterns effectively. The experiments show that the method is intuitive, easily understood and effective. It provides a new approach for visual data mining.

  20. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, and the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  1. Calibrating the pixel-level Kepler imaging data with a causal data-driven model

    CERN Document Server

    Wang, Dun; Hogg, David W; Schölkopf, Bernhard

    2015-01-01

    Astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. The precision of NASA Kepler photometry for exoplanet science---the most precise photometric measurements of stars ever made---appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level so that it can capture very fine-grained information about the variation of the spacecraft. The CPM predicts each target pixel value from a large number of pixels of other stars sharing the instrument variabilities while not containing any information on possible transits in the target star. In addition, we use the target star's future and past (auto-regr...

  2. Hiding Two Binary Images in Grayscale BMP Image via Five Modulus Method

    Directory of Open Access Journals (Sweden)

    Firas A. Jassim

    2014-05-01

Full Text Available The aim of this study is to hide two binary BMP images in a single grayscale BMP image. The widespread technique in image steganography is to hide one image (the secret image) inside another (the cover image). The proposed novel method hides two binary images in one grayscale bitmap cover image. First, the proposed technique transforms all grayscale cover image pixels into multiples of five using the Five Modulus Method (FMM). Clearly, any residue modulo five is either 0, 1, 2, 3, or 4. The transformed FMM cover image can be treated as a good host for carrying data. Since each pixel of a binary image is either 0 or 1, concatenating the corresponding bits of the two binary images gives the composite values 00, 01, 10 and 11. These concatenated values can be mapped to positive integers by a simple assignment: 1 for 00, 2 for 01, 3 for 10 and 4 for 11. Consequently, a new matrix is constructed that contains only numbers varying from 1 to 4. Since these four integers coincide with the previously mentioned remainders of division by 5, they are added to the transformed FMM cover image. On the recipient side, the reverse process is applied to extract the two binary images. In terms of PSNR, the cover image and the two extracted secret images all have acceptable values, which shows that the proposed method is very efficient for information hiding.
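The embedding and extraction described above are simple enough to state as a complete round trip in NumPy. Capping the host pixels at 250 so that adding a digit 1-4 stays within 8 bits is my assumption; the abstract does not say how the top of the pixel range is handled:

```python
import numpy as np

def fmm_embed(cover, secret_a, secret_b):
    """Five Modulus Method embedding: force every cover pixel to a
    multiple of five, then add a digit 1..4 encoding the concatenated
    bit pair (00->1, 01->2, 10->3, 11->4). The cap at 250 keeps the
    result inside 8 bits (an assumption; see lead-in)."""
    host = np.minimum((cover.astype(int) // 5) * 5, 250)
    digit = 2 * secret_a.astype(int) + secret_b + 1
    return (host + digit).astype(np.uint8)

def fmm_extract(stego):
    """Recover both binary images from the remainders modulo five."""
    digit = stego.astype(int) % 5 - 1          # back to the bit pair 0..3
    return (digit >> 1).astype(np.uint8), (digit & 1).astype(np.uint8)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
a = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
b = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
stego = fmm_embed(cover, a, b)
a_out, b_out = fmm_extract(stego)
```

Because the host is an exact multiple of five, the remainder of every stego pixel is precisely the embedded digit, so extraction is lossless.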

  3. Advances in the Simultaneous Multiple Surface optical design method for imaging and non-imaging applications

    OpenAIRE

    Wang, Lin

    2012-01-01

    Classical imaging optics has been developed over centuries in many areas, such as its paraxial imaging theory and practical design methods like multi-parametric optimization techniques. Although these imaging optical design methods can provide elegant solutions to many traditional optical problems, there are more and more new design problems, like solar concentrator, illumination system, ultra-compact camera, etc., that require maximum energy transfer efficiency, or ultra-compact optical stru...

  4. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper proposes an automatic image classification and outlier identification method based on superpixels and density-based cluster centers. Pixel location coordinates and gray values are used to compute density and distance measures, from which automatic classification and outlier extraction are achieved. Because the large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance criterion is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, runs faster than the density clustering algorithm, and effectively performs automated classification and outlier extraction.
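The density-and-distance computation at the heart of such density-peak clustering can be sketched as follows; the Gaussian density kernel, the cutoff value and the toy point set are assumptions for illustration (the paper works on superpixels, which are omitted here):

```python
import numpy as np

def density_and_distance(points, dc):
    """For each point: a soft local density rho (Gaussian-weighted
    neighbor count with cutoff scale dc) and delta, the distance to the
    nearest point of higher density. Cluster centers stand out with
    large rho * delta; outliers have large delta but near-zero rho."""
    D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0   # exclude the self term
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if higher.size else D[i].max()
    return rho, delta

rng = np.random.default_rng(2)
# Two dense clusters (indices 0-39 and 40-79) plus one isolated outlier (index 80).
pts = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(40, 2)),
                 rng.normal([5.0, 5.0], 0.3, size=(40, 2)),
                 [[10.0, 0.0]]])
rho, delta = density_and_distance(pts, dc=0.5)
centers = np.argsort(rho * delta)[-2:]    # the two strongest density peaks
```

The outlier's density is essentially zero, so despite its large delta it never ranks as a cluster center; a normalized version of this rho-delta criterion is what the abstract uses for automatic center selection.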

  5. Multi-port-driven birdcage coil for multiple-mouse MR imaging at 7 T.

    Science.gov (United States)

    Heo, Phil; Seo, Jeung-Hoon; Han, Sang-Doc; Ryu, Yeunchul; Byun, Jong-Deok; Kim, Kyoung-Nam; Lee, Jung Hee

    2016-11-01

    In ultra-high field (UHF) imaging environments, it has been demonstrated that multiple-mouse magnetic resonance imaging (MM-MRI) is dependent on key factors such as the radiofrequency (RF) coil hardware, imaging protocol, and experimental setup for obtaining high-resolution MR images. A key aspect is the RF coil, and a number of MM-MRI studies have investigated the application of single-channel RF transmit (Tx)/receive (Rx) coils or multi-channel phased array (PA) coil configurations under a single gradient coil set. However, despite applying a variety of RF coils, Tx (|B1(+) |)-field inhomogeneity still remains a major problem due to the relative shortening of the effective RF wavelength in the UHF environment. To address this issue, we propose a relatively smaller size of individual Tx-only coils in a multiple birdcage (MBC) coil for MM-MRI to image up to three mice. We use electromagnetic (EM) simulations in the finite-difference time-domain (FDTD) environment to obtain the |B1 |-field distribution. Our results clearly show that the single birdcage (SBC) high-pass filter (HPF) configuration, which is referred to as the SBCHPF , under the absence of an RF shield exhibits a high |B1 |-field intensity in comparison with other coil configurations such as the low-pass filter (LPF) and band-pass filter (BPF) configurations. In a 7-T MRI experiment, the signal-to-noise ratio (SNR) map of the SBCHPF configuration shows the highest coil performance compared to other coil configurations. The MBCHPF coil, which is comprised of a triple-SBCHPF configuration combined with additional decoupling techniques, is developed for simultaneous image acquisition of three mice. SCANNING 38:747-756, 2016. © 2016 Wiley Periodicals, Inc.

  6. A laser driven pulsed X-ray backscatter technique for enhanced penetrative imaging.

    Science.gov (United States)

    Deas, R M; Wilson, L A; Rusby, D; Alejo, A; Allott, R; Black, P P; Black, S E; Borghesi, M; Brenner, C M; Bryant, J; Clarke, R J; Collier, J C; Edwards, B; Foster, P; Greenhalgh, J; Hernandez-Gomez, C; Kar, S; Lockley, D; Moss, R M; Najmudin, Z; Pattathil, R; Symes, D; Whittle, M D; Wood, J C; McKenna, P; Neely, D

    2015-01-01

X-ray backscatter imaging can be used for a wide range of imaging applications, in particular for industrial inspection and portal security. Currently, the application of this imaging technique to the detection of landmines is limited because the surrounding sand or soil strongly attenuates the 10s to 100s of keV X-rays required for backscatter imaging. Here, we introduce a new approach in which a 140 MeV short-pulse electron beam, generated by laser wakefield acceleration, probes the sample, producing bremsstrahlung X-rays within the sample and enabling greater depths to be imaged. A variety of detector and scintillator configurations are examined, with the best time response seen from an absorptive-coated BaF2 scintillator with a bandpass filter to remove the slow scintillation emission components. An X-ray backscatter image of an array of items of different density and atomic number is demonstrated. The use of a compact laser wakefield accelerator to generate the electron source, combined with the rapid development of more compact, efficient and higher-repetition-rate high-power laser systems, will make this system feasible for applications in the field. Content includes material subject to Dstl (c) Crown copyright (2014). Licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.

  7. Towards semantic-driven high-content image analysis: an operational instantiation for mitosis detection in digital histopathology.

    Science.gov (United States)

    Racoceanu, D; Capron, F

    2015-06-01

This study concerns a novel symbolic cognitive vision framework that emerged from the Cognitive Microscopy (MICO(1)) initiative. MICO aims at supporting the evolution towards digital pathology by studying cognitive, clinical-compliant protocols involving routine virtual microscopy. We instantiate this paradigm in the case of mitotic counting as a component of breast cancer grading in histopathology. The key concept of our approach is the role of semantics as the driver of the whole-slide image analysis protocol. With all decisions taken in a semantic and formal world, MICO represents a knowledge-driven platform for digital histopathology; the core of this initiative is therefore knowledge representation and reasoning. Pathologists' knowledge and strategies are used to efficiently guide image analysis algorithms. In this sense, hard-coded knowledge and the semantic and usability gaps are to be reduced by a leading, active role of reasoning and of semantic approaches. Integrating ontologies and reasoning in confluence with modular imaging algorithms allows new clinical-compliant protocols for digital pathology to emerge. This represents a promising way to solve decision reproducibility and traceability issues in digital histopathology, while increasing the flexibility of the platform and pathologists' acceptance, as the pathologist always has the legal responsibility in the diagnostic process. The proposed protocols open the way to increasingly reliable cancer assessment (i.e. multiple slides per sample analysis), quantifiable and traceable second opinions for cancer grading, and modern capabilities for cancer research support in histopathology (i.e. content- and context-based indexing and retrieval). Last, but not least, the generic approach introduced here is applicable to a number of additional challenges related to molecular imaging and, in general, to high-content image exploration.

  8. Image quality improvement for underground radar by block migration method

    Science.gov (United States)

    Ho, Gwangsu; Kawanaka, Akira; Takagi, Mikio

    1993-11-01

Techniques have been developed for imaging optically opaque regions with electromagnetic-wave radar in order to estimate the locations of objects in those regions. One important application of these techniques is the detection of buried pipes and cables. With underground radar, image quality is often low because the soil is not uniform and electromagnetic waves are attenuated in soil. Hence, a method that improves the quality of the radar images is required. In this paper, we show that the quality of underground images can be improved significantly by means of the block migration method, in which the LOT (Lapped Orthogonal Transform) is applied. The LOT is a block transform whose basis functions overlap adjacent blocks, and it has a fast computation algorithm. In addition, we propose a method of estimating the dielectric constant of the soil using the processed images. The results of applying the block migration method to underground radar images are presented; they demonstrate a good capability for image-quality improvement, and the application of the LOT reduces blocking effects and processing time. The dielectric constant in each block can also be estimated accurately.

  9. Integration of Architectural and Cytologic Driven Image Algorithms for Prostate Adenocarcinoma Identification

    Directory of Open Access Journals (Sweden)

    Jason Hipp

    2012-01-01

Full Text Available Introduction: The advent of digital slides offers new opportunities within the practice of pathology, such as the use of image analysis techniques to facilitate computer-aided diagnosis (CAD) solutions. Use of CAD holds promise to enable new levels of decision support and to allow additional layers of quality assurance and consistency in rendered diagnoses. However, the development and testing of prostate cancer CAD solutions require a ground-truth map of the cancer to enable the generation of receiver operating characteristic (ROC) curves. This requires a pathologist to annotate, or paint, each of the malignant glands in prostate cancer with image editor software - a time-consuming and exhaustive process.
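Once such a ground-truth map exists, generating a ROC curve from CAD scores is a short computation; the per-object labels and scores below are synthetic stand-ins for annotated glands:

```python
import numpy as np

def roc_curve(scores, truth):
    """Sweep the decision threshold over the sorted CAD scores and
    record the (false positive rate, true positive rate) pairs."""
    order = np.argsort(-scores)
    t = truth[order].astype(bool)
    tpr = np.cumsum(t) / t.sum()
    fpr = np.cumsum(~t) / (~t).sum()
    return fpr, tpr

def auc(fpr, tpr):
    # prepend the (0, 0) corner, then apply the trapezoid rule
    f = np.concatenate([[0.0], fpr])
    t = np.concatenate([[0.0], tpr])
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2))

rng = np.random.default_rng(5)
truth = rng.integers(0, 2, size=500)            # ground-truth label per gland
scores = rng.normal(size=500) + 1.5 * truth     # a useful but imperfect detector
fpr, tpr = roc_curve(scores, truth)
area = auc(fpr, tpr)
```

A perfect detector would give an area of 1.0 and a random one about 0.5; the curve itself shows the sensitivity/specificity trade-off at every operating threshold.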

  10. Multivariate statistical data analysis methods for detecting baroclinic wave interactions in the thermally driven rotating annulus

    Science.gov (United States)

    von Larcher, Thomas; Harlander, Uwe; Alexandrov, Kiril; Wang, Yongtai

    2010-05-01

Experiments on baroclinic wave instabilities in a rotating cylindrical gap have long been performed, e.g., to reveal regular waves of different zonal wave numbers, to better understand the transition to the quasi-chaotic regime, and to uncover the dynamical processes underlying complex wave flows. We present the application of appropriate multivariate data analysis methods to time series data sets acquired with two non-intrusive measurement techniques of quite different nature: highly accurate Laser-Doppler velocimetry (LDV) measures the radial velocity component at equidistant azimuthal positions, while a highly sensitive thermographic camera measures the surface temperature field. The measurements are performed at particular parameter points where our former studies showed that complex wave patterns occur [1, 2]. The temperature data set has much more information content than the velocity data set owing to the nature of the measurement techniques. Both sets of time series are analyzed with multivariate statistical techniques: the LDV data sets by Multi-Channel Singular Spectrum Analysis (M-SSA), and the temperature data sets by Empirical Orthogonal Functions (EOF). Our goals are (a) to verify the results obtained from the analysis of the velocity data and (b) to compare the data analysis methods. To this end, the temperature data are processed so as to become comparable to the LDV data, i.e., the data set is reduced as if the temperature measurements had been performed only at equidistant azimuthal positions. This approach initially results in a great loss of information, but applying M-SSA to the reduced temperature data sets enables us to compare the methods. [1] Th. von Larcher and C. Egbers, Experiments on transitions of baroclinic waves in a differentially heated rotating annulus, Nonlinear Processes in Geophysics
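EOF analysis of a (time x space) record amounts to an SVD of the anomaly matrix; the synthetic traveling-wave field below is only a stand-in for the annulus surface-temperature data, with assumed wave number and sampling:

```python
import numpy as np

def eof_analysis(field, n_modes=2):
    """EOFs of a (time, space) data matrix: remove the time mean and
    take the SVD. Rows of `eofs` are spatial patterns, columns of `pcs`
    their time coefficients, `var_frac` the variance fraction per mode."""
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s ** 2 / np.sum(s ** 2)
    return Vt[:n_modes], U[:, :n_modes] * s[:n_modes], var_frac[:n_modes]

# Synthetic stand-in for the temperature record: a single azimuthal
# wave-number-3 traveling wave plus weak noise, sampled at 100 times
# and 60 azimuthal positions.
t = np.linspace(0.0, 10.0, 100)[:, None]
theta = np.linspace(0.0, 2.0 * np.pi, 60)[None, :]
rng = np.random.default_rng(3)
field = np.cos(3 * theta - 2 * np.pi * t) + 0.05 * rng.normal(size=(100, 60))
eofs, pcs, var_frac = eof_analysis(field, n_modes=2)
```

A traveling wave decomposes into a quadrature pair of standing EOF modes, so the two leading modes together should capture nearly all of the variance; the phase relation between their principal components reveals the propagation.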

11. IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVO BLOOD VESSELS USING IMAGE FUSION METHODS

    DEFF Research Database (Denmark)

    Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2009-01-01

We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual-CCD-chip camera system capable of capturing images in the visual and near-infrared spectra, we evaluate three fusion methods in terms of their ...

  12. CMOS low data rate imaging method based on compressed sensing

    Science.gov (United States)

    Xiao, Long-long; Liu, Kun; Han, Da-peng

    2012-07-01

Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low-data-rate imaging approach that implements compressed sensing (CS). Within the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical current outputs from the pixels in a column are combined, with weights specified by voltage, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM): each element of the VMM takes the total value of a column as input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm. This percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase, before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.
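On the recovery side, a greedy solver such as orthogonal matching pursuit (one possible choice; the abstract does not name its optimization algorithm) can reconstruct a sparse signal from a fraction of the measurements. The random Gaussian sensing matrix below stands in for the projections realized by the analog VMM:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column of the
    sensing matrix A most correlated with the residual, then re-fit the
    measurements y on the selected columns by least squares."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(4)
n, m, k = 128, 64, 4                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix (the "VMM")
y = A @ x_true                            # 64 measurements of a 128-sample signal
x_hat = omp(A, y, k)
```

With m = 64 measurements of a 4-sparse length-128 signal, the ratio m/n plays the role of the programmable compression ratio described in the abstract.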

  13. Method, apparatus and software for analyzing perfusion images

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan; Breeuwer, Marcel

    2007-01-01

The invention relates to a method for analyzing perfusion images, in particular MR perfusion images, of a human or animal organ including the steps of: (a) defining at least one contour of the organ, and (b) establishing at least one perfusion parameter of a region of interest of said organ within a

  14. Method, apparatus and software for analyzing perfusion images

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan; Breeuwer, Marcel

    2004-01-01

The invention relates to a method for analyzing perfusion images, in particular MR perfusion images, of a human or animal organ, including the steps of: (a) defining at least one contour of the organ, and (b) establishing at least one perfusion parameter of a region of interest of said organ within a

  15. Method and apparatus for imaging and documenting fingerprints

    Science.gov (United States)

    Fernandez, Salvador M.

    2002-01-01

    The invention relates to a method and apparatus for imaging and documenting fingerprints. A fluorescent dye brought in intimate proximity with the lipid residues of a latent fingerprint is caused to fluoresce on exposure to light energy. The resulting fluorescing image may be recorded photographically.

  16. A Novel Image Fusion Method Based on FRFT-NSCT

    Directory of Open Access Journals (Sweden)

    Peiguang Wang

    2013-01-01

The fused image is obtained by performing the inverse NSCT and inverse FRFT on the combined coefficients. Three image modes and three fusion rules are demonstrated in the test of the proposed algorithm. The simulation results show that the proposed fusion approach is better than the methods based on NSCT with the same parameters.

  17. Click reaction: An applicable radiolabeling method for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ji Young; Lee, Byung Chul [Dept. of Nuclear Medicine, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Sungnam (Korea, Republic of)

    2015-12-15

    In recent years, the click reaction has found rapidly growing applications in the field of radiochemistry, ranging from a practical labeling method to molecular imaging of biomacromolecules. This present review details the development of highly reliable, powerful and selective click chemistry reactions for the rapid synthesis of new radiotracers for molecular imaging.

  18. Click Reaction: An Applicable Radiolabeling Method for Molecular Imaging.

    Science.gov (United States)

    Choi, Ji Young; Lee, Byung Chul

    2015-12-01

    In recent years, the click reaction has found rapidly growing applications in the field of radiochemistry, ranging from a practical labeling method to molecular imaging of biomacromolecules. This present review details the development of highly reliable, powerful and selective click chemistry reactions for the rapid synthesis of new radiotracers for molecular imaging.

  19. A Novel Steganography Method for Hiding BW Images into Gray Bitmap Images via k-Modulus Method

    Directory of Open Access Journals (Sweden)

    Firas A. Jassim

    2013-09-01

Full Text Available This paper presents a pragmatic steganographic implementation that hides a black-and-white image, known as the stego image, inside a gray bitmap image, known as the cover image. First, the proposed technique uses the k-Modulus Method (K-MM) to convert all pixels within the cover image into multiples of a positive integer k. Since a black-and-white image can be represented in binary, i.e. with 0 or 1, the suitable value for k in this article is two. Each pixel inside the cover image is therefore divisible by two, producing a remainder of either 0 or 1, into which the binary representation of the stego image can be hidden. The visual differences between the cover image before and after adding the stego image are insignificant. The experimental results show that the PSNR values for the cover image are very high, with a very small mean square error.
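The embedding rule described above (force each cover pixel to a multiple of k = 2, then store one payload bit in the remainder) can be sketched as follows; the array sizes and the exact clipping behavior are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 8x8 grayscale cover image and binary (black-and-white) payload.
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)
secret = rng.integers(0, 2, (8, 8), dtype=np.uint8)
k = 2  # modulus: with k = 2, every cover pixel is first forced to an even value

def embed(cover, secret, k=2):
    base = (cover.astype(int) // k) * k      # K-MM step: multiples of k
    base = np.minimum(base, 256 - k)         # leave room for the stored bit
    return (base + secret).astype(np.uint8)  # remainder mod k now carries the bit

def extract(stego, k=2):
    return (stego % k).astype(np.uint8)

stego = embed(cover, secret)
assert np.array_equal(extract(stego), secret)   # payload recovered exactly
print("max pixel change:", int(np.abs(cover.astype(int) - stego.astype(int)).max()))
```

Since each pixel moves by at most one gray level, the visual difference and the mean square error stay small, consistent with the high PSNR the authors report.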

  20. A Precise-Mask-Based Method for Enhanced Image Inpainting

    Directory of Open Access Journals (Sweden)

    Wanxu Zhang

    2016-01-01

Full Text Available The mask of the damaged region is the pretreatment step of image inpainting, and it plays a key role in the ultimate effect. However, state-of-the-art methods have focused on the inpainting model, while the mask of the damaged region is usually selected manually or by a conventional threshold-based method. Since the manual method is time-consuming and the threshold-based method does not achieve the same precision for different images, we herein report a new method for automatically constructing a precise mask by the joint filtering of guided filtering and L0 smoothing. It accurately locates the boundary of the damaged region in order to segment it effectively, and thus greatly improves the ultimate effect of image inpainting. The experimental results show that the proposed method is superior to state-of-the-art methods in the step of constructing the inpainting mask, especially for damaged regions with inconspicuous boundaries.

  1. Multiphase Image Segmentation Using the Deformable Simplicial Complex Method

    DEFF Research Database (Denmark)

    Dahl, Vedrana Andersen; Christiansen, Asger Nyman; Bærentzen, Jakob Andreas

    2014-01-01

The deformable simplicial complex method is a generic method for tracking deformable interfaces. It provides explicit interface representation, topological adaptivity, and multiphase support. As such, the deformable simplicial complex method can readily be used for representing active contours in image segmentation based on deformable models. We show the benefits of using the deformable simplicial complex method for image segmentation by segmenting an image into a known number of segments characterized by distinct mean pixel intensities.

  2. Improvements in Sample Selection Methods for Image Classification

    Directory of Open Access Journals (Sweden)

    Thales Sehn Körting

    2014-08-01

    Full Text Available Traditional image classification algorithms are mainly divided into unsupervised and supervised paradigms. In the first paradigm, algorithms are designed to automatically estimate the classes’ distributions in the feature space. The second paradigm depends on the knowledge of a domain expert to identify representative examples from the image to be used for estimating the classification model. Recent improvements in human-computer interaction (HCI enable the construction of more intuitive graphic user interfaces (GUIs to help users obtain desired results. In remote sensing image classification, GUIs still need advancements. In this work, we describe our efforts to develop an improved GUI for selecting the representative samples needed to estimate the classification model. The idea is to identify changes in the common strategies for sample selection to create a user-driven sample selection, which focuses on different views of each sample, and to help domain experts identify explicit classification rules, which is a well-established technique in geographic object-based image analysis (GEOBIA. We also propose the use of the well-known nearest neighbor algorithm to identify similar samples and accelerate the classification.
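The nearest-neighbor step mentioned at the end can be illustrated with a minimal sketch; the two-dimensional features and the class names are hypothetical stand-ins for the spectral features a domain expert would select:

```python
import numpy as np

# Hypothetical expert-selected samples: 2-D feature vectors with class labels
# (feature space and class names are stand-ins for real spectral features).
samples = np.array([[0.10, 0.20], [0.15, 0.25],   # class 0, e.g. "water"
                    [0.80, 0.70], [0.85, 0.75]])  # class 1, e.g. "vegetation"
labels = np.array([0, 0, 1, 1])

def nearest_neighbor(features, samples, labels):
    """Assign each feature vector the label of its closest selected sample."""
    d = np.linalg.norm(features[:, None, :] - samples[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

pixels = np.array([[0.12, 0.22], [0.82, 0.72]])
print(nearest_neighbor(pixels, samples, labels))
```

The same distance computation can rank the expert's existing samples by similarity, which is how nearest neighbors can help identify similar samples and speed up classification.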

  3. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from scanning electron microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples...... foundation of the thesis falls in the areas of: 1) mathematical morphology; 2) distance transforms and applications; and 3) fractal geometry. Image analysis opens, in general, the possibility of quantitative and statistically well-founded measurement of digital microscope images. Herein also lie the conditions...

  4. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    OpenAIRE

    Mingdong Li; Siyu Lai; Juan Wang

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on low-frequency sub-images to get the low-pass coefficients. The low frequency fused image can ...
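As a rough illustration of the NMF building block, the classical Lee-Seung multiplicative updates below factorize a non-negative matrix while never increasing the reconstruction error; the toy matrix and rank are assumptions, and the accelerated variant used in the paper differs in its update schedule:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy non-negative matrix standing in for a low-frequency sub-image.
V = rng.random((20, 15))
r = 4                                   # factorization rank (an assumption)
W = rng.random((20, r)) + 0.1
H = rng.random((r, 15)) + 0.1
eps = 1e-9                              # guards against division by zero

errors = []
for _ in range(200):
    # Lee-Seung multiplicative updates: keep W, H non-negative and never
    # increase the Frobenius reconstruction error ||V - W H||.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    errors.append(np.linalg.norm(V - W @ H))

print("error: %.4f -> %.4f" % (errors[0], errors[-1]))
```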

  5. Beam imaging sensor and method for using same

    Science.gov (United States)

    McAninch, Michael D.; Root, Jeffrey J.

    2017-01-03

The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.

  6. Segmentation of Bacteria Image Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    WANG Hua; CHEN Chun-xiao; HU Yong-hong; YANG Wen-ge

    2008-01-01

In biological fermentation engineering, accurate statistics of the quantity of bacteria is one of the most important subjects. In this paper, the quantity of bacteria, which was traditionally observed manually, can be detected automatically. An image acquisition and processing system is designed to accomplish image preprocessing, image segmentation and statistics of the quantity of bacteria. Segmentation of bacteria images is successfully realized by means of a region-based level set method, and the quantity of bacteria is then computed precisely, which plays an important role in optimizing the growth conditions of bacteria.

  7. Single Molecule Imaging in Living Cell with Optical Method

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

The significance, difficulties, current international state of development, and our completed work on single-molecule imaging in living cells with optical methods are described in turn. Additionally, we offer some suggestions for further development of the technology.

  8. Discrete gradient methods for solving variational image regularisation models

    Science.gov (United States)

    Grimm, V.; McLachlan, Robert I.; McLaren, David I.; Quispel, G. R. W.; Schönlieb, C.-B.

    2017-07-01

Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is, where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by presenting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting.
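A minimal sketch of the idea, assuming a 1D quadratic denoising energy and the midpoint discrete gradient (which satisfies the discrete-gradient identity exactly for quadratic energies): the resulting implicit step dissipates the energy for any step size, which is precisely the monotonic-decrease property discussed above.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D quadratic denoising energy E(u) = 0.5*||u - f||^2 + 0.5*lam*||D u||^2,
# written as E(u) = 0.5*u^T Q u - f^T u + const with Q = I + lam*D^T D.
n, lam, tau = 50, 5.0, 1.0
f = np.sin(np.linspace(0, 2 * np.pi, n)) + 0.3 * rng.normal(size=n)
D = np.diff(np.eye(n), axis=0)          # forward-difference operator
Q = np.eye(n) + lam * D.T @ D

def energy(u):
    return 0.5 * u @ Q @ u - f @ u

# Midpoint discrete gradient step for the gradient flow u' = -grad E(u):
# (I + tau*Q/2) u_next = (I - tau*Q/2) u + tau*f, dissipative for ANY tau > 0.
A = np.eye(n) + 0.5 * tau * Q
B = np.eye(n) - 0.5 * tau * Q
u = np.zeros(n)
energies = [energy(u)]
for _ in range(30):
    u = np.linalg.solve(A, B @ u + tau * f)
    energies.append(energy(u))

# Monotonic decrease of the energy along iterations.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
```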

  9. Fluorine-labeled Dasatinib Nanoformulations as Targeted Molecular Imaging Probes in a PDGFB-driven Murine Glioblastoma Model

    Directory of Open Access Journals (Sweden)

    Miriam Benezra

    2012-12-01

Full Text Available Dasatinib, a new-generation Src and platelet-derived growth factor receptor (PDGFR) inhibitor, is currently under evaluation in high-grade glioma clinical trials. To achieve optimum physicochemical and/or biologic properties, alternative drug delivery vehicles may be needed. We used a novel fluorinated dasatinib derivative (F-SKI249380), in combination with nanocarrier vehicles and metabolic imaging tools (microPET), to evaluate drug delivery and uptake in a platelet-derived growth factor B (PDGFB)-driven genetically engineered mouse model (GEMM) of high-grade glioma. We assessed the survival benefit of dasatinib on the basis of measured tumor volumes. Using brain tumor cells derived from PDGFB-driven gliomas, dose-dependent uptake and time-dependent inhibitory effects of F-SKI249380 on biologic activity were investigated and compared with the parent drug. PDGFR receptor status and tumor-specific targeting were non-invasively evaluated in vivo using 18F-SKI249380 and 18F-SKI249380-containing micellar and liposomal nanoformulations. A statistically significant survival benefit was found using dasatinib (95 mg/kg) versus saline vehicle (P < .001) in tumor-volume-matched GEMM pairs. Competitive binding and treatment assays revealed comparable biologic properties for F-SKI249380 and the parent drug. In vivo, significantly higher tumor uptake was observed for 18F-SKI249380-containing micelle formulations [4.9 percent of the injected dose per gram of tissue (%ID/g); P = .002] compared to control values (1.6 %ID/g). Saturation studies using excess cold dasatinib showed marked reduction of tumor uptake values to levels in normal brain (1.5 %ID/g), consistent with in vivo binding specificity. Using 18F-SKI249380-containing micelles as radiotracers to estimate therapeutic dosing requirements, we calculated intratumoral drug concentrations (24–60 nM) that were comparable to in vitro 50% inhibitory concentration values. 18F-SKI249380 is a PDGFR-selective tracer, which

  10. Landweber Iterative Methods for Angle-limited Image Reconstruction

    Institute of Scientific and Technical Information of China (English)

    Gang-rong Qu; Ming Jiang

    2009-01-01

We introduce a general iterative scheme for angle-limited image reconstruction based on Landweber's method. We derive a representation formula for this scheme and consequently establish its convergence conditions. Our results suggest certain relaxation strategies for accelerated convergence of angle-limited image reconstruction in the L2-norm compared with alternative projection methods. The convolution-backprojection algorithm is given for this iterative process.
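The basic (unaccelerated) Landweber iteration that such schemes build on can be sketched on a toy consistent system; the matrix and the choice of relaxation parameter below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy consistent linear model b = A x, with A standing in for the projection
# operator of a (here not angle-limited) reconstruction problem.
m, n = 60, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true

# Landweber iteration: x_{k+1} = x_k + lam * A^T (b - A x_k),
# convergent for a relaxation parameter 0 < lam < 2 / ||A||_2^2.
lam = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
residuals = []
for _ in range(1000):
    x = x + lam * A.T @ (b - A @ x)
    residuals.append(np.linalg.norm(A @ x - b))

print("residual: %.3e -> %.3e" % (residuals[0], residuals[-1]))
```

Choosing `lam` closer to the upper stability bound, or varying it per iteration, is the kind of relaxation strategy the abstract refers to for accelerating convergence.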

  11. Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm

    OpenAIRE

    Dong Qin

    2014-01-01

Considering the high sparsity of image data and the problem of determining the number of clusters, we put forward a color image segmentation algorithm that combines semi-supervised machine learning technology with spectral graph theory. Through the study of related theories and methods of spectral clustering algorithms, we introduce the concept of information entropy to design a method that automatically optimizes the scale parameter value. So it avoids the unstab...

  12. A Study of Long-Term fMRI Reproducibility Using Data-Driven Analysis Methods.

    Science.gov (United States)

    Song, Xiaomu; Panych, Lawrence P; Chou, Ying-Hui; Chen, Nan-Kuei

    2014-12-01

The reproducibility of functional magnetic resonance imaging (fMRI) is important for fMRI-based neuroscience research and clinical applications. Previous studies show considerable variation in the amplitude and spatial extent of fMRI activation across repeated sessions on individual subjects, even with identical experimental paradigms and imaging conditions. Most existing fMRI reproducibility studies have been limited by time duration and data analysis techniques. In particular, the assessment of reproducibility is complicated by the fact that fMRI results may depend on the data analysis techniques used in the reproducibility study. In this work, long-term fMRI reproducibility was investigated with a focus on data analysis methods. Two spatial smoothing techniques, a wavelet-domain Bayesian method and Gaussian smoothing, were evaluated in terms of their effects on long-term reproducibility. A multivariate support vector machine (SVM)-based method was used to identify active voxels, and compared to a widely used general linear model (GLM)-based method at the group level. The reproducibility study was performed using multisession fMRI data acquired from eight healthy adults over a period of 1.5 years. Three regions of interest (ROIs) related to a motor task were defined, based upon which the long-term reproducibility was examined. Experimental results indicate that different spatial smoothing techniques may lead to different reproducibility measures, and that wavelet-based spatial smoothing combined with SVM-based activation detection is a good choice for reproducibility studies. On the basis of the ROIs and multiple numerical criteria, we observed moderate to substantial within-subject long-term reproducibility. Reasonable long-term reproducibility was also observed in the inter-subject study. It was found that short-term reproducibility is usually higher than long-term reproducibility. Furthermore, the results indicate that brain

  13. Liver 4DMRI: A retrospective image-based sorting method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133 (Italy); Summers, Paul [Division of Radiology, Istituto Europeo di Oncologia, Milano 20133 (Italy); Bellomi, Massimo [Division of Radiology, Istituto Europeo di Oncologia, Milano 20133, Italy and Department of Health Sciences, Università di Milano, Milano 20133 (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia 27100 (Italy)

    2015-08-15

Purpose: Four-dimensional magnetic resonance imaging (4DMRI) is an emerging technique in radiotherapy treatment planning for organ motion quantification. In this paper, the authors present a novel 4DMRI retrospective image-based sorting method, which produces fewer motion artifacts than a standard one-dimensional external respiratory surrogate. Methods: Serial interleaved 2D multislice MRI data were acquired from 24 liver cases (6 volunteers + 18 patients) to test the proposed 4DMRI sorting. Image similarity based on mutual information was applied to automatically identify a stable reference phase and to sort the image sequence retrospectively, without the use of additional image or surrogate data to describe breathing motion. Results: The image-based 4DMRI provided a smoother liver profile than that obtained from standard resorting based on an external surrogate. Reduced motion artifacts were observed in image-based 4DMRI datasets, with a fitting error of the liver profile of 1.2 ± 0.9 mm (median ± interquartile range) vs 2.1 ± 1.7 mm for the standard method. Conclusions: The authors present a novel methodology to derive a patient-specific 4DMRI model describing organ motion due to breathing, with improved image quality in the 4D reconstruction.

  14. A MNCIE method for registration of ultrasound images

    Institute of Scientific and Technical Information of China (English)

    JIN Jing; WANG Qiang; SHEN Yi

    2007-01-01

A new approach to the problem of registration of ultrasound images is presented, using the concept of Nonlinear Correlation Information Entropy (NCIE) as the matching criterion. The proposed method applies NCIE to measure the degree of correlation between the image intensities of corresponding voxels in the floating and reference images. Registration is achieved by adjusting the relative position until the NCIE between the images is maximized. Unlike mutual information (MI), however, NCIE varies in the closed interval [0, 1] and varies sharply around the extremum, which makes it possible to use thresholds of NCIE to speed up the search for the registration transformation. Using this feature of NCIE, we combine it with the downhill simplex search algorithm to register the ultrasound images. Simulations are conducted to verify the effectiveness and speed of the proposed registration method, in which the ultrasound floating images are aligned to the reference images with the required registration accuracy. Moreover, the NCIE-based method can overcome the local minima problem by setting thresholds and can accommodate differences in contrast between the floating and reference images.

  15. Cardiac MR image segmentation using CHNN and level set method

    Institute of Scientific and Technical Information of China (English)

    王洪元; 周则明; 王平安; 夏德深

    2004-01-01

Although cardiac magnetic resonance imaging (MRI) can provide high-spatial-resolution images, gray-level inhomogeneity, weak boundaries and artifacts are often found in MR images, so segmentation of MR images using gradient-based methods is poor in quality and efficiency. An algorithm based on the competitive Hopfield neural network (CHNN) and curve propagation is proposed for cardiac MR image segmentation in this paper. The algorithm is composed of two phases. In the first phase, a CHNN is used to classify the image objects, homogenize gray levels and recognize weak boundaries in objects. In the second phase, based on the classification results, the level set velocity function is created and the object boundaries are extracted with the curve propagation algorithm of the narrow-band-based level set. The test results are promising and encouraging.

  16. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

Full Text Available The purpose of this work is the research and modification of reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of the formal assessments that corresponds more closely to subjective expert estimates (MOS). In considering the formal reference objective methods for image quality assessment, we used the results of other authors, who offer comparative analyses of the most effective algorithms. Based on these investigations, we chose the two most successful algorithms (PQS and MSSSIM), for which further analysis was carried out in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms which are of great importance in practical implementation but are insufficiently covered in the publications of other authors. In the implemented modification of the PQS algorithm, the Kirsch edge detector was replaced by the Canny edge detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective image quality assessment is applicable only to monochrome images). The images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. In the specialized literature on formal image quality evaluation methods, this type of modification has not been mentioned. The method described in the publication can be applied to various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article, and this will be used in solving problems of identification and automatic control.

  17. History Document Image Background Noise and Removal Methods

    Directory of Open Access Journals (Sweden)

    Ganchimeg.G

    2015-12-01

    Full Text Available It is common for archive libraries to provide public access to historical and ancient document image collections. It is common for such document images to require specialized processing in order to remove background noise and become more legible. Document images may be contaminated with noise during transmission, scanning or conversion to digital form. We can categorize noises by identifying their features and can search for similar patterns in a document image to choose appropriate methods for their removal. In this paper, we propose a hybrid binarization approach for improving the quality of old documents using a combination of global and local thresholding. This article also reviews noises that might appear in scanned document images and discusses some noise removal methods.
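A hybrid global-plus-local binarization in the spirit described above can be sketched as follows; the synthetic "document", window size, offset constant, and the AND-combination rule are illustrative assumptions rather than the authors' exact method:

```python
import numpy as np

rng = np.random.default_rng(6)

def otsu_threshold(img):
    """Global Otsu threshold: maximize the between-class variance."""
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def local_mean_threshold(img, w=15, c=5):
    """Local threshold: mean of a w x w window minus a small offset c."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (w, w))
    return win.mean(axis=(2, 3)) - c

# Synthetic "document": dark text strokes on an unevenly lit background.
img = (180 + 40 * np.linspace(0, 1, 64))[None, :] * np.ones((64, 64))
img[20:24, 8:56] = 30                        # a text line
img = np.clip(img + rng.normal(0, 2, img.shape), 0, 255).astype(np.uint8)

# Hybrid rule: flag a pixel as ink only if BOTH thresholds agree.
binary = (img < otsu_threshold(img)) & (img < local_mean_threshold(img))
print("ink pixels found:", int(binary.sum()))
```

Combining the two decisions suppresses both the false ink produced by a purely local threshold on uneven illumination and the misses of a purely global one.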

  18. Generalized Row-Action Methods for Tomographic Imaging

    DEFF Research Database (Denmark)

    Andersen, Martin Skovgaard; Hansen, Per Christian

    2014-01-01

Row-action methods play an important role in tomographic image reconstruction. Many such methods can be viewed as incremental gradient methods for minimizing a sum of a large number of convex functions, and despite their relatively poor global rate of convergence, these methods often exhibit fast initial convergence, which is desirable in applications where a low-accuracy solution is acceptable. In this paper, we propose relaxed variants of a class of incremental proximal gradient methods, and these variants generalize many existing row-action methods for tomographic imaging. Moreover, they allow us to derive new incremental algorithms for tomographic imaging that incorporate different types of prior information via regularization. We demonstrate the efficacy of the approach with some numerical examples.
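Kaczmarz's method is the prototypical row-action method that such incremental schemes generalize; below is a minimal sketch on a toy consistent system, with the relaxation parameter exposed (the toy problem sizes and omega = 1 are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Consistent toy system A x = b: one equation per measurement "ray".
m, n = 80, 25
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true

# Kaczmarz sweeps: project the iterate onto one hyperplane a_i . x = b_i at a
# time; the relaxation parameter omega in (0, 2) generalizes plain projection.
omega = 1.0
x = np.zeros(n)
for _ in range(50):                       # 50 full sweeps over the rows
    for i in range(m):
        a = A[i]
        x += omega * (b[i] - a @ x) / (a @ a) * a

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error:", rel_err)
```

Each update touches a single row of the system, which is what makes row-action methods attractive when the full projection matrix is too large to apply at once.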

  19. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

Recently, strain and strain rate imaging have proved their superiority over classical motion estimation methods in myocardial evaluation, as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. In addition, the cardiac central point is obtained using a combination of the center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique to handle different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences, and shows that this technique is more accurate and faster than previous methods.

  20. A Method of Image Symmetry Detection Based on Phase Information

    Institute of Scientific and Technical Information of China (English)

    WU Jun; YANG Zhaoxuan; FENG Dengchao

    2005-01-01

Traditional methods for detecting symmetry in images suffer greatly from image contrast and noise, and they all require some preprocessing. This paper presents a new method of image symmetry detection. The method detects symmetry from phase information using log-Gabor wavelets, because phase information is stable and significant, and symmetric points produce local phase patterns that are easy to recognize and confirm. The phase method does not require any preprocessing, and its result is accurate and invariant to contrast, rotation and illumination conditions. The method can detect mirror symmetry, rotational symmetry and curve symmetry at the same time. Experimental results show that, compared with a pivotal-element algorithm based on intensity information, the phase method is more accurate and robust.

  1. Fast Filtered Imaging of the C-2U Advanced Beam-Driven Field-Reversed Configuration

    Science.gov (United States)

    Granstedt, E. M.; Petrov, P.; Knapp, K.; Cordero, M.; Patel, V.; the TAE Team

    2015-11-01

    The goal of the C-2U program is to sustain a Field-Reversed Configuration (FRC) for 5+ ms using neutral beam injection, end-biasing, and various particle fueling techniques. Three high-speed, filtered cameras are used to observe visible light emission from deuterium pellet ablation and compact-toroid injection which are used for auxiliary particle fueling. The instruments are also used to view the dynamics of the macroscopic plasma evolution, identify regions of strong plasma-material interactions, and visualize non-axisymmetric perturbations. To achieve the necessary viewing geometry, imaging lenses are mounted in re-entrant viewports, two of which are mounted on bellows for retraction during gettering and removal if cleaning is necessary. Images are coupled from the imaging lens to the camera via custom lens-based optical periscopes. Each instrument contains a remote-controlled filter wheel which is set between shots to select a particular emission line from neutral D or various charge states of He, C, O, or Ti. Measurements of absolute emissivity and estimates of neutral and impurity density will be presented.

  2. Method and apparatus to image biological interactions in plants

    Science.gov (United States)

    Weisenberger, Andrew; Bonito, Gregory M.; Reid, Chantal D.; Smith, Mark Frederick

    2015-12-22

    A method to dynamically image the actual translocation of molecular compounds of interest in a plant root, root system, and rhizosphere without disturbing the root or the soil. The technique makes use of radioactive isotopes as tracers to label molecules of interest and to image their distribution in the plant and/or soil. The method allows for the study and imaging of various biological and biochemical interactions in the rhizosphere of a plant, including, but not limited to, mycorrhizal associations in such regions.

  3. The pre-image problem in kernel methods.

    Science.gov (United States)

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as when using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.

  4. Living Brain Optical Imaging: Technology, Methods and Applications

    Science.gov (United States)

    Tsytsarev, Vassiliy; Bernardelli, Chad; Maslov, Konstantin I.

    2017-01-01

Within the last few decades, optical imaging methods have yielded revolutionary results when applied to all parts of the central nervous system. The purpose of this review is to analyze the research possibilities and limitations of several novel imaging techniques and to show some of the most interesting achievements obtained by these methods. Here we cover intrinsic optical imaging, voltage-sensitive dyes, photoacoustics, optical coherence tomography, near-infrared spectroscopy and some other techniques. All of them are mainly applicable to experimental neuroscience, but some are also suitable for clinical studies.

  5. Soft-tissues Image Processing: Comparison of Traditional Segmentation Methods with 2D active Contour Methods

    Science.gov (United States)

    Mikulka, J.; Gescheidtova, E.; Bartusek, K.

    2012-01-01

The paper deals with modern methods of image processing, especially image segmentation, classification and evaluation of parameters. It focuses primarily on processing medical images of soft tissues obtained by magnetic resonance (MR) tomography. Segmented images make it easy to describe the edges of the sought objects. The edges found can be useful for further processing of the monitored object, such as calculating the perimeter, evaluating surface area and volume, or even reconstructing the three-dimensional shape. The proposed solutions can be used for the classification of healthy/unhealthy tissues in MR or other imaging. Application examples of the proposed segmentation methods are shown. Research in the area of image segmentation focuses on methods based on solving partial differential equations, a modern approach to image processing often called the active contour method. It is of great advantage in the segmentation of real images degraded by noise, with fuzzy edges and transitions between objects. In the paper, results of the segmentation of medical images by the active contour method are compared with results of segmentation by other existing methods. Experimental applications which demonstrate the very good properties of the active contour method are given.

  6. Digital image quality measurements by objective and subjective methods from series of parametrically degraded images

    Science.gov (United States)

    Tachó, Aura; Mitjà, Carles; Martínez, Bea; Escofet, Jaume; Ralló, Miquel

    2013-11-01

Many digital image applications, like the digitization of cultural heritage for preservation purposes, operate with compressed files in one or more image-observing steps. For this kind of application, JPEG compression is one of the most widely used. Compression level, final file size and quality loss are parameters that must be managed optimally. Although this loss can be monitored by means of objective image quality measurements, the real challenge is to know how it can be related to the image quality perceived by observers. A pictorial image has been degraded by two different procedures: first, applying different levels of low-pass filtering by convolving the image with progressively broader Gaussian kernels; second, saving the original file at a series of JPEG compression levels. In both cases, the objective image quality measurement is done by analysis of the image power spectrum. In order to obtain a measure of the perceived image quality, both series of degraded images are displayed on a computer screen organized in random pairs, and the observers are asked to choose the better image of each pair. Finally, a ranking is established by applying the Thurstone scaling method. Results obtained by the two measurements are compared with each other and with another objective measurement method, the slanted-edge test.
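The blur-series half of such a protocol, together with a simple spectrum-based objective measure, can be illustrated as follows; the synthetic image, the frequency-domain Gaussian blur, and the particular high-frequency energy fraction are assumptions for the sketch, not the paper's exact metric.

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Gaussian low-pass filter applied in the frequency domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def highfreq_energy_fraction(img, cutoff=0.1):
    """Share of spectral power above a radial frequency cutoff (cycles/pixel)."""
    P = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    return P[r > cutoff].sum() / P.sum()

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# progressively broader Gaussian kernels -> monotonically less high-frequency energy
scores = [highfreq_energy_fraction(gaussian_blur_fft(img, s)) for s in (0.5, 1.0, 2.0, 4.0)]
```

Plotting such a score against the degradation level, and against the Thurstone-scaled subjective ranking, is what allows the two measurements to be compared.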

  7. Gradient-based image recovery methods from incomplete Fourier measurements.

    Science.gov (United States)

    Patel, Vishal M; Maleh, Ray; Gilbert, Anna C; Chellappa, Rama

    2012-01-01

A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem: one based on least-squares optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.
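The second stage, estimating an image from its gradients, can be sketched with an FFT-based Poisson solver under periodic boundary assumptions; the paper's generalized Poisson solver and boundary handling may differ.

```python
import numpy as np

def poisson_integrate(gx, gy):
    """Recover an image (up to its mean) from periodic forward-difference
    gradients gx, gy by solving the discrete Poisson equation with FFTs."""
    H, W = gx.shape
    # divergence of the gradient field = discrete (periodic) Laplacian of the image
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    fx = np.fft.fftfreq(W)[None, :]
    fy = np.fft.fftfreq(H)[:, None]
    # eigenvalues of the periodic 5-point Laplacian
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0            # the DC component is unconstrained by gradients
    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0                # pin the free constant: zero-mean reconstruction
    return np.real(np.fft.ifft2(U))
```

Given exact gradients, this recovers the image exactly up to the additive constant that gradients cannot determine; in GradientRec the inputs would instead be the CS-recovered horizontal and vertical differences.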

  8. Harmonic spatial coherence imaging: an ultrasonic imaging method based on backscatter coherence.

    Science.gov (United States)

    Dahl, Jeremy; Jakovljevic, Marko; Pinton, Gianmarco F; Trahey, Gregg E

    2012-04-01

We introduce a harmonic version of the short-lag spatial coherence (SLSC) imaging technique, called harmonic spatial coherence imaging (HSCI). The method is based on the coherence of the second-harmonic backscatter. Because the same signals that are used to construct harmonic B-mode images are also used to construct HSCI images, the benefits obtained with harmonic imaging are also obtained with HSCI. Harmonic imaging has been the primary tool for suppressing clutter in diagnostic ultrasound imaging; however, second-harmonic echoes are not necessarily immune to the effects of clutter. HSCI and SLSC imaging are less sensitive to clutter because clutter has low spatial coherence. HSCI shows favorable imaging characteristics such as improved contrast-to-noise ratio (CNR), improved speckle SNR, and better delineation of borders and other structures compared with fundamental and harmonic B-mode imaging. CNRs of up to 1.9 were obtained from in vivo imaging of human cardiac tissue with HSCI, compared with 0.6, 0.9, and 1.5 in fundamental B-mode, harmonic B-mode, and SLSC imaging, respectively. In vivo experiments in human liver tissue demonstrated SNRs of up to 3.4 for HSCI compared with 1.9 for harmonic B-mode. Nonlinear simulations of a heart chamber model were consistent with the in vivo experiments.
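The underlying SLSC statistic (computed on per-channel signals; HSCI applies the same statistic to second-harmonic-filtered data) can be sketched roughly as follows, using a plain zero-lag normalized correlation; the axial windowing of the published method is simplified away.

```python
import numpy as np

def slsc(channels, max_lag):
    """Short-lag spatial coherence: the normalized correlation between
    channel pairs separated by lag m, averaged over pairs and summed
    over the short lags m = 1..max_lag. channels: (n_channels, n_samples)."""
    n = channels.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        cc = []
        for i in range(n - m):
            a, b = channels[i], channels[i + m]
            cc.append((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum() + 1e-20))
        total += np.mean(cc)
    return total
```

Perfectly coherent channels give a coherence of 1 at every lag (so the sum equals `max_lag`), while incoherent clutter averages toward zero, which is why low-coherence clutter is suppressed in the resulting image.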

  9. Optimal method for exoplanet detection by angular differential imaging.

    Science.gov (United States)

    Mugnier, Laurent M; Cornia, Alberto; Sauvage, Jean-François; Rousset, Gérard; Fusco, Thierry; Védrenne, Nicolas

    2009-06-01

    We propose a novel method for the efficient direct detection of exoplanets from the ground using angular differential imaging. The method combines images appropriately, then uses the combined images jointly in a maximum-likelihood framework to estimate the position and intensity of potential planets orbiting the observed star. It takes into account the mixture of photon and detector noises and a positivity constraint on the planet's intensity. A reasonable detection criterion is also proposed based on the computation of the noise propagation from the images to the estimated intensity of the potential planet. The implementation of this method is tested on simulated data that take into account static aberrations before and after the coronagraph, residual turbulence after adaptive optics correction, and noise.

  10. A Secret Image Sharing Method Using Integer Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Li Ching-Chung

    2007-01-01

Full Text Available A new image sharing method, based on the reversible integer-to-integer (ITI) wavelet transform and Shamir's threshold scheme, is presented that provides highly compact shadows for real-time progressive transmission. This method, working in the wavelet domain, processes the transform coefficients in each subband, divides each of the resulting combination coefficients into shadows, and allows recovery of the complete secret image from any set of shadows that meets the threshold. We take advantage of properties of the wavelet transform multiresolution representation, such as coefficient magnitude decay and excellent energy compaction, to design combination procedures for the transform coefficients and processing sequences in wavelet subbands such that small shadows for real-time progressive transmission are obtained. Experimental results demonstrate that the proposed method yields small shadow images and has the capabilities of real-time progressive transmission and perfect reconstruction of secret images.
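The Shamir threshold step at the heart of such schemes can be sketched for a single value; the prime field GF(257) and the application to one coefficient at a time are assumptions for illustration (the paper shares combination coefficients in the wavelet domain).

```python
import random

P = 257  # a prime just above the 8-bit range; the field choice is an assumption

def make_shares(secret, k, n):
    """Split one value into n shares; any k of them recover it (k-1 random coeffs)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than k shares reveal nothing about the secret value, which is what makes the per-coefficient shadows safe to distribute.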

  11. Method of Infrared Image Enhancement Based on Stationary Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    QI Fei; LI Yan-jun; ZHANG Ke

    2008-01-01

To address the poor contrast and fuzzy edges characteristic of infrared images, a contrast-enhancement method based on the stationary wavelet transform is presented. After applying the stationary wavelet transform to an infrared image, denoising is done by the proposed double-threshold shrinkage method in the detail coefficient matrices, which have high noise intensity. For the approximation coefficient matrix, which has low noise intensity, enhancement is done by the proposed histogram-based method. The enhanced image is obtained by wavelet coefficient reconstruction. Furthermore, an evaluation criterion of enhancement performance is introduced. The results show that this algorithm ensures target enhancement and suppresses additive Gaussian white noise effectively. At the same time, its computational cost is small and it runs fast.

  12. Fast Registration Method for Point Clouds Using the Image Information

    Directory of Open Access Journals (Sweden)

    WANG Ruiyan

    2016-01-01

Full Text Available Existing laser scanners usually carry a coaxial camera that can capture images at the scanning site. For laser scanners with a coaxial camera, we propose a fast registration method that uses the image information. Unlike traditional registration methods that compute the rotation and translation simultaneously, our method calculates them separately. The rotation transformation between the point clouds is obtained from the image information using knowledge of vision geometry, while the translation is acquired by our improved ICP algorithm. In the improved ICP algorithm, only the translation vector is updated iteratively; its input is the point clouds with the rotation transformation removed. Experimental results show that the rotation matrix obtained from the images has high accuracy. In addition, compared with the traditional ICP algorithm, our algorithm converges faster and reaches the global optimum more reliably.
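The translation-only ICP update described above can be sketched as follows, assuming the rotation has already been removed using the image information; brute-force nearest-neighbor matching is used for brevity, and the function name is illustrative.

```python
import numpy as np

def icp_translation(src, dst, iters=20):
    """Estimate only the translation aligning src to dst: at each iteration,
    match each moved source point to its nearest destination point and
    update t by the mean residual."""
    t = np.zeros(src.shape[1])
    for _ in range(iters):
        moved = src + t
        # brute-force nearest neighbors (a k-d tree would be used at scale)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = dst[d2.argmin(axis=1)]
        t = t + (nn - moved).mean(axis=0)
    return t
```

Because only a d-dimensional vector is updated per iteration, each step is a closed-form mean, which is consistent with the faster convergence the abstract reports.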

  13. Shape determination of unidimensional objects: the virtual image correlation method

    Directory of Open Access Journals (Sweden)

    Auradou H.

    2010-06-01

Full Text Available The proposed method, named Virtual Image Correlation, allows one to identify an analytical expression for the shape of a curvilinear object from its image. It uses a virtual beam whose curvature field is expressed as a truncated mathematical series. The virtual beam width only needs to be close to the physical one; its gray level (in the transverse direction) is bell-shaped. The method consists in finding the coefficients of the series for which the correlation between the physical and virtual beams is best. The accuracy and robustness of the method are shown by means of two examples. The first details a Young's modulus identification from a cantilever beam image. The second concerns a thermal plume image, which has weak contrast and considerable noise.

  14. Shape determination of unidimensional objects: the virtual image correlation method

    Science.gov (United States)

    Francois, M.; Semin, B.; Auradou, H.; Vatteville, J.

    2010-06-01

The proposed method, named Virtual Image Correlation, allows one to identify an analytical expression for the shape of a curvilinear object from its image. It uses a virtual beam whose curvature field is expressed as a truncated mathematical series. The virtual beam width only needs to be close to the physical one; its gray level (in the transverse direction) is bell-shaped. The method consists in finding the coefficients of the series for which the correlation between the physical and virtual beams is best. The accuracy and robustness of the method are shown by means of two examples. The first details a Young's modulus identification from a cantilever beam image. The second concerns a thermal plume image, which has weak contrast and considerable noise.

  15. A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity

    Directory of Open Access Journals (Sweden)

    Mar Bisquert

    2015-01-01

Full Text Available High-spatial-resolution satellites usually have the constraint of a low temporal frequency, which leads to long periods without information in cloudy areas, whereas low-spatial-resolution satellites have higher revisit frequencies. Combining information from high- and low-spatial-resolution satellites is thought to be a key factor for studies that require dense time series of high-resolution images, e.g., crop monitoring. Several fusion methods exist in the literature, but they are time-consuming and complicated to implement, and the local evaluation of the fused images is rarely analyzed. In this paper, we present a simple and fast fusion method based on a weighted average of two input images (H and L), which are weighted by their temporal validity with respect to the image to be fused. The method was applied to two years (2009–2010) of Landsat and MODIS (MODerate Resolution Imaging Spectroradiometer) images acquired over a cropped area in Brazil. The fusion method was evaluated at global and local scales. The results show that the fused images reproduced reliable crop temporal profiles and correctly delineated the boundaries between two neighboring fields. The greatest advantages of the proposed method are the execution time and ease of use, which allow us to obtain a fused image in less than five minutes.
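A minimal sketch of such a fusion rule follows; the inverse-temporal-distance weighting is an assumption standing in for the paper's temporal-validity estimate, and the images are assumed co-registered and resampled to a common grid.

```python
import numpy as np

def fuse_by_temporal_validity(img_a, img_b, t_a, t_b, t_target, eps=1e-6):
    """Weighted average of two co-registered images; each weight is the
    inverse of that acquisition's temporal distance to the target date."""
    w_a = 1.0 / (abs(t_target - t_a) + eps)
    w_b = 1.0 / (abs(t_target - t_b) + eps)
    return (w_a * img_a + w_b * img_b) / (w_a + w_b)
```

The image acquired closer in time to the target date dominates the average, which is the intended behavior: a recent low-resolution observation corrects an older high-resolution one, and vice versa.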

  16. Study of Denoising Method of Images- A Review

    Directory of Open Access Journals (Sweden)

    Ravi Mohan Sairam

    2013-05-01

Full Text Available This paper undertakes a study of denoising methods. Different noise densities have been removed using filters and wavelet-based methods. The Fourier transform method is localized in the frequency domain, whereas the wavelet transform method is localized in both the frequency and spatial domains, but neither method is data-adaptive. Independent component analysis (ICA) is a higher-order statistical tool for the analysis of multidimensional data with an inherent data-adaptiveness property. This paper presents a review of some significant work in the area of image denoising, classifies some popular approaches into different groups, and concludes which technique is best for image denoising.

  17. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

The purpose of this study is to develop methods in array signal processing which achieve accurate signal reconstruction from limited observations, resulting in high-resolution imaging. The focus is on underwater acoustic applications and sonar signal processing, both in active (transmit and receive) and passive (receive only) mode. The study addresses the limitations of existing methods and shows that, in many cases, the proposed methods overcome these limitations and outperform traditional methods for acoustic imaging. The project comprises two parts; the first deals with computational methods in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering.

  18. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.

  19. Defining the value of magnetic resonance imaging in prostate brachytherapy using time-driven activity-based costing.

    Science.gov (United States)

    Thaker, Nikhil G; Orio, Peter F; Potters, Louis

    Magnetic resonance imaging (MRI) simulation and planning for prostate brachytherapy (PBT) may deliver potential clinical benefits but at an unknown cost to the provider and healthcare system. Time-driven activity-based costing (TDABC) is an innovative bottom-up costing tool in healthcare that can be used to measure the actual consumption of resources required over the full cycle of care. TDABC analysis was conducted to compare patient-level costs for an MRI-based versus traditional PBT workflow. TDABC cost was only 1% higher for the MRI-based workflow, and utilization of MRI allowed for cost shifting from other imaging modalities, such as CT and ultrasound, to MRI during the PBT process. Future initiatives will be required to follow the costs of care over longer periods of time to determine if improvements in outcomes and toxicities with an MRI-based approach lead to lower resource utilization and spending over the long-term. Understanding provider costs will become important as healthcare reform transitions to value-based purchasing and other alternative payment models. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  20. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.

  1. A simple method for panretinal imaging with the slit lamp.

    Science.gov (United States)

    Gellrich, Marcus-Matthias

    2016-12-01

Slit lamp biomicroscopy of the retina with a convex lens is a key procedure in clinical practice. The methods presented enable ophthalmologists to adequately image large and peripheral parts of the fundus using a video slit lamp and freely available stitching software. A routine examination of the fundus with a slit lamp and a +90 D lens is recorded on video. Later, sufficiently sharp still images are identified in the video sequence. These still images are imported into a freely available image-processing program (Hugin, for stitching mosaics together digitally), and corresponding points are marked on adjacent still images with some overlap. Using Hugin, panoramic overviews of the retina can be built that extend to the equator. This makes it possible to image diseases involving the whole retina or its periphery by performing a structured fundus examination with a video slit lamp. Similar video slit lamp images based on a fundus examination through a hand-held non-contact lens have not been demonstrated before. The methods presented enable ophthalmologists without high-end imaging equipment to monitor pathological fundus findings. The suggested procedure might even be interesting for retinological departments if peripheral findings are to be documented, which can be difficult with fundus cameras.

  2. Study on the Medical Image Distributed Dynamic Processing Method

    Institute of Scientific and Technical Information of China (English)

    张全海; 施鹏飞

    2003-01-01

To meet the challenge of implementing rapidly advancing, time-consuming medical image processing algorithms, it is necessary to develop a medical image processing technology to process a 2D or 3D medical image dynamically on the web. In earlier systems, only static image processing could be provided because of the limitations of web technology. The development of Java and CORBA (common object request broker architecture) overcomes the shortcomings of static web applications and makes the dynamic processing of medical images on the web available. To develop an open solution for distributed computing, we integrate Java and the web with CORBA and present a web-based medical image dynamic processing method, which adopts Java as the language for programming the application and web components and utilizes the CORBA architecture to cope with the heterogeneous nature of a complex distributed system. The method also provides a platform-independent, transparent processing architecture to implement advanced image routines and enables users to access large datasets and resources according to the requirements of medical applications. The experiment in this paper shows that the medical image dynamic processing method implemented on the web using Java and CORBA is feasible.

  3. Segmentation of stochastic images with a stochastic random walker method.

    Science.gov (United States)

    Pätz, Torben; Preusser, Tobias

    2012-05-01

    We present an extension of the random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts or more general from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty onto the result when using random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between the pixels. For given seed regions, the probability is evaluated for a random walk on this graph starting at a pixel to end in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining the recent developments in numerical methods for the discretization of stochastic PDEs and an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows determining the probability density function of the segmented object volume.

  4. A fully automatic image-to-world registration method for image-guided procedure with intraoperative imaging updates

    Science.gov (United States)

    Li, Senhu; Sarment, David

    2016-03-01

Image-guided procedures with intraoperative imaging updates have made a big impact on minimally invasive surgery. A compact and mobile CT imaging device combined with a commercially available image-guided navigation system is a legitimate and cost-efficient solution for a typical operating room setup. However, the process of manual fiducial-based registration between image and physical spaces (image-to-world) is troublesome for surgeons during the procedure, causes many interruptions, and is the main source of registration errors. In this study, we developed a novel method to eliminate the manual registration process. Instead of using a probe to manually localize the fiducials during surgery, a tracking plate with known fiducial positions relative to the reference coordinates is designed and fabricated using a 3D printing technique. The workflow and feasibility of this method have been studied through a phantom experiment.

  5. A Method of Removing Reflected Highlight on Images Based on Polarimetric Imaging

    Directory of Open Access Journals (Sweden)

    Fanchao Yang

    2016-01-01

Full Text Available A method of removing reflected highlights based on polarimetric imaging is proposed. Polarization images (0°, 45°, 90°, and 135°) and the reflection angle are required by this reflected-light removal algorithm. The method is based on the physical model of reflection and refraction, and no additional image processing algorithm is necessary. Compared with the traditional polarization method using a single polarizer, observation restricted to the Brewster angle is not required, and multiple reflection areas with different polarization orientations can be removed simultaneously. Experimental results demonstrate the features of this reflected-light removal algorithm, which make it very suitable for polarization remote sensing.
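The four polarization images the method requires are commonly combined into linear Stokes parameters as a first step; the sketch below shows that standard combination only (the Fresnel-based highlight-removal step itself is not reproduced here).

```python
import numpy as np

def stokes_from_polarization(i0, i45, i90, i135):
    """Linear Stokes parameters from intensity images taken through an
    analyzer at 0, 45, 90 and 135 degrees, plus the derived degree (DoLP)
    and angle (AoP) of linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # diagonal components
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)
    return s0, s1, s2, dolp, aop
```

Specular highlights are strongly linearly polarized (high DoLP), which is the property such removal algorithms exploit when separating the reflected component from the diffuse one.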

  6. Three-dimensional imaging of vortex structure in a ferroelectric nanoparticle driven by an electric field.

    Science.gov (United States)

    Karpov, D; Liu, Z; Rolo, T Dos Santos; Harder, R; Balachandran, P V; Xue, D; Lookman, T; Fohtung, E

    2017-08-17

Topological defects of spontaneous polarization are extensively studied as templates for unique physical phenomena and in the design of reconfigurable electronic devices. Experimental investigations of the complex topologies of polarization have been limited to surface phenomena, which has restricted the probing of the dynamic volumetric domain morphology in operando. Here, we utilize Bragg coherent diffractive imaging of a single BaTiO3 nanoparticle in a composite polymer/ferroelectric capacitor to study the behavior of a three-dimensional vortex formed due to competing interactions involving ferroelectric domains. Our investigation of the structural phase transitions under the influence of an external electric field shows a mobile vortex core exhibiting a reversible hysteretic transformation path. We also study the toroidal moment of the vortex under the action of the field. Our results open avenues for the study of the structure and evolution of polar vortices and other topological structures in operando in functional materials under cross field configurations. Imaging of topological states of matter such as vortex configurations has generally been limited to 2D surface effects. Here Karpov et al. study the volumetric structure and dynamics of a vortex core mediated by electric-field induced structural phase transition in a ferroelectric BaTiO3 nanoparticle.

  7. Exploratory Analysis of Multivariate Data (Unsupervised Image Segmentation and Data Driven Linear and Nonlinear Decomposition)

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen

    2002-01-01

This work describes different methods that are useful in the analysis of multivariate single and multiset data. The thesis covers selected aspects of relevant data analysis techniques in this context. Methods dedicated to handling data of a spatial nature are of primary interest, with focus on data…

  8. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    Science.gov (United States)

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  9. High-speed imaging of an ultrasound-driven bubble in contact with a wall: " Narcissus" effect and resolved acoustic streaming

    NARCIS (Netherlands)

    Marmottant, Philippe; Versluis, Michel; Jong, de Nico; Hilgenfeldt, Sascha; Lohse, Detlef

    2006-01-01

    We report microscopic observations of the primary flow oscillation of an acoustically driven bubble in contact with a wall, captured with the ultra high-speed camera Brandaris 128 (Chin et al. 2003). The driving frequency is up to 200 kHz, and the imaging frequency is up to 25 MHz. The details of th

  10. Multi-crack imaging using nonclassical nonlinear acoustic method

    Science.gov (United States)

    Zhang, Lue; Zhang, Ying; Liu, Xiao-Zhou; Gong, Xiu-Fen

    2014-10-01

    Solid materials with cracks exhibit the nonclassical nonlinear acoustical behavior. The micro-defects in solid materials can be detected by nonlinear elastic wave spectroscopy (NEWS) method with a time-reversal (TR) mirror. While defects lie in viscoelastic solid material with different distances from one another, the nonlinear and hysteretic stress—strain relation is established with Preisach—Mayergoyz (PM) model in crack zone. Pulse inversion (PI) and TR methods are used in numerical simulation and defect locations can be determined from images obtained by the maximum value. Since false-positive defects might appear and degrade the imaging when the defects are located quite closely, the maximum value imaging with a time window is introduced to analyze how defects affect each other and how the fake one occurs. Furthermore, NEWS-TR-NEWS method is put forward to improve NEWS-TR scheme, with another forward propagation (NEWS) added to the existing phases (NEWS and TR). In the added phase, scanner locations are determined by locations of all defects imaged in previous phases, so that whether an imaged defect is real can be deduced. NEWS-TR-NEWS method is proved to be effective to distinguish real defects from the false-positive ones. Moreover, it is also helpful to detect the crack that is weaker than others during imaging procedure.

  11. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

In the course of research on quantitative assay of non-destructive measurement of radioactive waste, we have developed a unique program based on Bayesian theory for reconstruction of transmission computed tomography (TCT) images. Reconstruction of cross-section images in CT usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image with every measurement step. Namely, this method can promptly display a cross-section image corresponding to the projection data from each angle as it is measured. Hence, it is possible to observe an improved cross-section view reflecting each projection in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied not only to CT scanners of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program of cross-section images in the CT of ...
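The per-measurement refinement described in this record can be sketched with a multiplicative, ML-EM-style update applied immediately after each projection; the abstract does not give the exact Bayesian update rule, so the toy two-view geometry and the update formula below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Toy 2x2 image (flattened) observed through two "views": row sums and
# column sums. A multiplicative update is applied right after each view,
# mimicking the per-measurement improvement of the cross-section image.
x_true = np.array([1.0, 2.0, 3.0, 4.0])
A_rows = np.array([[1, 1, 0, 0],
                   [0, 0, 1, 1]], float)   # row-sum projection
A_cols = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1]], float)   # column-sum projection
views = [(A_rows, A_rows @ x_true), (A_cols, A_cols @ x_true)]

x = np.ones(4)                              # flat initial guess
for _ in range(20):                         # sweep over the views repeatedly
    for A, y in views:
        ratio = y / (A @ x)                 # measured / predicted projection
        x *= (A.T @ ratio) / A.sum(axis=0)  # multiplicative back-projection step

# The solution is under-determined, but it must reproduce every projection.
print(np.allclose(A_rows @ x, A_rows @ x_true),
      np.allclose(A_cols @ x, A_cols @ x_true))  # -> True True
```

Updating after each view, rather than after a full rotation, is what allows an improved image to be displayed in near real time.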

  12. A Quick and Affine Invariance Matching Method for Oblique Images

    Directory of Open Access Journals (Sweden)

    XIAO Xiongwu

    2015-04-01

Full Text Available This paper proposes a quick, affine-invariant matching method for oblique images. It calculates an initial affine matrix by making full use of the two estimated camera axis orientation parameters of an oblique image, recovers the oblique image to a rectified image by applying the inverse affine transform, and then extracts features from the rectified image with the SIFT method. We used the nearest neighbor distance ratio (NNDR) and normalized cross correlation (NCC) measure constraints together with a consistency check to obtain the coarse matches, then used the RANSAC method to calculate the fundamental matrix and the homography matrix. We kept the matches that were inliers when calculating the homography matrix, and calculated the average value of those matches' principal direction differences. During the matching process, we obtained the initial matching features by the nearest neighbor (NN) matching strategy, then used the epipolar constraint, the homography constraint, the NCC measure constraint, and a consistency check of the initial matches' principal direction differences against the average value computed from the inlier matches to eliminate false matches. Experiments conducted on three pairs of typical oblique images demonstrate that our method takes about the same time as SIFT to match a pair of oblique images, with plenty of corresponding points distributed evenly and an extremely low mismatching rate.
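The NNDR constraint used above is Lowe's ratio test: a match is kept only when the nearest descriptor is clearly closer than the second nearest. A minimal sketch on hypothetical 2-D descriptors (real SIFT descriptors are 128-D, but the test is identical):

```python
import numpy as np

def nndr_matches(desc1, desc2, ratio=0.8):
    """Nearest-neighbour distance ratio test (Lowe's criterion)."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    nearest, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc1))
    keep = d[rows, nearest] < ratio * d[rows, second]
    return [(int(i), int(nearest[i])) for i in np.flatnonzero(keep)]

# Descriptor 0 has an unambiguous partner; descriptor 1 is ambiguous
# (two near-identical candidates), so the ratio test rejects it.
desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [3.0, 4.0], [5.1, 5.0], [5.0, 5.1]])
print(nndr_matches(desc1, desc2))  # -> [(0, 0)]
```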

  13. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Science.gov (United States)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  14. Stabilized Numerical Methods for Stochastic Differential Equations driven by Diffusion and Jump-Diffusion Processes

    OpenAIRE

    Blumenthal, Adrian

    2015-01-01

    Stochastic models that account for sudden, unforeseeable events play a crucial role in many different fields such as finance, economics, biology, chemistry, physics and so on. That kind of stochastic problems can be modeled by stochastic differential equations driven by jump-diffusion processes. In addition, there are situations, where a stochastic model is based on stochastic differential equations with multiple scales. Such stochastic problems are called stiff and lead for classical ex...

  15. Ultrafast optical imaging technology: principles and applications of emerging methods

    Science.gov (United States)

    Mikami, Hideharu; Gao, Liang; Goda, Keisuke

    2016-09-01

    High-speed optical imaging is an indispensable technology for blur-free observation of fast transient dynamics in virtually all areas including science, industry, defense, energy, and medicine. High temporal resolution is particularly important for microscopy as even a slow event appears to occur "fast" in a small field of view. Unfortunately, the shutter speed and frame rate of conventional cameras based on electronic image sensors are significantly constrained by their electrical operation and limited storage. Over the recent years, several unique and unconventional approaches to high-speed optical imaging have been reported to circumvent these technical challenges and achieve a frame rate and shutter speed far beyond what can be reached with the conventional image sensors. In this article, we review the concepts and principles of such ultrafast optical imaging methods, compare their advantages and disadvantages, and discuss an entirely new class of applications that are possible using them.

  16. Non-image-forming light driven functions are preserved in a mouse model of autosomal dominant optic atrophy.

    Science.gov (United States)

    Perganta, Georgia; Barnard, Alun R; Katti, Christiana; Vachtsevanos, Athanasios; Douglas, Ron H; MacLaren, Robert E; Votruba, Marcela; Sekaran, Sumathi

    2013-01-01

    Autosomal dominant optic atrophy (ADOA) is a slowly progressive optic neuropathy that has been associated with mutations of the OPA1 gene. In patients, the disease primarily affects the retinal ganglion cells (RGCs) and causes optic nerve atrophy and visual loss. A subset of RGCs are intrinsically photosensitive, express the photopigment melanopsin and drive non-image-forming (NIF) visual functions including light driven circadian and sleep behaviours and the pupil light reflex. Given the RGC pathology in ADOA, disruption of NIF functions might be predicted. Interestingly in ADOA patients the pupil light reflex was preserved, although NIF behavioural outputs were not examined. The B6; C3-Opa1(Q285STOP) mouse model of ADOA displays optic nerve abnormalities, RGC dendropathy and functional visual disruption. We performed a comprehensive assessment of light driven NIF functions in this mouse model using wheel running activity monitoring, videotracking and pupillometry. Opa1 mutant mice entrained their activity rhythm to the external light/dark cycle, suppressed their activity in response to acute light exposure at night, generated circadian phase shift responses to 480 nm and 525 nm pulses, demonstrated immobility-defined sleep induction following exposure to a brief light pulse at night and exhibited an intensity dependent pupil light reflex. There were no significant differences in any parameter tested relative to wildtype littermate controls. Furthermore, there was no significant difference in the number of melanopsin-expressing RGCs, cell morphology or melanopsin transcript levels between genotypes. Taken together, these findings suggest the preservation of NIF functions in Opa1 mutants. The results provide support to growing evidence that the melanopsin-expressing RGCs are protected in mitochondrial optic neuropathies.

  17. Non-image-forming light driven functions are preserved in a mouse model of autosomal dominant optic atrophy.

    Directory of Open Access Journals (Sweden)

    Georgia Perganta

Full Text Available Autosomal dominant optic atrophy (ADOA) is a slowly progressive optic neuropathy that has been associated with mutations of the OPA1 gene. In patients, the disease primarily affects the retinal ganglion cells (RGCs) and causes optic nerve atrophy and visual loss. A subset of RGCs are intrinsically photosensitive, express the photopigment melanopsin and drive non-image-forming (NIF) visual functions including light driven circadian and sleep behaviours and the pupil light reflex. Given the RGC pathology in ADOA, disruption of NIF functions might be predicted. Interestingly in ADOA patients the pupil light reflex was preserved, although NIF behavioural outputs were not examined. The B6; C3-Opa1(Q285STOP) mouse model of ADOA displays optic nerve abnormalities, RGC dendropathy and functional visual disruption. We performed a comprehensive assessment of light driven NIF functions in this mouse model using wheel running activity monitoring, videotracking and pupillometry. Opa1 mutant mice entrained their activity rhythm to the external light/dark cycle, suppressed their activity in response to acute light exposure at night, generated circadian phase shift responses to 480 nm and 525 nm pulses, demonstrated immobility-defined sleep induction following exposure to a brief light pulse at night and exhibited an intensity dependent pupil light reflex. There were no significant differences in any parameter tested relative to wildtype littermate controls. Furthermore, there was no significant difference in the number of melanopsin-expressing RGCs, cell morphology or melanopsin transcript levels between genotypes. Taken together, these findings suggest the preservation of NIF functions in Opa1 mutants. The results provide support to growing evidence that the melanopsin-expressing RGCs are protected in mitochondrial optic neuropathies.

  18. PRECL: A new method for interferometry imaging from closure phase

    CERN Document Server

    Ikeda, Shiro; Akiyama, Kazunori; Hada, Kazuhiro; Honma, Mareki

    2016-01-01

For short-wavelength VLBI observations, it is difficult to measure the phase of the visibility function accurately. The closure phases are reliable measurements under this situation, though they are not sufficient to retrieve all of the phase information. We propose a new method, Phase Retrieval from Closure Phase (PRECL). PRECL estimates all the visibility phases from the closure phases alone. Combining PRECL with a sparse modeling method we have already proposed, the VLBI imaging process relies on neither the dirty image nor self-calibration. The proposed method is tested numerically and the results are promising.
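The core linear-algebra idea, recovering baseline phases consistent with the measured closure phases, can be sketched as a least-squares problem. The 4-station array, triangle set, and minimum-norm solver below are illustrative, not the paper's actual formulation (PRECL also handles phase wrapping, which this sketch sidesteps by keeping phases small):

```python
import numpy as np

rng = np.random.default_rng(0)
n_st = 4
baselines = [(i, j) for i in range(n_st) for j in range(i + 1, n_st)]
b_idx = {b: k for k, b in enumerate(baselines)}
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Closure operator: psi_ijk = phi_ij + phi_jk - phi_ik
C = np.zeros((len(triangles), len(baselines)))
for t, (i, j, k) in enumerate(triangles):
    C[t, b_idx[(i, j)]] += 1
    C[t, b_idx[(j, k)]] += 1
    C[t, b_idx[(i, k)]] -= 1

phi_true = rng.uniform(-0.5, 0.5, len(baselines))  # small phases: no wrapping
psi = C @ phi_true                                 # observed closure phases

# Minimum-norm least-squares estimate of the baseline phases.
phi_hat, *_ = np.linalg.lstsq(C, psi, rcond=None)

# Station-based phase errors are invisible to closures, so phi_hat need not
# equal phi_true -- but it must reproduce every closure phase.
print(np.allclose(C @ phi_hat, psi))  # -> True
```

The rank deficiency of `C` (station phases are unobservable) is exactly why closure phases alone cannot retrieve all phase information, and why PRECL is paired with sparse-modeling imaging.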

  19. Image Watermarking Method Using Integer-to-Integer Wavelet Transforms

    Institute of Scientific and Technical Information of China (English)

    陈韬; 王京春

    2002-01-01

Digital watermarking is an efficient method for copyright protection for text, image, audio, and video data. This paper presents a new image watermarking method based on integer-to-integer wavelet transforms. The watermark is embedded in the significant wavelet coefficients by a simple exclusive OR operation. The method avoids the complicated computations and high computer memory requirements that are the main drawbacks of common frequency domain based watermarking algorithms. Simulation results show that the embedded watermark is perceptually invisible and robust to various operations, such as low-quality Joint Photographic Experts Group (JPEG) compression, random and Gaussian noise, and smoothing (mean filtering).
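A minimal sketch of the two ingredients named above: an integer-to-integer wavelet step (the Haar S-transform, whose inverse is exact in integer arithmetic) and XOR embedding of watermark bits into detail coefficients. The choice of coefficients and the non-blind extraction (XOR against the original coefficients) are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def haar_lifting_fwd(x):
    """Integer-to-integer Haar (S-transform); >> is floor division by 2."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b              # detail coefficients
    s = (a + b) >> 1       # approximation coefficients, still integers
    return s, d

def haar_lifting_inv(s, d):
    a = s + ((d + 1) >> 1)
    b = a - d
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

pixels = np.array([52, 55, 61, 59, 79, 61, 76, 61])
s, d = haar_lifting_fwd(pixels)

bits = np.array([1, 0, 1, 1])      # watermark bits, one per detail coefficient
d_marked = d ^ bits                # XOR embedding flips only the LSB

recovered = (d_marked ^ d) & 1     # non-blind extraction against original d
assert np.array_equal(recovered, bits)

# The marked coefficients survive the inverse/forward round trip exactly,
# which is the point of using an integer-to-integer transform.
img_marked = haar_lifting_inv(s, d_marked)
s2, d2 = haar_lifting_fwd(img_marked)
assert np.array_equal(s2, s) and np.array_equal(d2, d_marked)
```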

  20. Magnetic rotation imaging method to measure the geomagnetic field

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

A new imaging method for measuring the geomagnetic field based on the magnetic rotation effect is put forward. With the help of the polarization property of sunlight reflected from the ground and the magnetic rotation of the atmosphere, the geomagnetic field can be measured by an optical system installed on a satellite. According to this principle, a three-dimensional image of the geomagnetic field can be obtained. The measuring speed of this method is very high, and there are no blind spots or distortion. In this paper, the principle of the method is presented, and some key problems are discussed.

  1. New learning subspace method for image feature extraction

    Institute of Scientific and Technical Information of China (English)

    CAO Jian-hai; LI Long; LU Chang-hou

    2006-01-01

A new method, the Windows Minimum/Maximum Module Learning Subspace Algorithm (WMMLSA), for image feature extraction is presented. The WMMLSA is insensitive to the order of the training samples and can effectively regulate the radical vectors of an image feature subspace by selecting the study samples for the subspace iterative learning algorithm, so it can improve the robustness and generalization capacity of a pattern subspace and enhance the recognition rate of a classifier. At the same time, a pattern subspace is built by the PCA method. The classifier based on the WMMLSA is successfully applied to recognize pressed characters on gray-scale images. The results indicate that the correct recognition rate of the WMMLSA is higher than that of the Average Learning Subspace Method, and that both the training speed and the classification speed are improved. The new method is more applicable and efficient.

  2. Lapped Block Image Analysis via the Method of Legendre Moments

    Directory of Open Access Journals (Sweden)

    El Fadili Hakim

    2003-01-01

Full Text Available Research investigating the use of Legendre moments for pattern recognition has been performed in recent years. This field of research remains quite open. This paper proposes a new technique, the block-based reconstruction method (BBRM), using Legendre moments, compared with the global reconstruction method (GRM). To alleviate the blocking artifact involved in the processing, we propose a new approach using a lapped block-based reconstruction method (LBBRM). For the problem of selecting the optimal number of moments used to represent a given image, we propose the maximum entropy principle (MEP) method. The main motivation of the proposed approaches is to allow a fast and efficient reconstruction algorithm, with improved quality of the reconstructed images. A binary handwritten musical character and the multi-gray-level Lena image are used to demonstrate the performance of our algorithm.
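Moment-based block reconstruction can be sketched as follows: compute the Legendre moments of a block on the [-1, 1] x [-1, 1] square, then rebuild the block as a truncated Legendre series. The grid size, moment order, and test block below are illustrative, not from the paper:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_P(n, x):
    """Legendre polynomial P_n, evaluated via numpy's legval."""
    return legval(x, [0.0] * n + [1.0])

N, order = 100, 2
# Pixel centres mapped onto [-1, 1] (midpoint rule for the moment integrals).
coords = -1 + (2 * np.arange(N) + 1) / N
X, Y = np.meshgrid(coords, coords, indexing="ij")
f = (X + Y) / 2                      # a simple low-order test "image block"

dx = 2.0 / N
lam = np.zeros((order + 1, order + 1))
for p in range(order + 1):
    for q in range(order + 1):
        norm = (2 * p + 1) * (2 * q + 1) / 4.0   # orthogonality normalisation
        lam[p, q] = norm * dx * dx * np.sum(
            legendre_P(p, X) * legendre_P(q, Y) * f)

# Reconstruct the block from its moments (truncated Legendre series).
recon = sum(lam[p, q] * legendre_P(p, X) * legendre_P(q, Y)
            for p in range(order + 1) for q in range(order + 1))
print(np.max(np.abs(recon - f)) < 1e-2)  # -> True
```

Selecting `order` is exactly the truncation problem the record's MEP criterion addresses: too few moments blur the block, too many waste computation.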

  3. An Improved Image Segmentation Algorithm Based on MET Method

    Directory of Open Access Journals (Sweden)

    Z. A. Abo-Eleneen

    2012-09-01

Full Text Available Image segmentation is a basic component of many computer vision systems and pattern recognition. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, Kittler and Illingworth's minimum error thresholding (MET), noticeably improves the image segmentation effect. It is simpler and easier to implement. However, it fails in the presence of skew and heavy-tailed class-conditional distributions or if the histogram is unimodal or close to unimodal. The Fisher information (FI) measure is an important concept in statistical estimation theory and information theory. Employing the FI measure, an improved threshold image segmentation algorithm, an FI-based extension of MET, is developed. Compared with the MET method, the improved method can in general achieve more robust performance when the data for either class is skew and heavy-tailed.
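The baseline MET criterion can be sketched directly from the histogram: for each candidate threshold, fit a two-Gaussian mixture by class statistics and pick the threshold minimizing the classification-error criterion J(T). The synthetic bimodal data below is illustrative:

```python
import numpy as np

def met_threshold(hist):
    """Kittler-Illingworth minimum error threshold from a grey-level histogram."""
    g = np.arange(len(hist), dtype=float)
    p = hist / hist.sum()
    best_T, best_J = None, np.inf
    for T in range(1, len(hist) - 1):
        P1, P2 = p[:T].sum(), p[T:].sum()
        if P1 < 1e-9 or P2 < 1e-9:
            continue
        mu1 = (g[:T] * p[:T]).sum() / P1
        mu2 = (g[T:] * p[T:]).sum() / P2
        v1 = ((g[:T] - mu1) ** 2 * p[:T]).sum() / P1
        v2 = ((g[T:] - mu2) ** 2 * p[T:]).sum() / P2
        if v1 <= 0 or v2 <= 0:
            continue
        # J(T) = 1 + 2[P1 ln(sigma1) + P2 ln(sigma2)] - 2[P1 ln(P1) + P2 ln(P2)]
        J = 1 + 2 * (P1 * np.log(np.sqrt(v1)) + P2 * np.log(np.sqrt(v2))) \
              - 2 * (P1 * np.log(P1) + P2 * np.log(P2))
        if J < best_J:
            best_J, best_T = J, T
    return best_T

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 8, 4000),     # background mode
                         rng.normal(170, 12, 2000)])  # object mode
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 255))
T = met_threshold(hist)
print(60 < T < 170)  # -> True
```

On skewed or heavy-tailed class data the Gaussian assumption inside J(T) breaks down, which is the failure mode the record's FI-based extension targets.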

  4. Metal artifact reduction method using metal streaks image subtraction

    Energy Technology Data Exchange (ETDEWEB)

    Pua, Rizza D.; Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-04-15

Many studies have been dedicated to metal artifact reduction (MAR); however, the methods are successful to varying degrees depending on the situation. Sinogram in-painting, filtering, and iterative methods are some of the major categories of MAR. Each has its own merits and weaknesses. Combinations of these methods, or hybrid methods, have also been developed to exploit the different benefits of two techniques and minimize the unfavorable results. Our method focuses on the in-painting approach and a hybrid MAR described by Xia et al. Although the in-painting scheme is an effective technique for reducing the primary metal artifacts, a major drawback is the introduction of new artifacts caused by an inaccurate interpolation process. Furthermore, combining the segmented metal image with the corrected nonmetal image in the final step of a conventional in-painting approach leads to incorrect metal pixel values. Our proposed method begins with sinogram in-painting and ends with an image-based metal artifact reduction scheme. This work provides a simple yet effective solution for reducing metal artifacts while recovering the original metal pixel information. The proposed method demonstrated its effectiveness in a simulation setting, showing image quality comparable to the standard MAR but quantitatively more accurate.
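The in-painting stage that this record builds on can be sketched as per-row linear interpolation across the metal trace in the sinogram; the smooth toy sinogram below is illustrative, and the paper's actual interpolation scheme may differ:

```python
import numpy as np

def inpaint_sinogram(sino, metal_mask):
    """Replace metal-trace bins in each projection row by linear interpolation
    from the neighbouring unaffected detector bins."""
    out = sino.copy()
    cols = np.arange(sino.shape[1])
    for r in range(sino.shape[0]):
        bad = metal_mask[r]
        if bad.any():
            out[r, bad] = np.interp(cols[bad], cols[~bad], sino[r, ~bad])
    return out

# A smooth toy sinogram (4 views) with a corrupted (metal) detector segment.
sino = np.tile(np.sin(np.linspace(0, np.pi, 32)), (4, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 14:18] = True
corrupted = sino.copy()
corrupted[mask] = 10.0                      # metal trace dominates these bins
fixed = inpaint_sinogram(corrupted, mask)
print(np.max(np.abs(fixed - sino)) < 0.05)  # -> True
```

The residual error of such an interpolation is precisely the source of the secondary artifacts that the record's image-based final stage is designed to suppress.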

  5. Apparatus and method for motion tracking in brain imaging

    DEFF Research Database (Denmark)

    2013-01-01

    Disclosed is apparatus and method for motion tracking of a subject in medical brain imaging. The method comprises providing a light projector and a first camera; projecting a first pattern sequence (S1) onto a surface region of the subject with the light projector, wherein the subject is positioned...

  6. Classification of Polarimetric SAR Image Based on the Subspace Method

    Science.gov (United States)

    Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.

    2013-07-01

Land cover classification is one of the most significant applications in remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate clouds and has all-weather capability. Therefore, land cover classification from SAR images is important in remote sensing. The subspace method is a novel method for SAR data, which reduces data dimensionality by incorporating feature extraction into the classification process. This paper uses the averaged learning subspace method (ALSM), which can be applied to fully polarimetric SAR images for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. By conducting experiments on a fully polarimetric Radarsat-2 image, we conclude that the proposed method yields higher classification accuracy. Therefore, the ALSM classification method is a feasible alternative method for SAR image classification.

  7. Combination of acoustical radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho

    2013-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  8. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.;

    2015-01-01

A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare the traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to the change in different quasi-static material parameters and model-specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine

  9. A validated active contour method driven by parabolic arc model for detection and segmentation of mitochondria.

    Science.gov (United States)

    Tasel, Serdar F; Mumcuoglu, Erkan U; Hassanpour, Reza Z; Perkins, Guy

    2016-06-01

Recent studies reveal that mitochondria take substantial responsibility in cellular functions that are closely related to aging diseases caused by degeneration of neurons. These studies emphasize that the membrane and crista morphology of a mitochondrion should receive attention in order to investigate the link between mitochondrial function and its physical structure. Electron microscope tomography (EMT) allows analysis of the inner structures of mitochondria by providing highly detailed visual data from large volumes. Computerized segmentation of mitochondria with minimum manual effort is essential to accelerate the study of mitochondrial structure/function relationships. In this work, we improved and extended our previous attempts to detect and segment mitochondria from transmission electron microscopy (TEM) images. A parabolic arc model was utilized to extract membrane structures. Then, curve energy based active contours were employed to obtain roughly outlined candidate mitochondrial regions. Finally, a validation process was applied to obtain the final segmentation data. 3D extension of the algorithm is also presented in this paper. Our method achieved an average F-score performance of 0.84. Average Dice Similarity Coefficient and boundary error were measured as 0.87 and 14 nm respectively.

  10. A validated active contour method driven by parabolic arc model for detection and segmentation of mitochondria

    Science.gov (United States)

    Tasel, Serdar F.; Mumcuoglu, Erkan U.; Hassanpour, Reza Z.; Perkins, Guy

    2017-01-01

Recent studies reveal that mitochondria take substantial responsibility in cellular functions that are closely related to aging diseases caused by degeneration of neurons. These studies emphasize that the membrane and crista morphology of a mitochondrion should receive attention in order to investigate the link between mitochondrial function and its physical structure. Electron microscope tomography (EMT) allows analysis of the inner structures of mitochondria by providing highly detailed visual data from large volumes. Computerized segmentation of mitochondria with minimum manual effort is essential to accelerate the study of mitochondrial structure/function relationships. In this work, we improved and extended our previous attempts to detect and segment mitochondria from transmission electron microscopy (TEM) images. A parabolic arc model was utilized to extract membrane structures. Then, curve energy based active contours were employed to obtain roughly outlined candidate mitochondrial regions. Finally, a validation process was applied to obtain the final segmentation data. 3D extension of the algorithm is also presented in this paper. Our method achieved an average F-score performance of 0.84. Average Dice Similarity Coefficient and boundary error were measured as 0.87 and 14 nm respectively. PMID:26956730

  11. An effective method on pornographic images realtime recognition

    Science.gov (United States)

    Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui

    2013-03-01

In this paper, skin detection, texture filtering and face detection are used to extract features from an image library, and the decision tree algorithm is trained on them to create rules used as a decision tree classifier for distinguishing unknown images. In an experiment based on more than twenty thousand images, the precision rate reaches 76.21% when testing on 13025 pornographic images, and the elapsed time is less than 0.2 s. This experiment shows the method generalizes well. Among the steps mentioned above, we propose a new skin detection model, called the irregular polygon region skin detection model, based on the YCbCr color space. This skin detection model can lower the false detection rate of skin detection. A new method, called sequence region labeling, can calculate features on binary connected areas; it is faster and needs less memory than recursive methods.
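The polygon-region skin model can be sketched as a point-in-polygon test in the Cb-Cr plane. The BT.601 RGB-to-YCbCr conversion below is standard, but the polygon vertices are hypothetical, since the abstract does not give the paper's actual region:

```python
def rgb_to_cbcr(r, g, b):
    """BT.601 full-range RGB -> (Cb, Cr) chrominance pair."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

# Hypothetical irregular skin polygon in the Cb-Cr plane (illustrative only).
SKIN_POLY = [(77, 133), (127, 133), (127, 173), (100, 180), (77, 173)]

def point_in_polygon(pt, poly):
    """Ray-casting test: toggle on each edge crossing to the right of pt."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def is_skin(r, g, b):
    return point_in_polygon(rgb_to_cbcr(r, g, b), SKIN_POLY)

print(is_skin(200, 150, 120), is_skin(0, 0, 255))  # -> True False
```

An irregular polygon can follow the skin cluster's actual outline in the Cb-Cr plane more tightly than the usual rectangular bounds, which is how such a model lowers false detections.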

  12. Infrared medical image visualization and anomalies analysis method

    Science.gov (United States)

    Gong, Jing; Chen, Zhong; Fan, Jing; Yan, Liang

    2015-12-01

Infrared medical examination finds diseases by scanning the overall human body temperature with infrared thermal equipment and obtaining the temperature anomalies of the corresponding parts. In order to obtain the temperature anomalies and diseased parts, an infrared medical image visualization and anomaly analysis method is proposed in this paper. Firstly, visualize the original data as a single-channel gray image; secondly, turn the normalized gray image into a pseudo-color image; thirdly, apply background segmentation to filter out background noise; fourthly, cluster the anomalous pixels with the breadth-first search algorithm; lastly, mark the regions of temperature anomalies or diseased parts. Tests show that this is an efficient and accurate way to intuitively analyze and diagnose diseased body parts through temperature anomalies.
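The breadth-first-search clustering step can be sketched as non-recursive connected-component labeling with an explicit queue; the toy temperature map and anomaly threshold below are illustrative:

```python
import numpy as np
from collections import deque

def bfs_clusters(mask):
    """Label 4-connected clusters of True pixels with an explicit BFS queue
    (no recursion, so large clusters cannot overflow the call stack)."""
    labels = np.zeros(mask.shape, dtype=int)
    n_clusters = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                       # already assigned to a cluster
        n_clusters += 1
        labels[i, j] = n_clusters
        q = deque([(i, j)])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if (0 <= yy < mask.shape[0] and 0 <= xx < mask.shape[1]
                        and mask[yy, xx] and not labels[yy, xx]):
                    labels[yy, xx] = n_clusters
                    q.append((yy, xx))
    return labels, n_clusters

# Normalised temperature map with two separate hot (anomalous) regions.
temp = np.zeros((6, 6))
temp[1:3, 1:3] = 1.0   # first anomaly
temp[4, 4] = 1.0       # second, disconnected anomaly
labels, n = bfs_clusters(temp > 0.5)
print(n)  # -> 2
```

Each labelled cluster can then be marked on the pseudo-color image as a candidate diseased region.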

  13. A New Method for Determining Geometry of Planetary Images

    CERN Document Server

    Guio, P

    2010-01-01

This paper presents a novel semi-automatic image processing technique to estimate accurately, and objectively, the disc parameters of a planetary body on an astronomical image. The method relies on the detection of the limb and/or the terminator of the planetary body with the VOronoi Image SEgmentation (VOISE) algorithm (Guio and Achilleos, 2009). The resulting map of the segmentation is then used to identify the visible boundary of the planetary disc. The segments comprising this boundary are then used to perform a "best" fit to an algebraic expression for the limb and/or terminator of the body. We find that we are able to locate the centre of the planetary disc with an accuracy of a few tenths of a pixel. The method thus represents a useful processing stage for auroral "imaging" based studies.
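The boundary-fitting stage can be sketched with an algebraic (Kåsa) least-squares circle fit to limb points; the paper fits a more general algebraic curve, so the circular disc, pixel scale, and noise level below are simplifying assumptions:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    2*cx*x + 2*cy*y + c = x^2 + y^2, then r = sqrt(c + cx^2 + cy^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

rng = np.random.default_rng(2)
theta = rng.uniform(0, np.pi, 200)        # only part of the limb is visible
cx0, cy0, r0 = 320.0, 240.0, 100.0        # "true" disc (pixels)
xs = cx0 + r0 * np.cos(theta) + rng.normal(0, 0.3, theta.size)
ys = cy0 + r0 * np.sin(theta) + rng.normal(0, 0.3, theta.size)
cx, cy, r = fit_circle(xs, ys)
print(abs(cx - cx0) < 0.5 and abs(cy - cy0) < 0.5 and abs(r - r0) < 0.5)  # -> True
```

Averaging over many boundary segments is what pushes the centre estimate to sub-pixel accuracy even though each detected edge point is noisy.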

  14. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    Science.gov (United States)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into the strengths and shortcomings of these methods. The relevant methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding their physical condition. In addition, this work should help accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.

  15. Single Transducer Ultrasonic Imaging Method that Eliminates the Effect of Plate Thickness Variation in the Image

    Science.gov (United States)

    Roth, Don J.

    1996-01-01

    This article describes a single transducer ultrasonic imaging method that eliminates the effect of plate thickness variation in the image. The method thus isolates ultrasonic variations due to material microstructure. The use of this method can result in significant cost savings because the ultrasonic image can be interpreted correctly without the need for machining to achieve precise thickness uniformity during nondestructive evaluations of material development. The method is based on measurement of ultrasonic velocity. Images obtained using the thickness-independent methodology are compared with conventional velocity and c-scan echo peak amplitude images for monolithic ceramic (silicon nitride), metal matrix composite and polymer matrix composite materials. It was found that the thickness-independent ultrasonic images reveal and quantify correctly areas of global microstructural (pore and fiber volume fraction) variation due to the elimination of thickness effects. The thickness-independent ultrasonic imaging method described in this article is currently being commercialized under a cooperative agreement between NASA Lewis Research Center and Sonix, Inc.

  16. Method of Fire Image Identification Based on Optimization Theory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

In view of some distinctive characteristics of the early-stage flame image, a corresponding characteristic extraction method is presented. Also introduced is the application of an improved BP algorithm, based on optimization theory, to identifying fire image characteristics. First, the optimization of the BP neural network using the Levenberg-Marquardt algorithm, which has quadratic convergence, is discussed; then a new fire image identification system is devised. Extensive experiments and field tests have proved that this system can detect the early-stage fire flame quickly and reliably.

  17. PERFORMANCE OF IMPULSE NOISE DETECTION METHODS IN REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    Mrs.V.RADHIKA,

    2010-09-01

Full Text Available Remote sensing (RS) images are affected by different types of noise, such as Gaussian noise, speckle noise, and impulse noise. These noises are introduced into the RS images during the acquisition or transmission process. The main challenge in impulse noise removal is to suppress the noise while preserving details (edges). Removal of impulse noise is done in two stages: detection of noisy pixels and replacement of those pixels. Detecting and removing or reducing impulse noise is a very active research area in image processing. In this paper, three different existing detection methods are discussed with the intention of developing a new one.
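The detection stage described above is commonly illustrated with a median-deviation rule: a pixel is flagged as an impulse when it differs strongly from its neighbourhood median. The sketch below is a generic example of that family, not one of the three methods the paper discusses, and the threshold value is an assumption:

```python
import numpy as np

def detect_impulses(img, thresh=40):
    """Flag pixels that deviate strongly from their 3x3 neighbourhood median."""
    H, W = img.shape
    padded = np.pad(img.astype(float), 1, mode='edge')
    # Stack the nine samples of each 3x3 window and take the per-pixel median.
    windows = np.stack([padded[y:y + H, x:x + W] for y in range(3) for x in range(3)])
    med = np.median(windows, axis=0)
    return np.abs(img.astype(float) - med) > thresh
```

The replacement stage would then substitute the flagged pixels (e.g. with the same median), leaving unflagged pixels untouched.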

  18. Imaging through flesh tissue using fs electronic holographic gating method

    Institute of Scientific and Technical Information of China (English)

    侯比学; 陈国夫; 郝志琦; 丰善; 王淑岩; 王屹山; 王国志

    1999-01-01

The experimental results of imaging through flesh tissue using a fs electronic holographic gating method are reported. In the experiment, a Ti:sapphire mode-locked laser is used as the light source, with a repetition rate of 100 MHz, central wavelength of 800 nm, pulse duration of 20 fs, and output power of 80 mW. The tissue is a 7 mm thick chicken slice, and the imaged object is a metal wire with a diameter of 0.5 mm. A general CCD is used to record holograms, and a clear image of the metal wire is obtained. Several relevant problems are discussed.

  19. Moment-Based Method to Estimate Image Affine Transform

    Institute of Scientific and Technical Information of China (English)

    FENG Guo-rui; JIANG Ling-ge

    2005-01-01

The estimation of affine transforms is a crucial problem in the image recognition field. This paper resorts to some properties invariant under translation, rotation and scaling, and proposes a simple method to estimate the affine transform kernel of a two-dimensional gray image. Maps applied to the original image produce correlative points that accurately reflect the affine transform feature of the image. Furthermore, the unknown variables in the transform kernel are calculated. The whole scheme refers only to first-order moments; therefore, it has very good stability.

  20. Research Dynamics of the Classification Methods of Remote Sensing Images

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHANG; Baoguo; WU; Dong; WANG

    2013-01-01

As the key technology for extracting remote sensing information, the classification of remote sensing images has always been a research focus in the field of remote sensing. The paper introduces the classification process and systems for remote sensing images. Based on the recent research status of domestic and international remote sensing classification methods, new study dynamics in remote sensing classification, such as artificial neural networks, support vector machines, active learning and ensembles of multiple classifiers, are introduced, providing references for the automatic and intelligent development of remote sensing image classification.

  1. [Ectopic parathyroid glands. Imaging methods and surgical access].

    Science.gov (United States)

    Fialová, M; Adámková, J; Adámek, S; Libánský, P; Kubinyi, J

    2014-08-01

    We discuss the benefits of imaging methods in localizing ectopic parathyroid glands in patients with primary hyperparathyroidism. The ectopic localizations are discussed within the context of the orthotopic norm. In the sample of 123 patients, a 23% rate of ectopic parathyroid glands was detected. Three selected case studies are presented, supporting the benefit of SPECT/CT imaging in terms of surgical access strategy selection.

  2. Immunohistochemical and calcium imaging methods in wholemount rat retina.

    Science.gov (United States)

    Sargoy, Allison; Barnes, Steven; Brecha, Nicholas C; Pérez De Sevilla Müller, Luis

    2014-10-13

    In this paper we describe the tools, reagents, and the practical steps that are needed for: 1) successful preparation of wholemount retinas for immunohistochemistry and, 2) calcium imaging for the study of voltage gated calcium channel (VGCC) mediated calcium signaling in retinal ganglion cells. The calcium imaging method we describe circumvents issues concerning non-specific loading of displaced amacrine cells in the ganglion cell layer.

  3. Immunohistochemical and Calcium Imaging Methods in Wholemount Rat Retina

    OpenAIRE

    SARGOY, ALLISON; Barnes, Steven; Brecha, Nicholas C.; De Sevilla Müller, Luis Pérez

    2014-01-01

    In this paper we describe the tools, reagents, and the practical steps that are needed for: 1) successful preparation of wholemount retinas for immunohistochemistry and, 2) calcium imaging for the study of voltage gated calcium channel (VGCC) mediated calcium signaling in retinal ganglion cells. The calcium imaging method we describe circumvents issues concerning non-specific loading of displaced amacrine cells in the ganglion cell layer.

  4. Apparatus and method for velocity estimation in synthetic aperture imaging

    DEFF Research Database (Denmark)

    2003-01-01

    The invention relates to an apparatus for flow estimation using synthetic aperture imaging. The method uses a Synthetic Transmit Aperture, but unlike previous approaches a new frame is created after every pulse emission. In receive mode parallel beam forming is implemented. The beam formed RF data......). The update signals are used in the velocity estimation processor (8) to correlate the individual measurements to obtain the displacement between high-resolution images and thereby determine the velocity....
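The correlation step mentioned above (correlating measurements to obtain the displacement between high-resolution images) can be sketched generically. The following illustrative function is not the patented apparatus: it simply finds the lag maximising the cross-correlation of two successive 1-D signals and converts it to a velocity, where `dz` (sample spacing) and `dt` (time between the two signals) are assumed parameters:

```python
import numpy as np

def estimate_velocity(sig_a, sig_b, dz, dt):
    """Estimate axial velocity from the lag maximising the cross-correlation
    between two successive (high-resolution) signals.

    dz: spatial sample spacing [m]; dt: time between the two signals [s].
    """
    xc = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode='full')
    lag = np.argmax(xc) - (len(sig_a) - 1)  # samples sig_b is shifted vs sig_a
    return lag * dz / dt
```

Sub-sample accuracy would in practice require interpolating around the correlation peak; the integer-lag version above only shows the principle.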

  5. Interpretation of the method of images in estimating superconducting levitation

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Diaz, Jose Luis [Departamento de Ingenieria Mecanica, Universidad Carlos III de Madrid, Butarque 15, E28911 Leganes (Spain)], E-mail: jlperez@ing.uc3m.es; Garcia-Prada, Juan Carlos [Departamento de Ingenieria Mecanica, Universidad Carlos III de Madrid, Butarque 15, E28911 Leganes (Spain)

    2007-12-01

    Among different papers devoted to superconducting levitation of a permanent magnet over a superconductor using the method of images, there is a discrepancy of a factor of two when estimating the lift force. This is not a minor matter but an interesting fundamental question that contributes to understanding the physical phenomena of 'imaging' on a superconductor surface. We solve it, make clear the physical behavior underlying it, and suggest the reinterpretation of some previous experiments.

  6. Interpretation of the method of images in estimating superconducting levitation

    Science.gov (United States)

    Perez-Diaz, Jose Luis; Garcia-Prada, Juan Carlos

    2007-12-01

    Among different papers devoted to superconducting levitation of a permanent magnet over a superconductor using the method of images, there is a discrepancy of a factor of two when estimating the lift force. This is not a minor matter but an interesting fundamental question that contributes to understanding the physical phenomena of "imaging" on a superconductor surface. We solve it, make clear the physical behavior underlying it, and suggest the reinterpretation of some previous experiments.

  7. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP, which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
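The restoration step common to scattering-model dehazing methods inverts the classical model I = J·t + A·(1 − t) for the scene radiance J once the atmospheric light A and transmission t have been estimated. The sketch below shows that standard inversion, not the paper's improved model or its average saturation prior; the lower bound `t_min` is an assumption used to avoid amplifying noise in dense haze:

```python
import numpy as np

def recover_scene(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I: hazy image (H, W, 3) in [0, 1]; A: atmospheric light (3,);
    t: per-pixel transmission map (H, W), clipped to t_min.
    """
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```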

  8. Robust image registration using adaptive coherent point drift method

    Science.gov (United States)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of point sets, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method can significantly improve matching performance.
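The GMM core of CPD can be sketched as follows. This is the standard E-step with a uniform outlier component of fixed weight `w` (the weight that the proposed adaptive method instead estimates automatically); it is illustrative only, not the paper's algorithm:

```python
import numpy as np

def cpd_responsibilities(X, Y, sigma2, w):
    """E-step of the CPD Gaussian mixture: posterior probability P[m, n] that
    data point x_n was generated by centroid y_m, with a uniform outlier
    component of weight w."""
    N, D = X.shape
    M = Y.shape[0]
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)   # (M, N) squared distances
    num = np.exp(-d2 / (2 * sigma2))
    # Constant contributed by the uniform outlier term (standard CPD form).
    c = (2 * np.pi * sigma2) ** (D / 2) * (w / (1 - w)) * (M / N)
    return num / (num.sum(axis=0, keepdims=True) + c)
```

The M-step would then re-estimate the transformation and variance from these responsibilities.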

  9. An adaptive image denoising method based on local parameters optimization

    Indian Academy of Sciences (India)

    Hari Om; Mantosh Biswas

    2014-08-01

In image denoising algorithms, noise is handled either term-by-term, i.e., on individual pixels, or block-by-block, i.e., on groups of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and some other characteristics of the neighbouring pixels of the pixel to be thresholded (denoised). The threshold is determined in terms of the noise variance present in the image and the image size. The VisuShrink, SureShrink, and NeighShrink methods are important denoising methods that provide good results. The first two, VisuShrink and SureShrink, follow the term-by-term approach, i.e., they modify individual pixels, while NeighShrink and its variants (ModiNeighShrink, IIDMWD, and IAWDMBMC) follow the block-by-block approach, i.e., they modify pixels in groups. The VisuShrink, SureShrink, and NeighShrink methods, however, do not give very good visual quality because they remove too many coefficients due to their high threshold values. In this paper, we propose an image denoising method that uses the local parameters of the neighbouring coefficients of the pixel to be denoised in the noisy image. We propose two new shrinkage factors and a threshold at each decomposition level, which lead to better visual quality, and we establish the relationship between the two shrinkage factors. We compare the performance of our method with that of VisuShrink and NeighShrink, including various variants. Simulation results show that our proposed method yields a high peak signal-to-noise ratio and good visual quality compared to the traditional methods: the Wiener filter, VisuShrink, SureShrink, NeighBlock, NeighShrink, ModiNeighShrink, LAWML, IIDMWT, and IAWDMBNC.
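For context, the classical term-by-term rule the paper improves upon can be sketched as soft thresholding with the VisuShrink universal threshold λ = σ√(2 ln N). The paper's own shrinkage factors are not reproduced here; this is only the baseline:

```python
import numpy as np

def soft_shrink(coeffs, sigma):
    """Soft-threshold detail coefficients with the VisuShrink universal
    threshold lambda = sigma * sqrt(2 ln N), applied term by term."""
    lam = sigma * np.sqrt(2.0 * np.log(coeffs.size))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
```

Because λ grows with N, this rule kills many coefficients, which is exactly the over-smoothing the abstract criticizes.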

  10. Hybrid Method for 3D Segmentation of Magnetic Resonance Images

    Institute of Scientific and Technical Information of China (English)

    ZHANGXiang; ZHANGDazhi; TIANJinwen; LIUJian

    2003-01-01

Segmentation of complex images, especially magnetic resonance brain images, often yields unsatisfactory results when only a single segmentation approach is used; integrating several techniques seems to be the best solution. In this paper, a new hybrid method for 3-dimensional segmentation of the whole brain is introduced, based on fuzzy region growing, edge detection and mathematical morphology. The gray-level threshold controlling the region-growing process is determined by a fuzzy technique. The image gradient feature is obtained by the 3-dimensional Sobel operator, considering a 3×3×3 data block with the voxel to be evaluated at the center, while the gradient magnitude threshold is defined by the gradient magnitude histogram of the brain magnetic resonance volume. By the combined methods of edge detection and region growing, the white matter volume of the human brain is segmented accurately. Post-processing using mathematical morphological techniques then yields the whole brain region. To investigate the validity of the hybrid method, two comparative experiments are carried out: the region growing method using only the gray-level feature, and the thresholding method combining gray-level and gradient features. Experimental results indicate that the proposed method provides much better results than traditional single-technique methods in the 3-dimensional segmentation of human brain magnetic resonance data sets.
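The combination of gray-level region growing with a gradient (edge) stopping criterion can be illustrated in 2-D. This minimal sketch is not the paper's 3-D fuzzy implementation, and the tolerance parameters are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, gray_tol, grad_max):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity is within gray_tol of the seed and whose gradient magnitude
    (central differences) stays below grad_max, so growth stops at edges."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    grad = np.hypot(gy, gx)
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(img[ny, nx] - img[seed]) <= gray_tol
                    and grad[ny, nx] < grad_max):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

The 3-D version simply extends the neighbourhood to 6 (or 26) connected voxels and uses the 3×3×3 Sobel gradient described above.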

  11. Method for the reduction of image content redundancy in large image databases

    Science.gov (United States)

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on the degree of visual similarity between the feature vectors of an incoming image being considered for entry into the database and the feature vectors associated with the most similar of the stored images. Based on this visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
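A minimal sketch of the similarity-then-decide step, assuming cosine similarity over feature vectors and a hypothetical `store_thresh` cut-off (the patent specifies neither): an incoming image that is nearly identical to its closest stored image is deemed redundant and not stored.

```python
import numpy as np

def similarity_and_decision(incoming, stored_feats, store_thresh=0.95):
    """Compare an incoming feature vector against stored ones via cosine
    similarity; return (best similarity, whether to store the newcomer)."""
    A = np.asarray(stored_feats, dtype=float)
    v = np.asarray(incoming, dtype=float)
    sims = A @ v / (np.linalg.norm(A, axis=1) * np.linalg.norm(v))
    best = sims.max()
    return best, bool(best < store_thresh)
```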

  12. Applying the model driven generative domain engineering method to develop self-organizing architectural solutions for mobile robot

    Institute of Scientific and Technical Information of China (English)

    LIANG Hai-hua; ZHU Miao-liang

    2006-01-01

Model driven generative domain engineering (MDGDE) is a domain engineering method aiming to develop optimized, reusable architectures, components and aspects for application engineering. Agents are regarded in MDGDE as special objects that have more autonomy and take more initiative. Design of an agent involves three levels of activity: logical analysis and design, physical analysis, and physical design; this classification corresponds to domain analysis and design, application analysis, and application design. The agent is an important analysis and design tool for MDGDE because it facilitates the development of a complex distributed system: the mobile robot. Following MDGDE, we designed a distributed communication middleware and a set of event-driven agents, which enable the robot to initiate actions adaptively in response to dynamic changes in the environment. This paper describes our approach as well as its motivations and our practice.

  13. On the pinned field image binarization for signature generation in image ownership verification method

    Directory of Open Access Journals (Sweden)

    Chang Hsuan

    2011-01-01

Full Text Available The issue of pinned field image binarization for signature generation in ownership verification of a protected image is investigated. The pinned field captures the texture information of the protected image and can be employed to enhance watermark robustness. In the proposed method, four optimization schemes are utilized to determine the threshold values for transforming the pinned field into a binary feature image, which is then used to generate an effective signature image. Experimental results show that the use of optimization schemes can significantly improve the signature robustness over the previous method (Lee and Chang, Opt. Eng. 49(9), 097005, 2010). Considering both the watermark retrieval rate and computation speed, the genetic algorithm is strongly recommended. In addition, compared with Chang and Lin's scheme (J. Syst. Softw. 81(7), 1118-1129, 2008), the proposed scheme also performs better.

  14. Microenvironment-Driven Bioelimination of Magnetoplasmonic Nanoassemblies and Their Multimodal Imaging-Guided Tumor Photothermal Therapy.

    Science.gov (United States)

    Li, Linlin; Fu, Shiyan; Chen, Chuanfang; Wang, Xuandong; Fu, Changhui; Wang, Shu; Guo, Weibo; Yu, Xin; Zhang, Xiaodi; Liu, Zhirong; Qiu, Jichuan; Liu, Hong

    2016-07-26

Biocompatibility and bioelimination are basic requirements for systemically administered nanomaterials for biomedical purposes. Gold-based plasmonic nanomaterials have shown potential applications in photothermal cancer therapy, but their inability to biodegrade has impeded practical biomedical application. In this study, a bioeliminable magnetoplasmonic nanoassembly (MPNA), assembled from an Fe3O4 nanocluster and a gold nanoshell, was elaborately designed for computed tomography, photoacoustic tomography, and magnetic resonance trimodal imaging-guided tumor photothermal therapy. A single dose of photothermal therapy under near-infrared light induced complete tumor regression in mice. Importantly, the MPNAs could respond to the acidic pH and enzymes of the local microenvironments where they accumulated (including tumors, liver, and spleen), collapse into small molecules and discrete nanoparticles, and finally be cleared from the body. Owing to this bioelimination, the MPNAs showed good biocompatibility even at a high dose of 400 mg kg(-1). The MPNAs for cancer theranostics pave a way toward biodegradable bio-nanomaterials for biomedical applications.

  15. MALDI-mass spectrometric imaging revealing hypoxia-driven lipids and proteins in a breast tumor model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Jiang; Chughtai, Kamila; Purvine, Samuel O.; Bhujwalla, Zaver M.; Raman, Venu; Pasa-Tolic, Ljiljana; Heeren, Ronald M.; Glunde, Kristine

    2015-06-16

Hypoxic areas are a common feature of rapidly growing malignant tumors and their metastases, and are typically spatially heterogeneous. Hypoxia has a strong impact on tumor cell biology and contributes to tumor progression in multiple ways. To date, only a few molecular key players in tumor hypoxia, such as hypoxia-inducible factor-1 (HIF-1), have been discovered. The distribution of biomolecules is frequently heterogeneous in the tumor volume, and may be driven by hypoxia and HIF-1α. Understanding the spatially heterogeneous hypoxic response of tumors is therefore critical. Mass spectrometric imaging (MSI) provides a unique way of imaging biomolecular distributions in tissue sections with high spectral and spatial resolution. In this paper, breast tumor xenografts grown from MDA-MB-231-HRE-tdTomato cells, with a red fluorescent tdTomato protein construct under the control of a hypoxia response element (HRE)-containing promoter driven by HIF-1α, were used to detect the spatial distribution of hypoxic regions. We elucidated the 3D spatial relationship between hypoxic regions and the localization of small molecules, metabolites, lipids, and proteins by using principal component analysis - linear discriminant analysis (PCA-LDA) on 3D rendered MSI volume data from MDA-MB-231-HRE-tdTomato breast tumor xenografts. In this study we identified hypoxia-regulated proteins active in several distinct pathways such as glucose metabolism, regulation of the actin cytoskeleton, protein folding, translation/ribosome, the spliceosome, the PI3K-Akt signaling pathway, hemoglobin chaperone, protein processing in the endoplasmic reticulum, detoxification of reactive oxygen species, aurora B signaling/apoptotic execution phase, the RAS signaling pathway, the FAS signaling pathway/caspase cascade in apoptosis, and telomere stress induced senescence. In parallel, we also identified co-localization of hypoxic regions and various lipid species such as PC(16:0/18:1), PC(16:0/18:2), PC(18:0/18:1), PC

  16. Fast method of constructing image correlations to build a free network based on image multivocabulary trees

    Science.gov (United States)

    Zhan, Zongqian; Wang, Xin; Wei, Minglu

    2015-05-01

    In image-based three-dimensional (3-D) reconstruction, one topic of growing importance is how to quickly obtain a 3-D model from a large number of images. The retrieval of the correct and relevant images for the model poses a considerable technological challenge. The "image vocabulary tree" has been proposed as a method to search for similar images. However, a significant drawback of this approach is identified in its low time efficiency and barely satisfactory classification result. The method proposed is inspired by, and improves upon, some recent methods. Specifically, vocabulary quality is considered and multivocabulary trees are designed to improve the classification result. A marked improvement was, indeed, observed in our evaluation of the proposed method. To improve time efficiency, graphics processing unit (GPU) computer unified device architecture parallel computation is applied in the multivocabulary trees. The results of the experiments showed that the GPU was three to four times more efficient than the enumeration matching and CPU methods when the number of images is large. This paper presents a reliable reference method for the rapid construction of a free network to be used for the computing of 3-D information.

  17. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    NARCIS (Netherlands)

    Broggini, F.; Wapenaar, C.P.A.; Van der Neut, J.R.; Snieder, R.

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the dir

  18. Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods

    Directory of Open Access Journals (Sweden)

    Silvia Corchs

    2010-01-01

Full Text Available The underwater image processing area has received considerable attention within the last decades, showing important achievements. In this paper we review some of the most recent methods that have been specifically developed for the underwater environment. These techniques are capable of extending the range of underwater imaging and improving image contrast and resolution. After considering the basic physics of light propagation in the water medium, we focus on the different algorithms available in the literature. The conditions for which each of them has been originally developed are highlighted, as well as the quality assessment methods used to evaluate their performance.

  19. A hybrid method for pancreas extraction from CT image based on level set methods.

    Science.gov (United States)

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into tissues neighbouring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the level set method's sensitivity to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and less false segmentation in pancreas extraction.

  20. Splitting methods in communication, imaging, science, and engineering

    CERN Document Server

    Osher, Stanley; Yin, Wotao

    2016-01-01

This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. Therefore, the book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas.

  1. Automated computational aberration correction method for broadband interferometric imaging techniques.

    Science.gov (United States)

    Pande, Paritosh; Liu, Yuan-Zhi; South, Fredrick A; Boppart, Stephen A

    2016-07-15

    Numerical correction of optical aberrations provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics techniques. In this Letter, we present an automated computational aberration correction method for broadband interferometric imaging techniques. In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The method is validated on both simulated data and experimental data obtained from a tissue phantom, an ex vivo tissue sample, and an in vivo photoreceptor layer of the human retina.
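The filtering model at the heart of the method, multiplying the image spectrum by a pupil phase and scoring the result with a sharpness metric, can be sketched as follows. The Zernike parameterization and the optimizer are omitted, and the normalised intensity-squared metric below is one common choice of sharpness metric, assumed here rather than taken from the paper:

```python
import numpy as np

def apply_phase_filter(img, phase):
    """Filter a complex image field with a (centered) phase map in radians:
    the image is multiplied by exp(-i*phase) in the Fourier domain."""
    F = np.fft.fft2(img)
    return np.fft.ifft2(F * np.exp(-1j * np.fft.fftshift(phase)))

def sharpness(img):
    """Normalised intensity-squared sharpness metric to be maximised."""
    I = np.abs(img) ** 2
    return np.sum(I ** 2) / np.sum(I) ** 2
```

An optimizer would adjust the Zernike coefficients defining `phase` until `sharpness` of the filtered image is maximal.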

  2. Images of accretion discs. 1. The eclipse mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Horne, K.

    1985-03-01

A method of mapping the surface brightness distributions of accretion discs in eclipsing cataclysmic binaries is described and tested with synthetic eclipse data. Accurate synthetic light curves are computed by numerical simulation of the accretion disc eclipse, and images of the disc are reconstructed by maximum entropy methods. The conventional definition of entropy leads to a distorted image of the disc. A modified form of entropy, sensitive to the azimuthal structure of the image but not to its radial profile, suppresses azimuthal structure but correctly recovers the radial structure of the accretion disc. This eclipse mapping method permits powerful tests of accretion disc theory by deriving the spatial structure of discs from observational data with a minimum of model-dependent assumptions.
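The modified entropy is plausibly the relative form standard in maximum-entropy reconstruction, with the default image taken as the azimuthal average of the current map so that radial structure goes unpenalized; the exact definition used in the paper may differ:

```latex
S = -\sum_{i} p_i \ln\frac{p_i}{q_i},
\qquad p_i = \frac{f_i}{\sum_j f_j},
\qquad q_i = \bigl\langle p \bigr\rangle_{\text{azimuth at radius } r_i}
```

Maximizing $S$ subject to fitting the eclipse light curve then drives each pixel toward its azimuthal mean, suppressing azimuthal structure while leaving the radial profile free.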

  3. Domain decomposition methods for solving an image problem

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)

    1994-12-31

The domain decomposition method is a technique for breaking up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations and are therefore particularly useful for solving such equations. In this paper, the authors apply the so-called covering preconditioner, which is based on information about the operator under investigation and is thus suited to various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: extracting an original image that has been degraded by a known convolution process and additive Gaussian noise.
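For reference, the unpreconditioned conjugate gradient iteration that such preconditioners accelerate looks like this. It is a generic sketch, not the covering preconditioner itself; in the restoration setting `apply_A` would be the symmetric positive-definite operator of the normal equations:

```python
import numpy as np

def conjugate_gradient(apply_A, b, x0=None, tol=1e-8, max_iter=500):
    """Plain conjugate gradients for an S.P.D. operator apply_A. A
    preconditioner would rescale the residual r each iteration."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)          # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate search direction update
        rs = rs_new
    return x
```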

  4. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To deal with the difficulty for target outlines extracting precisely due to neglect of target scattering characteristic variation during the processing of high-resolution space-borne SAR data, a novel fusion imaging method is proposed oriented to target feature extraction. Firstly, several important aspects that affect target feature extraction and SAR image quality are analyzed, including curved orbit, stop-and-go approximation, atmospheric delay, and high-order residual phase error. Furthermore, the corresponding compensation methods are addressed as well. Based on the analysis, the mathematical model of SAR echo combined with target space-time spectrum is established for explaining the space-time-frequency change rule of target scattering characteristic. Moreover, a fusion imaging strategy and method under high-resolution and ultra-large observation angle range conditions are put forward to improve SAR quality by fusion processing in range-doppler and image domain. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  5. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

Seismic diffractions carry valuable information from subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on imaging conditions. In fact, diffractors occupy only a small portion of an imaging model and possess discontinuous characteristics; in mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requires a least-squares error between modeled and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method improves the focusing of diffractions and reduces migration artifacts.
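A reweighted L2-L1 model of this kind can be minimised with a simple iterative soft-thresholding loop. The sketch below uses an IRL1-style weight update as a stand-in for the paper's adaptive reweighting homotopy algorithm, whose details are not reproduced; `lam`, `iters`, and `eps` are assumed parameters:

```python
import numpy as np

def reweighted_ista(A, b, lam, iters=200, eps=1e-3):
    """Minimise ||A x - b||_2^2 + lam * sum_i w_i |x_i| by iterative
    soft-thresholding, with weights w_i = 1 / (|x_i| + eps) refreshed
    each iteration to sharpen sparsity (an IRL1-flavoured reweighting)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    w = np.ones_like(x)
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the data-fit term
        z = x - g / L
        t = lam * w / L                    # per-coefficient threshold
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        w = 1.0 / (np.abs(x) + eps)        # reweighting step
    return x
```

Small coefficients get ever-larger weights and are driven to zero, while large (diffractor-like) coefficients are barely penalised, which is the intent of reweighted sparsity.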

  6. Ultrasonic wavefield imaging: Research tool or emerging NDE method?

    Science.gov (United States)

    Michaels, Jennifer E.

    2017-02-01

    Ultrasonic wavefield imaging refers to acquiring full waveform data over a region of interest for waves generated by a stationary source. Although various implementations of wavefield imaging have existed for many years, the widespread availability of laser Doppler vibrometers that can acquire signals in the high kHz and low MHz range has resulted in a rapid expansion of fundamental research utilizing full wavefield data. In addition, inspection methods based upon wavefield imaging have been proposed for standalone nondestructive evaluation (NDE) with most of these methods coming from the structural health monitoring (SHM) community and based upon guided waves. If transducers are already embedded in or mounted on the structure as part of an SHM system, then a wavefield-based inspection can potentially take place with very little required disassembly. A frequently-proposed paradigm for wavefield NDE is its application as a follow-up inspection method using embedded SHM transducers as guided wave sources if the in situ SHM system generates an alarm. Discussed here is the broad role of wavefield imaging as it relates to ultrasonic NDE, both as a research tool and as an emerging NDE method. Examples of current research are presented based upon both guided and bulk wavefield imaging in metals and composites, drawing primarily from the author's work. Progress towards wavefield NDE is discussed in the context of defect detection and characterization capabilities, scan times, data quality, and required data analysis. Recent research efforts are summarized that can potentially enable wavefield NDE.

  7. Multi-view horizon-driven sea plane estimation for stereo wave imaging on moving vessels

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2016-10-01

    In the last few years, stereo imaging has become increasingly popular as an effective tool to investigate wind-generated sea waves at short and medium scales. Given the advances of computer vision techniques, the recovery of a scattered point cloud from a sea surface area is nowadays a well-consolidated technique producing excellent results both in terms of wave data resolution and accuracy. Nevertheless, almost all the subsequent analysis tasks, from the recovery of directional wave spectra to the estimation of significant wave height, are bound to two limiting conditions. First, wave data are required to be aligned to the mean sea plane. Second, a uniform distribution of 3D point samples is assumed. Since the stereo-camera rig is tilted with respect to the sea surface, perspective distortion does not allow these conditions to be met. Errors due to this problem are even more challenging if the optical instrumentation is mounted on a moving vessel, so that the mean sea plane cannot simply be obtained by averaging data from multiple subsequent frames. We address the first problem with two main contributions. First, we propose a novel horizon estimation technique to recover the attitude of a moving stereo rig with respect to the sea plane. Second, an effective weighting scheme is described to account for the non-uniform sampling of the scattered data in the estimation of the sea-plane distance. The interplay of the two allows us to provide a precise point cloud alignment without any external positioning sensor or rig viewpoint pre-calibration. The advantages of the proposed technique are evaluated throughout an experimental section spanning both synthetic and real-world scenarios.
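
    The mean-sea-plane estimation with a weighting scheme can be sketched as a weighted least-squares plane fit to the scattered point cloud. The weighting function below is a hypothetical stand-in for the authors' non-uniform-sampling compensation, not their actual scheme.

```python
import numpy as np

def fit_sea_plane(points, weights):
    """Weighted least-squares fit of the plane z = a*x + b*y + c to an
    (N, 3) point cloud; weights can down-weight densely sampled regions."""
    x, y, z = points.T
    A = np.column_stack([x, y, np.ones_like(x)])
    W = weights / weights.sum()
    AtW = A.T * W                       # apply weights to the design matrix
    a, b, c = np.linalg.solve(AtW @ A, AtW @ z)   # weighted normal equations
    return a, b, c

rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(500, 2))
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 2.0 + 0.01 * rng.standard_normal(500)
cloud = np.column_stack([xy, z])
w = 1.0 / (1.0 + xy[:, 1] ** 2)         # hypothetical density-based weighting
a, b, c = fit_sea_plane(cloud, w)
```

    The recovered plane parameters approximate the tilt and offset of the simulated surface; the residual plane then defines the alignment to apply to the wave data.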

  8. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    Science.gov (United States)

    Skiadopoulos, S.; Karatrantou, A.; Korfiatis, P.; Costaridou, L.; Vassilakos, P.; Apostolopoulos, D.; Panayiotakis, G.

    2009-10-01

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in the degradation of image quality, especially in the case of lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied on projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different-size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from the tissue surrounding the myocardium, and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Under conditions without photon attenuation, denoising by the platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large defects, while for small defects, as well as under photon attenuation conditions, all methods demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR in the case of images reconstructed without attenuation correction, although not statistically significantly (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the
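
    The Butterworth filtering referred to above can be sketched as a frequency-domain low-pass applied to a Poisson-noisy image; the cutoff, order, and phantom below are illustrative, not the study's settings.

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.2, order=5):
    """2-D Butterworth low-pass filter applied in the frequency domain
    (cutoff in cycles/pixel)."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)                  # radial spatial frequency
    H = 1.0 / (1.0 + (r / cutoff) ** (2 * order))   # Butterworth transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[24:40, 24:40] = 100.0                         # toy "hot" region
noisy = rng.poisson(clean + 10).astype(float)       # Poisson counting noise
smoothed = butterworth_lowpass(noisy, cutoff=0.15, order=4)
```

    Lowering the cutoff suppresses more Poisson noise at the cost of resolution, which is exactly the trade-off the evaluation metrics (noise, contrast, SNR, CNR) quantify.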

  9. RGB imaging volumes alignment method for color holographic displays

    Science.gov (United States)

    Zaperty, Weronika; Kozacki, Tomasz; Gierwiało, Radosław; Kujawińska, Małgorzata

    2016-09-01

    Recent advances in holographic displays include increased interest in multiplexing techniques, which allow for extension of the viewing angle, an increase of hologram resolution, or color imaging. In each of these situations, the image is obtained by a composition of several light wavefronts, and therefore some wavefront misalignment occurs. In this work we present a calibration method that allows for correction of these misalignments by suitable numerical manipulation of the holographic data. For this purpose, we have developed an automated procedure based on measuring the positions of the reconstruction of a synthetic hologram of a target object focused at two different reconstruction distances. In view of the relatively long reconstruction distances in holographic displays, we focus on angular deviations of the light beams, which result in a noticeable mutual lateral shift and inclination of the component images in space. The method proposed in this work is implemented in a color holographic display unit (single Spatial Light Modulator - SLM) utilizing the Space-Division Method (SDM). In this technique, also referred to as the Aperture Field Division (AFD) method, a significant wavefront inclination is introduced by a color filter glass mosaic plate (mask) placed in front of the SLM. It is verified that the accuracy of the calibration method, obtained for a reconstruction distance of 700 mm, is 34.5 μm for the lateral shift and 0.02° for the angular compensation. In the final experiment the presented method is verified through color image reconstruction of a real-world object.

  10. Data-driven sampling method for building 3D anatomical models from serial histology

    Science.gov (United States)

    Salunke, Snehal Ulhas; Ablove, Tova; Danforth, Theresa; Tomaszewski, John; Doyle, Scott

    2017-03-01

    In this work, we investigate the effect of slice sampling on 3D models of tissue architecture using serial histopathology. We present a method for using a single fully-sectioned tissue block as pilot data, whereby we build a fully-realized 3D model and then determine the optimal set of slices needed to reconstruct the salient features of the model objects under biological investigation. In our work, we are interested in the 3D reconstruction of microvessel architecture in the trigone region between the vagina and the bladder. This region serves as a potential avenue for drug delivery to treat bladder infection. We collect and co-register 23 serial sections of CD31-stained tissue images (6 μm thick sections), from which four microvessels are selected for analysis. To build each model, we perform semi-automatic segmentation of the microvessels. Subsampled meshes are then created by removing slices from the stack, interpolating the missing data, and reconstructing the mesh. We calculate the Hausdorff distance between the full and subsampled meshes to determine the optimal sampling rate for the modeled structures. In our application, we found that a sampling rate of 50% (corresponding to just 12 slices) was sufficient to recreate the structure of the microvessels without significant deviation from the fully-rendered mesh. This pipeline effectively minimizes the number of histopathology slides required for 3D model reconstruction, and can be utilized to either (1) reduce the overall costs of a project, or (2) enable additional analysis on the intermediate slides.
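
    The mesh-deviation criterion can be sketched with a brute-force symmetric Hausdorff distance between a fully sampled curve and its 50%-subsampled version; the helix below is a toy stand-in for a microvessel mesh, not the authors' data.

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two point sets (rows are 3-D
    vertices): the worst-case nearest-neighbour distance in either direction."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Toy stand-in: a densely sampled helix vs. a version keeping every other sample
t = np.linspace(0, 4 * np.pi, 46)
full = np.column_stack([np.cos(t), np.sin(t), t / 4])
sub = full[::2]                                     # 50% sampling rate
deviation = hausdorff(full, sub)
```

    In the pipeline, this deviation would be computed between the fully-rendered mesh and each interpolated subsampled mesh, and the lowest sampling rate whose deviation stays below a tolerance is selected.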

  11. Imaging of complex basin structures with the common reflection surface (CRS) stack method

    Science.gov (United States)

    Menyoli, Elive; Gajewski, Dirk; Hübscher, Christian

    2004-06-01

    Common reflection surface (CRS) stack technology is applied to seismic data from certain areas of the Donbas Foldbelt, Ukraine, after conventional seismic methods gave unsatisfactory results. On the conventionally processed post-stack migrated section the areas of interest already showed clear features of the basin structure, but reflector continuity and image quality were poor. Our objective was to improve the image quality in these areas to better support the geological interpretation and the model building. In contrast to the standard common mid-point (CMP) stack, in which a stacking trajectory is used, the CRS method transforms pre-processed multicoverage data into a zero-offset section by summing along stacking surfaces. The stacking operator is an approximation of the reflection response of a curved interface in an inhomogeneous medium. The primary advantages of the data-driven CRS stack method are its model independence and the enhancement of the signal-to-noise ratio of the stacked sections, achieved by stacking the reflection response along traces from more than one CMP gather. The presented results show that the multifold strength of the CRS stack is of particular advantage in the case of the complex inverted features of the Devonian-Carboniferous sediments in the Donbas Foldbelt data. We observe that in areas where the confidence level for picking and interpreting the stacking velocity model is low, imaging without a macrovelocity model gives improved results, because errors due to wrong or poor stacking velocity models are avoided.

  12. Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm

    Science.gov (United States)

    Moumen, Abdelkader; Sissaoui, Hocine

    2017-03-01

    Vulnerability of the communication of digital images is an extremely important issue nowadays, particularly when images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key must be shared before the encryption process. It is formulated based on symmetric encryption, asymmetric encryption and steganography theories. The image is encrypted using a symmetric algorithm; then the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least-significant-bit steganographic scheme. The analysis results show that while enjoying faster computation, our method performs close to optimal in terms of accuracy.
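
    The least-significant-bit embedding step can be sketched as follows; the AES/RSA layers are omitted, and the payload here is an arbitrary illustrative byte string rather than a real encrypted key.

```python
import numpy as np

def lsb_embed(cover, payload_bits):
    """Hide a bit stream in the least-significant bits of a uint8 cover image."""
    flat = cover.flatten()                          # flatten() returns a copy
    assert len(payload_bits) <= flat.size, "payload too large for cover"
    flat[:len(payload_bits)] = (flat[:len(payload_bits)] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read back the first n_bits least-significant bits."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
secret = np.frombuffer(b"key!", dtype=np.uint8)     # illustrative 4-byte payload
bits = np.unpackbits(secret)                        # 32 payload bits
stego = lsb_embed(cover, bits)
recovered = np.packbits(lsb_extract(stego, bits.size)).tobytes()
```

    Because only the least-significant bit of each carrier pixel changes, the stego image differs from the cover by at most one gray level per pixel, which is what makes the hidden key visually imperceptible.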

  13. Ortho Image and DTM Generation with Intelligent Methods

    Science.gov (United States)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Nowadays, artificial intelligence algorithms are considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used for optimizing image-processing tasks such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for orthophoto generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 with rational functions and 2D & 3D polynomials was tested. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimization of rational functions and 2D & 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixel sizes, and the accuracy (RMSE) with the GA algorithm and the rational function method for the Worldview-2 image was 0.930 pixel sizes. As a further artificial intelligence optimization method, neural networks were used. With the use of a perceptron network on the Worldview-2 image, a result of 0.84 pixel sizes was obtained with 4 neurons in the middle layer. The conclusion was that with artificial intelligence algorithms it is possible to optimize the existing models and obtain better results than with the usual ones. Finally, the artificial intelligence methods, like genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and for generating Digital Terrain Models. The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks for interpolating and optimizing the weighting methods based on inverse

  14. A NEW LBG-BASED IMAGE COMPRESSION METHOD USING DCT

    Institute of Scientific and Technical Information of China (English)

    Jiang Lai; Huang Cailing; Liao Huilian; Ji Zhen

    2006-01-01

    In this letter, a new Linde-Buzo-Gray (LBG)-based image compression method using the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) is proposed. A gray-level image is first decomposed into blocks, and each block is subsequently encoded by a 2D DCT coding scheme. The dimension of the vectors used as input to the generalized VQ scheme is thereby reduced, and the encoding time of the generalized VQ is reduced with the introduction of the DCT step. The experimental results demonstrate the efficiency of the proposed method.
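
    The per-block DCT step can be sketched as follows, with an orthonormal 2D DCT and truncation to a low-frequency corner standing in for the dimension reduction before VQ; the block and corner sizes are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Orthonormal 2-D DCT of an image block (separable row/column 1-D DCTs)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse orthonormal 2-D DCT."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(8, 8)).astype(float)   # one 8x8 image block
C = dct2(img)
# Keeping only the low-frequency 4x4 corner shortens the VQ input vectors
C_kept = np.zeros_like(C)
C_kept[:4, :4] = C[:4, :4]
approx = idct2(C_kept)
```

    For natural-image blocks, energy compaction concentrates most of the signal in that low-frequency corner, so the 16-coefficient vectors fed to the VQ lose little information compared with the raw 64-pixel blocks.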

  15. Shape determination of unidimensional objects: the virtual image correlation method

    OpenAIRE

    Auradou H.; Vatteville J.; Semin B.; Francois M.

    2010-01-01

    The proposed method, named Virtual Image Correlation, allows one to identify an analytical expression of the shape of a curvilinear object from its image. It uses a virtual beam, whose curvature field is expressed as a truncated mathematical series. The virtual beam width only needs to be close to the physical one; its gray level (in the transverse direction) is bell-shaped. The method consists in finding the coefficients of the series for which the correlation between physical and virtual ...

  16. Studying depression using imaging and machine learning methods.

    Science.gov (United States)

    Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J

    2016-01-01

    Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.

  17. Improved image fusion method based on NSCT and accelerated NMF.

    Science.gov (United States)

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments show that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.

  18. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    Directory of Open Access Journals (Sweden)

    Mingdong Li

    2012-05-01

    Full Text Available In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments show that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.
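
    The NMF factorization underlying both records (17 and 18) can be sketched with the standard Lee-Seung multiplicative updates for W and H; the accelerated (ANMF) reorganization of these updates is not reproduced here, and the rank and iteration count are illustrative.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factor a non-negative matrix V ≈ W H (Frobenius loss) using the
    classic multiplicative update rules; eps guards against division by zero."""
    rng = np.random.default_rng(5)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

rng = np.random.default_rng(6)
W0, H0 = rng.random((20, 5)), rng.random((5, 30))
V = W0 @ H0                                    # exactly rank-5, non-negative
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The multiplicative form keeps W and H non-negative at every step; the accelerated variant in the paper reportedly reaches a comparable factorization in fewer iterations.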

  19. A New Method for Medical Image Clustering Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Akbar Shahrzad Khashandarag

    2013-01-01

    Full Text Available Segmentation is applied to medical images when image brightness weakens, making it difficult to distinguish tissue borders. Exact segmentation of medical images is thus an essential process in recognizing and treating an illness, and the purpose of clustering in medical images is the recognition of damaged areas of tissue. Different clustering techniques have been introduced in fields such as engineering, medicine and data mining. However, no standard clustering technique yields ideal results for all imaging applications. In this paper, a new method combining a genetic algorithm and the k-means algorithm is presented for clustering medical images. In this combined technique, a variable-string-length genetic algorithm (VGA) is used to determine the optimal cluster centers. The proposed algorithm has been compared with the k-means clustering algorithm. The advantage of the proposed method is its accuracy in selecting the optimal cluster centers compared with the above-mentioned technique.
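
    The k-means baseline the method is compared against can be sketched on synthetic 1-D intensities; the variable-string-length GA search for the cluster centers is not reproduced, and the quantile initialization below is an assumption of this sketch.

```python
import numpy as np

def kmeans(x, k, n_iter=50):
    """Plain k-means on 1-D intensities with quantile-spread initial centers."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        # assign each pixel to its nearest center, then move centers to cluster means
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

rng = np.random.default_rng(7)
# Synthetic intensities: three Gaussian clusters standing in for tissue classes
pixels = np.concatenate([rng.normal(40, 3, 300),
                         rng.normal(120, 4, 300),
                         rng.normal(200, 3, 300)])
centers, labels = kmeans(pixels, k=3)
```

    In the paper's combined method, the GA searches over both the number and the positions of such centers, which avoids the sensitivity of plain k-means to its initialization.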

  20. Optical image cryptosystem using chaotic phase-amplitude masks encoding and least-data-driven decryption by compressive sensing

    Science.gov (United States)

    Lang, Jun; Zhang, Jing

    2015-03-01

    In our proposed optical image cryptosystem, two pairs of phase-amplitude masks are generated from the chaotic web map for image encryption in the 4f double random phase-amplitude encoding (DRPAE) system. Instead of transmitting the real keys and the enormous mask codes, only a few observed measurements intermittently chosen from the masks are delivered. Based on the compressive sensing paradigm, we suitably refine the series expansions of the web map equations to better reconstruct the underlying system. The parameters of the chaotic equations can be calculated from the observed measurements and then used to regenerate the correct random phase-amplitude masks for decrypting the encoded information. Numerical simulations have been performed to verify the proposed optical image cryptosystem. This cryptosystem provides a new key management and distribution method; it has the advantages of a sufficiently low occupation of the transmitted key codes and improved security of information transmission without sending the real keys.
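
    The 4f double random phase encoding at the core of the cryptosystem can be sketched with FFTs. The amplitude masks of the phase-amplitude scheme and the chaotic web-map generation of the masks are omitted; the masks below are plain uniform random phases, so this is a simplified illustration, not the paper's DRPAE system.

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Double random phase encoding: random phase in the input plane,
    another random phase in the Fourier plane of a simulated 4f system."""
    u = np.fft.fft2(img * np.exp(2j * np.pi * phase1))
    return np.fft.ifft2(u * np.exp(2j * np.pi * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    """Invert the encoding by conjugating both phase masks in reverse order."""
    u = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    return np.fft.ifft2(u) * np.exp(-2j * np.pi * phase1)

rng = np.random.default_rng(8)
img = rng.random((32, 32))
p1, p2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(img, p1, p2)
restored = np.real(drpe_decrypt(cipher, p1, p2))
```

    Only a receiver holding (or, as in the paper, regenerating from a few compressive measurements) the correct masks can undo the encoding; with wrong masks the output remains noise-like.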