WorldWideScience

Sample records for automatic image processing

  1. An Automatic Number Plate Recognition System under Image Processing

    OpenAIRE

    Sarbjit Kaur

    2016-01-01

    Automatic Number Plate Recognition (ANPR) is an application of computer vision and image processing technology that takes a photograph of a vehicle as an input image and, by extracting the number plate from the whole vehicle image, displays the number plate information as text. The ANPR system consists of four phases: acquisition of the vehicle image and pre-processing, extraction of the number plate area, character segmentation, and character recognition. The overall accuracy and efficiency of the whol...

  2. Automatic quantification of crack patterns by image processing

    Science.gov (United States)

    Liu, Chun; Tang, Chao-Sheng; Shi, Bin; Suo, Wen-Bin

    2013-08-01

    Image processing technologies are proposed to quantify crack patterns. On the basis of the technologies, a software "Crack Image Analysis System" (CIAS) has been developed. An image of soil crack network is used as an example to illustrate the image processing technologies and the operations of the CIAS. The quantification of the crack image involves the following three steps: image segmentation, crack identification and measurement. First, the image is converted to a binary image using a cluster analysis method; noise in the binary image is removed; and crack spaces are fused. Then, the medial axis of the crack network is extracted from the binary image, with which nodes and crack segments can be identified. Finally, various geometric parameters of the crack network can be calculated automatically, such as node number, crack number, clod area, clod perimeter, crack area, width, length, and direction. The thresholds used in the operations are specified by cluster analysis and other innovative methods. As a result, the objects (nodes, cracks and clods) in the crack network can be quantified automatically. The software may be used to study the generation and development of soil crack patterns and rock fractures.
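
    The segmentation step above can be sketched with a simple between-class-variance (Otsu-style) threshold. This is only an illustrative stand-in for the cluster-analysis binarization used by the CIAS, and the synthetic crack image below is invented for the demo:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance -- one
    common cluster-analysis style binarization for crack images."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * prob[:t]).sum() / w0
        m1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# synthetic "soil crack" image: dark cracks (~20) on a bright background (~200)
img = np.full((64, 64), 200, dtype=np.uint8)
img[30:34, :] = 20          # a horizontal crack
img[:, 10:12] = 20          # a vertical crack
t = otsu_threshold(img)
cracks = img < t            # binary crack mask
```

Node and segment identification would follow from skeletonizing this mask, which is beyond this sketch.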

  3. Automatic Denoising and Unmixing in Hyperspectral Image Processing

    Science.gov (United States)

    Peng, Honghong

    This thesis addresses two important aspects of hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing of remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long-wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most actively researched processing steps in this automation effort are hyperspectral image denoising, an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis also introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing the Stein
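
    A minimal single-band bilateral filter illustrates the principle the thesis builds on, with weights that decay with both spatial distance and intensity difference. The optimized vector-valued filter of the thesis is considerably more involved, and the parameters below are arbitrary placeholders, not the optimized ones:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Minimal single-band bilateral filter: each output pixel is a
    weighted mean of its neighbourhood, with weights that decay with
    spatial distance (sigma_s) and intensity difference (sigma_r),
    so strong edges are preserved while flat regions are smoothed."""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(window - pad[i + radius, j + radius]) ** 2
                           / (2 * sigma_r ** 2))
            weights = spatial * rng_w
            out[i, j] = (weights * window).sum() / weights.sum()
    return out

# a noisy step edge: smoothing flattens the noise but keeps the edge
step = np.zeros((16, 16))
step[:, 8:] = 100.0
noisy = step + np.random.default_rng(1).normal(0.0, 2.0, step.shape)
smoothed = bilateral_filter(noisy)
```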

  4. An Automatic Number Plate Recognition System under Image Processing

    Directory of Open Access Journals (Sweden)

    Sarbjit Kaur

    2016-03-01

    Full Text Available Automatic Number Plate Recognition (ANPR) is an application of computer vision and image processing technology that takes a photograph of a vehicle as an input image and, by extracting the number plate from the whole vehicle image, displays the number plate information as text. The ANPR system consists of four phases: acquisition of the vehicle image and pre-processing, extraction of the number plate area, character segmentation, and character recognition. The overall accuracy and efficiency of the whole ANPR system depend on the number plate extraction phase, since the character segmentation and character recognition phases in turn depend on its output. Further, the accuracy of the number plate extraction phase depends on the quality of the captured vehicle image: the higher the quality of the captured input image, the better the chances of properly extracting the number plate area. Existing ANPR methods work well for dark and bright/light image categories but not for low-contrast, blurred, or noisy images, and detection of the exact number plate area with the existing approaches is unsuccessful even after applying existing filtering and enhancement techniques to such images. Due to wrong extraction of the number plate area, character segmentation and character recognition also fail in this case with the existing methods. To overcome these drawbacks, an efficient ANPR approach is proposed in which the input vehicle image is first pre-processed by iterative bilateral filtering and adaptive histogram equalization, and the number plate is then extracted from the pre-processed vehicle image using morphological operations, image subtraction, image binarization/thresholding, Sobel vertical edge detection, and bounding box analysis. Sometimes the extracted plate area also contains noise, bolts, frames, etc., so the extracted plate area is enhanced using morphological operations to improve the quality of
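
    Of the extraction steps listed, Sobel vertical edge detection is easy to isolate: number-plate characters produce dense vertical strokes, so the column-wise edge response concentrates around the plate. A minimal sketch on a synthetic bright bar, not the proposed pipeline, which combines several more operations:

```python
import numpy as np

def sobel_vertical_edges(img):
    """Sobel horizontal-gradient operator, which responds to the
    vertical strokes that dominate number-plate characters."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    f = img.astype(float)
    h, w = f.shape
    out = np.zeros((h, w))
    pad = np.pad(f, 1, mode='edge')
    for i in range(3):          # correlation with the 3x3 kernel
        for j in range(3):
            out += kx[i, j] * pad[i:i + h, j:j + w]
    return np.abs(out)

# a bright vertical bar on a dark background: edges fire at its sides
img = np.zeros((20, 20))
img[:, 8:12] = 255
edges = sobel_vertical_edges(img)
col_strength = edges.sum(axis=0)   # projection used to localize candidate regions
```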

  5. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon

    2011-01-01

    Full Text Available In the field of noninvasive sensing techniques for civil infrastructures monitoring, this paper addresses the problem of crack detection, in the surface of the French national roads, by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution is about fine-defect detection in pavement surface. The approach is based on a multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol which has been designed for evaluating this difficult task—the road pavement crack detection—is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  6. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    Full Text Available An approach to automatic hoverfly species discrimination based on detection and extraction of vein junctions in the wing venation patterns of insects is presented in the paper. The dataset used in our experiments consists of high-resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using a combination of the well-known HOG (histograms of oriented gradients) and a robust version of the recently proposed CLBP (complete local binary pattern). These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified, they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate four selected hoverfly species with a polynomial-kernel SVM and achieve high classification accuracy.
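
    The HOG part of the junction detector can be reduced to its core: a magnitude-weighted histogram of gradient orientations over a patch. Block normalization and the CLBP features are omitted in this sketch:

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """HOG core for one patch: gradient angles (mod 180 degrees) are
    binned, weighted by gradient magnitude, then normalized."""
    gy, gx = np.gradient(patch.astype(float))     # axis 0 = rows (y), axis 1 = cols (x)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())
    return hist / (hist.sum() + 1e-12)

# a pure horizontal ramp has all its gradient energy at angle 0
ramp = np.tile(np.arange(16, dtype=float), (16, 1))
h = orientation_histogram(ramp)
```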

  7. Automatic recognition of lactating sow behaviors through depth image processing

    Science.gov (United States)

    Manual observation and classification of animal behaviors is laborious, time-consuming, and of limited ability to process large amount of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...

  8. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis.

    Science.gov (United States)

    Sánchez, Clara I; Hornero, Roberto; López, María I; Aboy, Mateo; Poza, Jesús; Abásolo, Daniel

    2008-04-01

    We present an automatic image processing algorithm to detect hard exudates. Automatic detection of hard exudates from retinal images is an important problem since hard exudates are associated with diabetic retinopathy and have been found to be one of the most prevalent earliest signs of retinopathy. The algorithm is based on Fisher's linear discriminant analysis and makes use of colour information to perform the classification of retinal exudates. We prospectively assessed the algorithm performance using a database containing 58 retinal images with variable colour, brightness, and quality. Our proposed algorithm obtained a sensitivity of 88% with a mean number of 4.83+/-4.64 false positives per image using the lesion-based performance evaluation criterion, and achieved an image-based classification accuracy of 100% (sensitivity of 100% and specificity of 100%).
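
    Fisher's linear discriminant, the classifier at the heart of the algorithm, projects feature vectors onto w = Sw^-1 (m1 - m0), the direction that best separates two classes. A sketch on invented 2-D toy features, not the paper's actual colour channels:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's linear discriminant direction: w = Sw^-1 (m1 - m0),
    where Sw is the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

# toy colour-like features: "lesion" pixels brighter in one channel
rng = np.random.default_rng(0)
bg = rng.normal([0.0, 0.0], 0.3, size=(50, 2))   # background class
ex = rng.normal([3.0, 1.0], 0.3, size=(50, 2))   # lesion class
w = fisher_direction(bg, ex)
# midpoint decision threshold on the projected values
threshold = ((bg @ w).mean() + (ex @ w).mean()) / 2
```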

  9. Automatic calculation of tree diameter from stereoscopic image pairs using digital image processing.

    Science.gov (United States)

    Yi, Faliu; Moon, Inkyu

    2012-06-20

    Automatic operations play an important role in societies by saving time and improving efficiency. In this paper, we apply the digital image processing method to the field of lumbering to automatically calculate tree diameters in order to reduce culler work and enable a third party to verify tree diameters. To calculate the cross-sectional diameter of a tree, the image was first segmented by the marker-controlled watershed transform algorithm based on the hue saturation intensity (HSI) color model. Then, the tree diameter was obtained by measuring the area of every isolated region in the segmented image. Finally, the true diameter was calculated by multiplying the diameter computed in the image and the scale, which was derived from the baseline and disparity of correspondence points from stereoscopic image pairs captured by rectified configuration cameras.
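
    The final conversion can be condensed to two formulas: the equivalent-circle diameter from a segmented region's pixel area, and a metres-per-pixel scale of baseline/disparity (valid when the focal length is expressed in pixels). The numbers below are invented for illustration:

```python
import numpy as np

def diameter_from_region(area_px, baseline_m, disparity_px):
    """Equivalent-circle diameter in pixels, converted to metres with
    the rectified-stereo scale baseline/disparity (focal length in px)."""
    d_px = 2.0 * np.sqrt(area_px / np.pi)          # circle of equal area
    metres_per_px = baseline_m / disparity_px      # ground sample distance
    return d_px * metres_per_px

# a segmented cross-section of a 100 px diameter circle,
# baseline 0.2 m, disparity 40 px -> 0.5 m true diameter
d = diameter_from_region(np.pi * 50.0 ** 2, 0.2, 40.0)
```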

  10. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustworthy and robust early failure detection systems. (author)
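
    The texture attributes come from a grey-level co-occurrence matrix (GLCM). A minimal GLCM plus one derived statistic (contrast) looks like this on a toy two-level image; the SOM clustering stage is omitted:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset: P[i, j] is
    the (normalized) frequency of value i occurring next to value j."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    """Contrast statistic: large when co-occurring levels differ a lot."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 1, 1]])
c = contrast(glcm(img, levels=2))
```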

  11. An Automatic Image Processing System for Glaucoma Screening

    Directory of Open Access Journals (Sweden)

    Ahmed Almazroa

    2017-01-01

    Full Text Available Horizontal and vertical cup-to-disc ratios are the most crucial parameters used clinically to detect glaucoma or monitor its progress, and they are manually evaluated from retinal fundus images of the optic nerve head. Due to the scarcity of glaucoma experts and the growing glaucoma population, automatically calculated horizontal and vertical cup-to-disc ratios (HCDR and VCDR, respectively) can be useful for glaucoma screening. We report on two algorithms to calculate the HCDR and VCDR. In the algorithms, level set and inpainting techniques were developed for segmenting the disc, while thresholding using a Type-II fuzzy approach was developed for segmenting the cup. The results from the algorithms were verified against manual markings of images from a dataset of glaucomatous images (retinal fundus images for glaucoma analysis, the RIGA dataset) by six ophthalmologists. The algorithm's accuracy for HCDR and VCDR combined was 74.2%. Only the accuracy of manual markings by one ophthalmologist was higher than the algorithm's accuracy. The algorithm's best agreement was with markings by ophthalmologist number 1, in 230 images (41.8% of the total tested images).
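
    Once cup and disc masks exist, the two ratios are just extent quotients. A sketch assuming binary segmentation masks; the level set, inpainting and fuzzy thresholding stages that produce those masks are the hard part and are not shown:

```python
import numpy as np

def cup_to_disc_ratios(disc_mask, cup_mask):
    """HCDR and VCDR from binary masks, using the horizontal and
    vertical extents (bounding boxes) of cup and disc."""
    def extents(mask):
        ys, xs = np.nonzero(mask)
        return xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
    disc_w, disc_h = extents(disc_mask)
    cup_w, cup_h = extents(cup_mask)
    return cup_w / disc_w, cup_h / disc_h

disc = np.zeros((100, 100), bool)
disc[20:80, 25:75] = True          # disc: 50 px wide, 60 px tall
cup = np.zeros((100, 100), bool)
cup[40:60, 40:60] = True           # cup: 20 x 20 px
hcdr, vcdr = cup_to_disc_ratios(disc, cup)
```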

  12. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. 
This work presents an automated genotyping tool from DNA
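
    The lane segmentation step can be illustrated with a vertical projection profile: dark lanes pull their columns' mean intensity below the background, and contiguous runs of such columns are lanes. This is a simplification; GELect additionally handles lane distortion and band deformity:

```python
import numpy as np

def segment_lanes(gel, dark_fraction=0.5):
    """Lane segmentation by vertical projection: columns whose mean
    intensity falls below a fraction of the brightest value belong to
    a lane; contiguous runs of such columns are individual lanes."""
    profile = gel.mean(axis=0)
    threshold = gel.max() * dark_fraction
    in_lane = profile < threshold
    lanes, start = [], None
    for x, flag in enumerate(in_lane):
        if flag and start is None:
            start = x
        elif not flag and start is not None:
            lanes.append((start, x - 1))
            start = None
    if start is not None:
        lanes.append((start, len(in_lane) - 1))
    return lanes

# synthetic gel: white background, three dark lanes
gel = np.full((60, 90), 255.0)
for x0 in (10, 40, 70):
    gel[:, x0:x0 + 10] = 40.0
lanes = segment_lanes(gel)
```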

  13. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device that captures the image (optical, confocal or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  14. Image processing tool for automatic feature recognition and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.

  15. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    Science.gov (United States)

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
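
    One example of a scanner-performance scalar such a workflow might derive from the daily phantom image is a simple ROI-based SNR. The abstract does not specify which parameters the presented system records, so this is purely illustrative:

```python
import numpy as np

def phantom_snr(image, signal_roi, background_roi):
    """One daily-QA style scalar: mean signal in a phantom ROI divided
    by the standard deviation of a background (air) ROI."""
    sy, sx = signal_roi
    by, bx = background_roi
    return image[sy, sx].mean() / image[by, bx].std()

# toy phantom image: uniform phantom at 100, background row with unit std
img = np.zeros((10, 10))
img[2:6, 2:6] = 100.0
img[8, :] = [1, 3] * 5          # alternating 1/3 -> std exactly 1
snr = phantom_snr(img, (slice(2, 6), slice(2, 6)), (slice(8, 9), slice(None)))
```

Tracking this value day over day, per scanner, is what turns the single phantom image into a stability time series.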

  16. Fast Implementation of Matched Filter Based Automatic Alignment Image Processing

    Energy Technology Data Exchange (ETDEWEB)

    Awwal, A S; Rice, K; Taha, T

    2008-04-02

    Video images of laser beams imprinted with distinguishable features are used for alignment of 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the position of these beams enable the control system to perform the task of alignment. Centroiding is a common approach used for determining the position of beams. However, real-world beam images suffer from intensity fluctuation or other distortions which make such an approach susceptible to higher position measurement variability. Matched filtering used for identifying the beam position results in greater stability of position measurement compared to that obtained using the centroiding technique. However, this gain is achieved at the expense of extra processing time required for each beam image. In this work we explore the possibility of using a field programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement by a factor of 20 using the FPGA relative to a 3 GHz Pentium 4 processor.
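
    The matched-filter position estimate can be sketched as normalized cross-correlation: slide the beam template over the image and keep the offset with the highest normalized score. This software sketch ignores the FPGA implementation, which is the paper's actual contribution, and the beam template below is invented:

```python
import numpy as np

def matched_filter_position(image, template):
    """Locate a feature by normalized cross-correlation: the score is 1
    exactly where the window is proportional to the template, and
    strictly lower everywhere else (Cauchy-Schwarz)."""
    t = template / np.linalg.norm(template)
    th, tw = t.shape
    h, w = image.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(h - th + 1):
        for j in range(w - tw + 1):
            window = image[i:i + th, j:j + tw]
            score = float((window * t).sum()) / (np.linalg.norm(window) + 1e-12)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# a small Gaussian "beam" placed at a known offset is recovered exactly
yy, xx = np.mgrid[0:7, 0:7]
beam = np.exp(-((yy - 3) ** 2 + (xx - 3) ** 2) / 4.0)
image = np.zeros((32, 32))
image[12:19, 9:16] += beam
pos = matched_filter_position(image, beam)
```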

  17. Computed radiographic image post-processing for automatic optimal display in picture archiving and communication system

    Science.gov (United States)

    Zhang, Jianguo; Zhou, Zheng; Zhuang, Jun; Huang, H. K.

    2000-04-01

    This paper presents the key post-processing algorithms, and their software implementation, for automatic optimal display of CR images in a picture archiving and communication system (PACS), compliant with the DICOM model of the image acquisition and presentation chain. With the distributed implementation from acquisition to display, we achieved better image visual quality, fast image communication and display, and data integrity of archived CR images in the PACS.

  18. Automatic image processing solutions for MRI-guided minimally invasive intervention planning

    OpenAIRE

    Noorda, YH

    2016-01-01

    In this thesis, automatic image processing methods are discussed for the purpose of improving treatment planning of MRI-guided minimally invasive interventions. Specifically, the following topics are addressed: rib detection in MRI, liver motion modeling in MRI and MR-CT registration of planning image for HIFU treatment.

  19. Automatic Detection of Steel Ball's Surface Flaws Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    YU Zheng-lin; TAN Wei; YANG Dong-lin; CAO Guo-hua

    2007-01-01

    A new method to detect surface flaws of steel balls is presented, based on computer techniques of image processing and pattern recognition. Surface flaws of steel balls are the primary factor causing bearing failure. The presented method can detect surface flaws of steel balls, including spots, abrasion, burns, scratches, cracks, etc., with high efficiency and precision. The design of the main components of the detection system is described in detail, including the automatic feeding mechanism, the automatic surface-spreading mechanism for the steel balls, the optical system of the microscope, the image acquisition system, and the image processing system. The whole automatic system is controlled by an industrial control computer, which can carry out recognition of steel ball surface flaws effectively.

  20. Automatic image processing solutions for MRI-guided minimally invasive intervention planning

    NARCIS (Netherlands)

    Noorda, YH

    2016-01-01

    In this thesis, automatic image processing methods are discussed for the purpose of improving treatment planning of MRI-guided minimally invasive interventions. Specifically, the following topics are addressed: rib detection in MRI, liver motion modeling in MRI and MR-CT registration of planning ima

  2. Automatic solar feature detection using image processing and pattern recognition techniques

    Science.gov (United States)

    Qu, Ming

    The objective of the research in this dissertation is to develop a software system to automatically detect and characterize solar flares, filaments and Coronal Mass Ejections (CMEs), the core of so-called solar activity. These tools will assist us in predicting space weather caused by violent solar activity. Image processing and pattern recognition techniques are applied in this system. For automatic flare detection, advanced pattern recognition techniques such as Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Support Vector Machine (SVM) are used. By tracking the entire process of flares, the motion properties of two-ribbon flares are derived automatically. For solar filament detection, the Stabilized Inverse Diffusion Equation (SIDE) is used to enhance and sharpen filaments; a new method for automatic threshold selection is proposed to extract filaments from the background; and an SVM classifier with nine input features is used to differentiate between sunspots and filaments. Once a filament is identified, morphological thinning, pruning, and adaptive edge linking methods are applied to determine filament properties. Furthermore, a filament matching method is proposed to detect filament disappearance. The automatic detection and characterization of flares and filaments have been successfully applied to Halpha full-disk images that are continuously obtained at Big Bear Solar Observatory (BBSO). For automatically detecting and classifying CMEs, image enhancement, segmentation, and pattern recognition techniques are applied to Large Angle Spectrometric Coronagraph (LASCO) C2 and C3 images. The processed LASCO and BBSO images are saved to a file archive, and the physical properties of detected solar features, such as intensity and speed, are recorded in our database. Researchers are able to access the solar feature database and analyze the solar data efficiently and effectively. 
The detection and characterization system greatly improves

  3. Image Processing Technique for Automatic Detection of Satellite Streaks

    Science.gov (United States)

    2007-02-01

    ...active satellites and other debris must be monitored. In these cases, the orbital parameters are known, but after a certain time this... sensor artifacts (such as dead pixels, background gradient, noise) and signal degradation (smearing, glare, saturation, etc.)... This study explains how sensor artifacts can be corrected, the image background removed, and the noise partially erased. Then, it

  4. Analysis of Fiber deposition using Automatic Image Processing Method

    Science.gov (United States)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They can penetrate deep into the human lung, deposit there, and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included human airways from the oral cavity up to the seventh generation of branching. After the delivery, deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method, based on the principle of image analysis, was established for deposition data acquisition. The images were captured by a high-definition camera attached to a phase-contrast microscope. Results of the new method were compared with the standard PCM method, which follows NIOSH method 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  5. Analysis of Fiber deposition using Automatic Image Processing Method

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available Fibers are a permanent threat to human health. They can penetrate deep into the human lung, deposit there, and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included human airways from the oral cavity up to the seventh generation of branching. After the delivery, deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method, based on the principle of image analysis, was established for deposition data acquisition. The images were captured by a high-definition camera attached to a phase-contrast microscope. Results of the new method were compared with the standard PCM method, which follows NIOSH method 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  6. Low-complexity PDE-based approach for automatic microarray image processing.

    Science.gov (United States)

    Belean, Bogdan; Terebes, Romulus; Bot, Adrian

    2015-02-01

    Microarray image processing is known as a valuable tool for gene expression estimation, a crucial step in understanding biological processes within living organisms. Automation and reliability are open subjects in microarray image processing, where grid alignment and spot segmentation are essential processes that can influence the quality of gene expression information. The paper proposes a novel partial differential equation (PDE)-based approach for fully automatic grid alignment in microarray images. Our approach can handle image distortions and performs grid alignment using the vertical and horizontal luminance function profiles. These profiles are evolved using a hyperbolic shock filter PDE and then refined using the autocorrelation function. The results are compared with the ones delivered by state-of-the-art approaches for grid alignment in terms of accuracy and computational complexity. Using the same PDE formalism and curve fitting, automatic spot segmentation is achieved and visual results are presented. Considering microarray images with different spot layouts, reliable results in terms of accuracy and reduced computational complexity are achieved, compared with existing software platforms and state-of-the-art methods for microarray image processing.
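
    The autocorrelation refinement step has a compact core: a spot grid's period shows up as the first strong off-zero peak of the luminance profile's autocorrelation. A sketch on a synthetic periodic profile; the shock-filter PDE evolution that produces clean profiles is omitted:

```python
import numpy as np

def profile_period(profile):
    """Grid period of a (horizontal or vertical) luminance profile,
    estimated as the first strong off-zero autocorrelation peak."""
    p = profile - profile.mean()
    ac = np.correlate(p, p, mode='full')[len(p) - 1:]   # lags 0..N-1
    d = np.diff(ac)
    start = int(np.argmax(d > 0))        # end of the zero-lag peak's decline
    return start + int(np.argmax(ac[start:]))

# a profile with spots every 8 pixels
profile = np.zeros(64)
profile[::8] = 1.0
period = profile_period(profile)
```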

  7. FPGA based system for automatic cDNA microarray image processing.

    Science.gov (United States)

    Belean, Bogdan; Borda, Monica; Le Gal, Bertrand; Terebes, Romulus

    2012-07-01

    Automation is an open subject in DNA microarray image processing, aiming at reliable gene expression estimation. The paper presents a novel shock-filter-based approach for automatic microarray grid alignment. The proposed method offers significantly reduced computational complexity compared to state-of-the-art approaches, while similar results in terms of accuracy are achieved. Based on this approach, we also propose an FPGA-based system for microarray image analysis that eliminates the shortcomings of existing software platforms: user intervention, increased computational time, and cost. Our system includes application-specific architectures which involve algorithm parallelization, aiming at fast and automated cDNA microarray image processing. The proposed automated image processing chain is implemented both on a general-purpose processor and using the developed hardware architectures as co-processors in an FPGA-based system. The comparative results included in the last section show that an important gain in terms of computational time is obtained using hardware-based implementations.

  8. Automatic Processing of Chinese GF-1 Wide Field of View Images

    Science.gov (United States)

    Zhang, Y.; Wan, Y.; Wang, B.; Kang, Y.; Xiong, J.

    2015-04-01

    The wide field of view (WFV) imaging instrument carried on the Chinese GF-1 satellite includes four cameras. Each camera has a 200 km swath width; the cameras acquire earth imagery simultaneously, and observations can be repeated within only 4 days. This enables applications of remote sensing imagery to advance from non-scheduled land observation to periodic land monitoring in areas that use images at such resolutions. This paper introduces an automatic data analysis and processing technique for the wide-swath images acquired by the GF-1 satellite. Firstly, the images are validated by a self-adaptive Gaussian mixture model based cloud detection method to confirm whether they are qualified and suitable to enter the automatic processing workflow. Then the ground control points (GCPs) are quickly and automatically matched from public geo-information products such as the rectified panchromatic images of Landsat-8. Before the geometric correction, the cloud detection results are also used to eliminate invalid GCPs distributed in cloud-covered areas, which markedly reduces the ratio of GCP blunders. The geometric correction module not only rectifies the rational function models (RFMs), but also provides the self-calibration model and parameters for the non-linear distortion, and it is iteratively processed to detect blunders. The maximum geometric distortion in a WFV image decreases from about 10-15 pixels to 1-2 pixels when compensated by the self-calibration model. The processing experiments involve hundreds of WFV images of the GF-1 satellite acquired from June to September 2013, covering the whole mainland of China. All the processing work can be finished by one operator within 2 days on a desktop computer with a second-generation Intel Core i7 CPU and a four-solid-state-disk array. The digital ortho maps (DOM) are automatically generated with 3 arc second Shuttle Radar Topography Mission (SRTM) data. The geometric accuracies of the

  9. Automatic Rice Crop Height Measurement Using a Field Server and Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Tanakorn Sritarapipat

    2014-01-01

    Full Text Available Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.

  10. Automatic rice crop height measurement using a field server and digital image processing.

    Science.gov (United States)

    Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit

    2014-01-07

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
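The indirect measurement above can be sketched in a few lines: threshold one image column containing the marker bar, count the visible bar pixels, and convert the occluded portion to centimetres. This is an illustrative simplification (single column, fixed threshold, hypothetical bar length), not the authors' code:

```python
import numpy as np

def visible_bar_height(column, threshold=128):
    """Count thresholded marker-bar pixels in a single image column.
    The bar is assumed brighter than the scene; rice plants in front of
    it hide its lower part, so the visible run shrinks as the crop grows."""
    return int(np.count_nonzero(column >= threshold))

def crop_height(column, initial_column, bar_length_cm=180.0, threshold=128):
    """Indirect crop height: initial visible bar minus currently visible
    bar, converted to centimetres via the (assumed) physical bar length."""
    h0 = visible_bar_height(initial_column, threshold)
    h1 = visible_bar_height(column, threshold)
    cm_per_pixel = bar_length_cm / h0
    return (h0 - h1) * cm_per_pixel

# Toy 100-pixel column: the whole bar is visible initially; later the
# bottom 40 pixels are occluded by the growing crop.
initial = np.full(100, 255)
later = initial.copy()
later[60:] = 30   # crop occludes the lower part of the bar
height_cm = crop_height(later, initial)
```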

  11. BRICORK: an automatic machine with image processing for the production of corks

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Carvalho, Fernando D.; Rodrigues, Fernando C.

    1991-06-01

    The production of cork stoppers from raw cork strip is a manual and labour-intensive process in which a punch-operator quickly inspects all sides of the cork strip for defects and decides where to punch out stoppers. He then positions the strip underneath a rotating tubular cutter and punches out the stoppers one at a time. This procedure is somewhat subjective and prone to error, being dependent on the judgement and accuracy of the operator. This paper describes the machine being developed jointly by Mecanova, Laboratorio Nacional de Engenharia e Tecnologia (LNETI) and Empresa de Investigação e Desenvolvimento de Electrónica SA (EID) which automatically processes cork strip introduced by an unskilled operator. The machine uses both image processing and laser inspection techniques to examine the strip. Defects in the cork are detected and categorised in order to determine regions where stoppers may be punched. The precise locations are then automatically optimised for best usage of the raw material (quantity and quality of stoppers). In order to achieve the required speed of production these image processing techniques may be implemented in hardware. The paper presents results obtained using the vision system software under development together with descriptions of both the image processing and mechanical aspects of the proposed machine.

  12. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.

    Science.gov (United States)

    Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina

    2016-12-01

    Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection method for diabetic retinopathy and maculopathy in eye fundus images employing fuzzy image processing techniques. The paper first reviews existing systems for diabetic retinopathy screening, with an emphasis on maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including localisation of four retinal structures, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for macula region localisation used to detect maculopathy. In addition to the proposed detection system, the paper introduces a novel online dataset and describes its collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.

  13. A Paper on Automatic Fabrics Fault Processing Using Image Processing Technique In MATLAB

    Directory of Open Access Journals (Sweden)

    R.Thilepa

    2011-02-01

    Full Text Available The main objective of this paper is to elaborate how defective fabric parts can be processed using Matlab with image processing techniques. In developing countries like India, and especially in Tamilnadu, the city of Tirupur, the knitwear capital of the country, has over three decades yielded a major income for the country. The city also employs, directly or indirectly, more than 3 lakhs of people and has earned an income of almost 12,000 crores per annum for the country over the past three decades [2]. To upgrade this process, when fabrics are processed in textiles, faults present on the fabrics can be identified using Matlab with image processing techniques. The image processing is done using Matlab 7.3; noise filtering, histogram and thresholding techniques are applied to the acquired image, and the output is presented in this paper. This research thus implements a textile defect detector with a vision system methodology in image processing.
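As an illustration of the histogram-and-thresholding step named above, here is Otsu's classic histogram-based threshold in Python (a stand-in sketch on synthetic data; the paper works in Matlab 7.3 and does not specify Otsu's method):

```python
import numpy as np

def otsu_threshold(image):
    """Return the grey level that maximises between-class variance
    (Otsu's method) for an 8-bit image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    omega = np.cumsum(prob)                 # class-0 probability
    mu = np.cumsum(prob * levels)           # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Synthetic fabric patch: dark weave with a bright defect streak.
rng = np.random.default_rng(1)
fabric = rng.normal(70, 8, (64, 64))
fabric[30:34, :] = rng.normal(190, 8, (4, 64))   # simulated fault line
fabric = np.clip(fabric, 0, 255).astype(np.uint8)
t = otsu_threshold(fabric)
defect_mask = fabric > t
```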

  14. Automatic Estimation of Live Coffee Leaf Infection Based on Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Eric Hitimana

    2014-02-01

    Full Text Available Image segmentation is among the most challenging issues in computer vision applications, and a major difficulty for crop management in agriculture is the lack of appropriate methods for detecting leaf damage for pest treatment. In this paper we propose an automatic method for leaf damage detection and severity estimation of coffee leaves without defoliation. After enhancing the contrast of the original image using LUT-based gamma correction, the image is processed to remove the background, and the segmented leaf is clustered using fuzzy c-means segmentation in the V channel of the YUV color space to maximize detection of all leaf damage. Finally, the severity of the leaf damage is estimated as the ratio of pixel distribution between the normal leaf and the detected leaf damage. The results of each proposed step were compared to current research, and the accuracy is evident in both the background removal and the damage detection.
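The fuzzy c-means clustering step can be sketched as follows. This is a minimal 1-D version run on simulated V-channel values (all data and parameters illustrative, not the authors' implementation):

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means for 1-D data: returns (centers, memberships)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard FCM update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Toy V-channel values: healthy leaf (bright) vs damaged spots (dark).
rng = np.random.default_rng(2)
v = np.concatenate([rng.normal(0.8, 0.03, 300), rng.normal(0.25, 0.03, 60)])
centers, u = fuzzy_cmeans_1d(v)
damaged = np.argmin(centers)            # darker cluster = damage
severity = u[:, damaged].round().mean() # fraction of pixels assigned to damage
```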

  15. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  16. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images

    Directory of Open Access Journals (Sweden)

    Kimori Yoshitaka

    2010-07-01

    Full Text Available Abstract Background A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. Results A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created, and the efficacy of our method was compared to that of conventional morphological filtering methods; the results showed the better performance of our method. Spots in real microscope images were also quantified, confirming that the method is applicable in practice. Conclusions Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
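The background-subtraction idea above rests on the morphological top-hat transform. Below is a hedged sketch of the classic white top-hat with a flat square structuring element (not the paper's rotated line-segment variant) in pure numpy:

```python
import numpy as np

def erode3(img):
    """Grey-scale erosion with a flat 3x3 structuring element:
    minimum over the 8-neighbourhood (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.min(stack, axis=0)

def dilate3(img):
    """Grey-scale dilation with the same 3x3 structuring element."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.max(stack, axis=0)

def white_tophat(img, n=2):
    """Classic white top-hat: image minus its opening.  Spots smaller than
    the (2n+1)x(2n+1) window survive; smooth background is removed."""
    opened = img
    for _ in range(n):
        opened = erode3(opened)
    for _ in range(n):
        opened = dilate3(opened)
    return img - opened

# Synthetic image: sloping background with two small bright spots.
yy, xx = np.mgrid[0:40, 0:40]
img = 0.5 * yy                 # uneven illumination
img[10, 10] += 50.0            # single-pixel spot
img[25, 30] += 80.0
spots = white_tophat(img, n=2)
```

The ramp-shaped background is removed exactly in the interior, leaving only the two spots.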

  17. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

    Machine learning techniques have been useful in almost every field of concern. Data mining, a branch of machine learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are top priorities. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, among the most significant retinal anatomical structures, are analysed for the diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering and first-level classification using the Naïve Bayes classification algorithm, and second-level classification using C4.5 enhanced with bagging techniques. Association of every pixel with the feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with existing methodologies is also reported. This methodology can help ophthalmologists in better and faster analysis and hence earlier treatment of patients.
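Among the steps listed above, Gabor filtering is the core vessel-enhancement operation. A minimal sketch of a real-valued Gabor kernel and a naive correlation follows; the parameters and the toy image are illustrative, not those of the paper:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian-windowed cosine grating,
    commonly used to enhance line-like structures such as vessels."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the grating is oriented at angle theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)
    return g - g.mean()    # zero-mean so flat regions respond with zero

def filter_image(img, kernel):
    """Naive 'valid' 2-D correlation (no FFT), adequate for a sketch."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical bright line responds strongly to the theta=0 kernel (grating
# varying along x) and weakly after rotating the kernel by 90 degrees.
img = np.zeros((31, 31))
img[:, 15] = 1.0
r0 = filter_image(img, gabor_kernel(theta=0.0))
r90 = filter_image(img, gabor_kernel(theta=np.pi / 2))
```

In a full pipeline such a kernel is applied at several orientations and the maximum response per pixel is kept.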

  18. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    Science.gov (United States)

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is prohibitively time-consuming given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes.

  19. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    Science.gov (United States)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) CVIPtools Graphical User Interface, b) CVIPtools C library and c) CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  20. Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim

    2016-02-01

    Glaucoma is a disease of the retina and one of the most common causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for glaucoma diagnosis from digital fundus images. In this paper, wavelet feature extraction is followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike existing research works where the features are taken from the complete fundus or a sub-image of the fundus, this work extracts features from the segmented, blood-vessel-removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant than features of the whole fundus or a sub-image in the detection of glaucoma from fundus images. The accuracy of glaucoma identification achieved in this work is 94.7%, and a comparison with existing methods of glaucoma detection from fundus images indicates that the proposed approach has improved classification accuracy.
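The wavelet features mentioned above can be illustrated with a single level of the 2-D Haar transform, the simplest wavelet; sub-band energies then serve as texture features. This is a generic sketch, not the paper's feature set:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform.
    Returns (LL, LH, HL, HH) sub-bands (naming conventions vary);
    image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def band_energies(img):
    """Mean squared coefficient per sub-band: a simple feature vector."""
    return [float(np.mean(b ** 2)) for b in haar2d(img)]

# A flat image has all its energy in LL; a vertical-stripe pattern also
# puts energy into the horizontal-detail band.
flat = np.full((8, 8), 10.0)
stripes = np.tile([10.0, 0.0], (8, 4))
e_flat = band_energies(flat)
e_stripes = band_energies(stripes)
```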

  1. Automatic classification of the acrosome status of boar spermatozoa using digital image processing and LVQ

    NARCIS (Netherlands)

    Alegre, Enrique; Biehl, Michael; Petkov, Nicolai; Sanchez, Lidia

    2008-01-01

    We consider images of boar spermatozoa obtained with an optical phase-contrast microscope. Our goal is to automatically classify single sperm cells as acrosome-intact (class 1) or acrosome-damaged (class 2). Such classification is important for the estimation of the fertilization potential of a sperm sample.

  2. An Automatic Framework Using Space-Time Processing and TR-MUSIC for Subsurface and Through-Wall Multitarget Imaging

    Directory of Open Access Journals (Sweden)

    Si-hao Tan

    2012-01-01

    Full Text Available We present an automatic framework combining space-time signal processing with Time Reversal electromagnetic (EM) inversion for subsurface and through-wall multitarget imaging using electromagnetic waves. This framework is composed of a frequency-wavenumber (FK) filter to suppress the direct wave and medium bounce, an FK migration algorithm to automatically estimate the number of targets and identify target regions, which can be used to reduce the computational complexity of the subsequent imaging algorithm, and an EM inversion algorithm using Time Reversal Multiple Signal Classification (TR-MUSIC) to reconstruct hidden objects. The feasibility of the framework is demonstrated with simulated data generated by GPRMAX.

  3. ACIR: automatic cochlea image registration

    Science.gov (United States)

    Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland

    2017-02-01

    Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To obtain these measurements, a segmentation method for cochlea medical images is needed. An important pre-processing step for good cochlea segmentation is efficient image registration. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a major challenge for the automated registration of the different image modalities. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multi-modal human cochlea images is proposed. This method is based on using small areas that have clear structures in both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent optimizer (ASGD) and Mattes's Mutual Information metric (MMI) to estimate 3D rigid transform parameters. State-of-the-art medical image registration optimizers published over the last two years are studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human intervention. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset which can be downloaded for free from a public XNAT server.

  4. Development of image-processing software for automatic segmentation of brain tumors in MR images

    Directory of Open Access Journals (Sweden)

    C Vijayakumar

    2011-01-01

    Full Text Available Most of the commercially available software packages for brain tumor segmentation have limited functionality and frequently lack the careful validation that is required for clinical studies. We have developed an image-analysis software package called 'Prometheus,' which performs neural-system-based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for the analysis of MR images. Prometheus was validated against manual segmentation by a radiologist, and its mean sensitivity and specificity were found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio were found to be 92.35±3.37% and 0.78±0.046, respectively.

  5. SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns

    Science.gov (United States)

    Koestel, John

    2016-04-01

    3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes to adequately represent the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging, however, is the large amount of time required by a trained specialist to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims to remove this bottleneck by automating most of the image processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin for the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located within a designated folder. In a first step, SoilJ recognizes the outline of the soil column, whereupon the column is rotated to an upright position and placed in the center of the canvas. Excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air, they are used to correct image illumination fluctuations, which often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2010, BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column. In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected

  6. Automatic Image Interpolation Using Homography

    Directory of Open Access Journals (Sweden)

    Chi-Tsung Liu

    2010-01-01

    Full Text Available While taking photographs, we often face the problem that unwanted foreground objects (e.g., vehicles, signs, and pedestrians) occlude the main subject(s). We propose to apply image interpolation (also known as inpainting) techniques to remove unwanted objects from photographs and to automatically patch the vacancy after the unwanted objects are removed. When given only a single image, if too much information is lost once the unwanted objects are removed, the patching results are usually unsatisfactory. The proposed inpainting techniques employ homographic constraints from geometry to incorporate multiple images taken from different viewpoints. Our experimental results showed that the proposed techniques could effectively reduce the search for potential patches across multiple input images and select the best patches for the missing regions.
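The homographic constraint at the heart of the method maps points between views with a 3x3 matrix. A minimal sketch of applying a known homography to points follows (estimating H from correspondences, e.g. by DLT, would be the next step and is omitted here; the translation example is illustrative):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography (projective transform).
    pts: (N, 2) array; returns the (N, 2) transformed points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

# Pure translation expressed as a homography: shift by (5, -3).
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
moved = apply_homography(H, corners)
```

With H estimated between two photographs, patch candidates from one view can be warped into the coordinate frame of the other before comparison.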

  7. A Prototype Expert System for Automatic Generation of Image Processing Programs

    Institute of Scientific and Technical Information of China (English)

    Song Maoqiang; Felix Grimm; et al.

    1991-01-01

    A prototype expert system for generating image processing programs using the subroutine package SPIDER is described in this paper. Based on an interactive dialog, the system can generate a complete application program using SPIDER routines.

  8. Development of an Image Processing System for Automatic Melanoma Diagnosis from Dermoscopic Images: Preliminary Study - Original Article

    Directory of Open Access Journals (Sweden)

    M. Emin Yüksel

    2008-12-01

    Full Text Available Objective: Design and implementation of a medical image processing system that provides decision support to the clinician in the diagnosis of melanoma-type skin cancers by performing analysis of dermoscopic images. Methods: Visual features of pigmented lesions are converted into measurable numerical quantities by employing digital image processing methods, and a classification regarding melanoma diagnosis is performed based on these quantitative data. Results: We obtained numerical measures of the asymmetry, border and color features of the pigmented lesions by using segmentation, image histogram, thresholding, convex hull, color clustering, color quantization and distribution methods. Conclusion: The system under development speeds up the decision process of the clinician. In addition, it allows the diagnosis to be based on more objective data.

  9. Exploring Automatization Processes.

    Science.gov (United States)

    DeKeyser, Robert M.

    1996-01-01

    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  10. Automatic quantitative analysis of microstructure of ductile cast iron using digital image processing

    Directory of Open Access Journals (Sweden)

    Abhijit Malage

    2015-09-01

    Full Text Available Ductile cast iron is also referred to as nodular iron or spheroidal graphite iron. Ductile cast iron contains graphite in the form of discrete nodules in a matrix of ferrite and pearlite. In order to determine the mechanical properties, one needs to determine the volume of the phases in the matrix and the nodularity in the microstructure of the metal sample. The manual methods available for this are time consuming, and their accuracy depends on expertise. This paper proposes a novel method for automatic quantitative analysis of the microstructure of ferritic-pearlitic ductile iron which calculates the volume of the phases and the nodularity of a sample. It gives results within a very short time (approximately 5 s), with 98% accuracy for the volume of the matrix phases and 90% accuracy for nodule detection and analysis, which are within the range specified by the standard for SG 500/7 and were validated by a metallurgist.

  11. Using image processing technology and mathematical algorithm in the automatic selection of vocal cord opening and closing images from the larynx endoscopy video.

    Science.gov (United States)

    Kuo, Chung-Feng Jeffrey; Chu, Yueng-Hsiang; Wang, Po-Chun; Lai, Chun-Yu; Chu, Wen-Lin; Leu, Yi-Shing; Wang, Hsing-Won

    2013-12-01

    The human larynx is an important organ for voice production and respiratory mechanisms. The vocal cords are approximated for voice production and open for breathing. The videolaryngoscope is widely used for vocal cord examination. At present, physicians usually diagnose vocal cord diseases by manually selecting the image in which the vocal cords open to the largest extent (abduction), thus maximally exposing the vocal cord lesion. On the other hand, the severity of diseases such as vocal cord palsy and atrophic vocal cords largely depends on the vocal cords closing to the smallest extent (adduction). Therefore, diseases can be assessed from the image of the vocal cords at maximum opening, while the seriousness of a breathy voice is closely correlated with the gap between the vocal cords at minimum closure. The aim of the study was to design an automatic vocal cord image selection system to improve the conventional manual selection process and enhance diagnostic efficiency. In addition, because the examination process produces unwanted fuzzy images caused by human factors as well as non-vocal-cord images, texture analysis is added in this study to measure image entropy and establish a screening and elimination system that effectively enhances the accuracy of selecting the image of the vocal cords closing to the smallest extent.

  12. Automatically designing an image processing pipeline for a five-band camera prototype using the local, linear, learned (L3) method

    Science.gov (United States)

    Tian, Qiyuan; Blasinski, Henryk; Lansel, Steven; Jiang, Haomiao; Fukunishi, Munenori; Farrell, Joyce E.; Wandell, Brian A.

    2015-02-01

    The development of an image processing pipeline for each new camera design can be time-consuming. To speed camera development, we developed a method named L3 (Local, Linear, Learned) that automatically creates an image processing pipeline for any design. In this paper, we describe how we used the L3 method to design and implement an image processing pipeline for a prototype camera with five color channels. The process includes calibrating and simulating the prototype, learning local linear transforms and accelerating the pipeline using graphics processing units (GPUs).
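The "learned local linear transforms" of L3 can be illustrated in miniature: fit a linear map from raw sensor responses to target rendered values by least squares. The real method fits one such transform per local pixel class; this hedged sketch fits a single global transform on simulated five-band data:

```python
import numpy as np

# Simulated training data: 5-band sensor responses mapped to 3-channel
# targets by an unknown linear transform plus a little sensor noise.
rng = np.random.default_rng(3)
n, channels_in, channels_out = 500, 5, 3
W_true = rng.normal(size=(channels_in, channels_out))
sensors = rng.random((n, channels_in))               # 5-band responses
targets = sensors @ W_true + rng.normal(0, 0.01, (n, channels_out))

# Learn the transform from training pairs with ordinary least squares;
# L3 repeats this fit separately for each local pixel class.
W_learned, *_ = np.linalg.lstsq(sensors, targets, rcond=None)
rendered = sensors @ W_learned
mse = float(np.mean((rendered - targets) ** 2))
```

Because each per-class transform is just a small matrix multiply, the resulting pipeline maps naturally onto GPUs, as the paper notes.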

  13. Prediction of Excess Air Factor in Automatic Feed Coal Burners by Processing of Flame Images

    Science.gov (United States)

    Talu, Muhammed Fatih; Onat, Cem; Daskin, Mahmut

    2017-05-01

    In this study, the relationship between visual information gathered from flame images and the excess air factor λ in coal burners is investigated. In conventional coal burners, λ can be obtained using very expensive air measurement instruments. The proposed method for predicting λ at a specific time consists of three distinct, consecutive stages: a) online flame image acquisition using a CCD camera, b) extraction of meaningful information (flame intensity and brightness) from the flame images, and c) learning this information (image features) with ANNs to estimate λ. Six different feature extraction methods were used: CDF of the blue channel, co-occurrence matrix, L∞-Frobenius norms, radiant energy signal (RES), PCA, and wavelets. A comparison of the prediction results shows that the co-occurrence matrix combined with ANNs has the best performance (RMSE = 0.07) in terms of accuracy. The results show that the proposed image-based prediction system can be preferred over expensive measurement devices for monitoring the excess air factor during combustion.
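
    A minimal co-occurrence matrix and two derived texture features might be computed as follows (a single horizontal pixel offset; production feature extractors typically average several offsets and angles):

```python
import numpy as np

def glcm(gray, levels=8):
    """Normalized co-occurrence matrix for horizontally adjacent pixels."""
    q = np.clip(gray * levels // 256, 0, levels - 1).astype(int)
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

def glcm_features(m):
    i, j = np.indices(m.shape)
    return {
        "contrast": float(np.sum(m * (i - j) ** 2)),  # local intensity variation
        "energy": float(np.sum(m ** 2)),              # textural uniformity
    }

flat = np.full((32, 32), 100)                         # uniform patch
noisy = np.random.default_rng(0).integers(0, 256, (32, 32))
print(glcm_features(glcm(flat)))  # → {'contrast': 0.0, 'energy': 1.0}
```

    Feature vectors of this kind, computed per flame image, would then serve as the ANN inputs for estimating λ.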

  14. Process automatization in system administration

    OpenAIRE

    Petauer, Janja

    2013-01-01

    The aim of this thesis is to present the automation of user management at the company Studio Moderna. The company has grown rapidly in recent years, which is why a faster, easier, and cheaper way of managing user accounts was needed. We automated the processes of creating, changing, and removing user accounts within Active Directory. We prepared a user interface inside an existing application, used JavaScript for drop-down menus, wrote a script in a scripting programming langu...

  15. Automatic annotation of image and video using semantics

    Science.gov (United States)

    Yasaswy, A. R.; Manikanta, K.; Sri Vamshi, P.; Tapaswi, Shashikala

    2010-02-01

    The accumulation of large collections of digital images has created the need for efficient and intelligent schemes for content-based image retrieval. Our goal is to organize the contents semantically, according to meaningful categories. Automatic annotation is the process of automatically assigning descriptions to an image or video that describes the contents of the image or video. In this paper, we examine the problem of automatic captioning of multimedia containing round and square objects. On a given set of images and videos we were able to recognize round and square objects in the images with accuracy up to 80% and videos with accuracy up to 70%.

  16. Automatic image cropping for republishing

    Science.gov (United States)

    Cheatle, Phil

    2010-02-01

    Image cropping is an important aspect of creating aesthetically pleasing web pages and repurposing content for different web or printed output layouts. Cropping provides both the possibility of improving the composition of the image and the ability to change the aspect ratio of the image to suit the layout design needs of different document or web page formats. This paper presents a method for aesthetically cropping images on the basis of their content. Underlying the approach is a novel segmentation-based saliency method which identifies some regions as "distractions", as an alternative to the conventional "foreground" and "background" classifications. Distractions are a particular problem with typical consumer photos found on social networking websites such as Facebook, Flickr, etc. Automatic cropping is achieved by identifying the main subject area of the image and then using an optimization search to expand this to form an aesthetically pleasing crop. Evaluation of aesthetic functions like auto-crop is difficult as there is no single correct solution. A further contribution of this paper is an automated evaluation method which goes some way towards handling the complexity of aesthetic assessment. This allows crop algorithms to be easily evaluated against a large test set.

  17. UMLS-based automatic image indexing.

    Science.gov (United States)

    Sneiderman, Charles Alan; Demner-Fushman, Dina; Fung, Kin Wah; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in the image caption and in its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  18. Automatic cobb angle determination from radiographic images

    NARCIS (Netherlands)

    Sardjono, Tri Arief; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; van Ooijen, Peter M.A.; Purnama, Ketut E.; Verkerke, Gijsbertus J.

    2013-01-01

    Study Design. Automatic measurement of Cobb angle in patients with scoliosis. Objective. To test the accuracy of an automatic Cobb angle determination method from frontal radiographical images. Summary of Background Data. Thirty-six frontal radiographical images of patients with scoliosis. Methods.

  1. Automatic Vessel Segmentation on Retinal Images

    Institute of Scientific and Technical Information of China (English)

    Chun-Yuan Yu; Chia-Jen Chang; Yen-Ju Yao; Shyr-Shen Yu

    2014-01-01

    Several features of retinal vessels can be used to monitor the progression of diseases. Changes in vascular structures, for example vessel caliber, branching angle, and tortuosity, are portents of many diseases such as diabetic retinopathy and arterial hypertension. This paper proposes an automatic retinal vessel segmentation method based on morphological closing and multi-scale line detection. First, an illumination correction is performed on the green-band retinal image. Next, morphological closing and subtraction are applied to obtain a crude retinal vessel image. Then, multi-scale line detection is used to refine the vessel image. Finally, the binary vasculature is extracted by the Otsu algorithm. To mitigate the drawbacks of multi-scale line detection, only line detectors at 4 scales are used. The experimental results show an accuracy of 0.939 on the DRIVE (digital retinal images for vessel extraction) retinal database, which is much better than other methods.
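
    The closing-subtraction-threshold core of the pipeline can be sketched with SciPy; the structuring-element size and the one-column synthetic "vessel" are illustrative:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Threshold maximizing between-class variance (intensities in 0..255)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    w = np.cumsum(hist)                      # class-0 pixel count per cutoff
    mu = np.cumsum(hist * np.arange(256))    # class-0 intensity sum per cutoff
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu * total) ** 2 / (w * (total - w))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# Synthetic green-band patch: bright background, one dark "vessel" column.
green = np.full((32, 32), 200)
green[:, 10] = 50
closed = ndimage.grey_closing(green, size=(7, 7))  # closing fills the vessel
vessel_map = closed - green                        # vessels now stand out
binary = vessel_map > otsu_threshold(vessel_map)
print(binary.sum())  # → 32: exactly the vessel column
```

    The closing removes dark structures thinner than the structuring element, so subtracting the original leaves a bright map of exactly those structures, ready for thresholding.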

  2. A novel semi-automatic image processing approach to determine Plasmodium falciparum parasitemia in Giemsa-stained thin blood smears

    Directory of Open Access Journals (Sweden)

    Kuss Claudia

    2008-03-01

    Background: Malaria parasitemia is commonly used as a measurement of the amount of parasites in the patient's blood and is a crucial indicator of the degree of infection. Manual evaluation of Giemsa-stained thin blood smears under the microscope is onerous, time consuming and subject to human error. Although automatic assessments can overcome some of these problems, the available methods are currently limited by their inability to evaluate cases that deviate from a chosen "standard" model. Results: In this study reliable parasitemia counts were achieved even for sub-standard smear and image quality. The outcome was assessed through comparisons with manual evaluations of more than 200 sample smears and related to the complexity of cell overlaps. On average, an estimation error of less than 1% with respect to the average of manually obtained parasitemia counts was achieved. In particular, the results from the proposed approach are generally within one standard deviation of the counts provided by a comparison group of malariologists, yielding a correlation of 0.97. Variations occur mainly for blurred, out-of-focus imagery exhibiting larger degrees of cell overlaps in clusters of erythrocytes. The assessment was also carried out in terms of precision and recall, combined in the F-measure, with results generally in the range of 92% to 97% for a variety of smears. In this context the observed trade-off relation between precision and recall guaranteed stable results. Finally, relating the F-measure to the degree of cell overlaps showed that up to 50% total cell overlap can be tolerated if the smear image is well focused and the smear itself adequately stained. Conclusion: The automatic analysis has proven to be comparable with manual evaluations in terms of accuracy. Moreover, the test results have shown that the proposed comparison-based approach, by exploiting the interrelation between different images and color channels, has successfully...
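
    The F-measure used above is simply the harmonic mean of precision and recall, which for counted objects reduces to:

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall for detected parasites."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 94 parasites correctly detected, 4 false detections, 6 missed:
print(round(f_measure(94, 4, 6), 3))  # → 0.949
```

    Because the harmonic mean penalizes whichever of the two is lower, a stable F-measure across smears indicates that neither over- nor under-detection dominates.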

  3. Automatic image classification for the urinoculture screening.

    Science.gov (United States)

    Andreini, Paolo; Bonechi, Simone; Bianchini, Monica; Garzelli, Andrea; Mecocci, Alessandro

    2016-03-01

    Urinary tract infections (UTIs) are considered to be the most common bacterial infection; it is estimated that about 150 million UTIs occur worldwide yearly, giving rise to roughly $6 billion in healthcare expenditures and resulting in 100,000 hospitalizations. Nevertheless, it is difficult to carefully assess the incidence of UTIs, since an accurate diagnosis depends both on the presence of symptoms and on a positive urinoculture, whereas in most outpatient settings this diagnosis is made without an ad hoc analysis protocol. On the other hand, in the traditional urinoculture test, a sample of midstream urine is put onto a Petri dish, where a growth medium favors the proliferation of germ colonies. Then, the infection severity is evaluated by visual inspection by a human expert, an error-prone and lengthy process. In this paper, we propose a fully automated system for urinoculture screening that can provide quick and easily traceable results for UTIs. Based on advanced image processing and machine learning tools, the infection type recognition, together with the estimation of the bacterial load, can be automatically carried out, yielding accurate diagnoses. The proposed AID (Automatic Infection Detector) system provides support during the whole analysis process: first, digital color images of Petri dishes are automatically captured, then specific preprocessing and spatial clustering algorithms are applied to isolate the colonies from the culture ground and, finally, an accurate classification of the infections and their severity evaluation are performed. The AID system speeds up the analysis, contributes to the standardization of the process, allows result repeatability, and reduces the costs. Moreover, the continuous transition between sterile and external environments (typical of the standard analysis procedure) is completely avoided.

  4. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
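
    The second step, modelling capture distortions, can be sketched as a chain of optical blur, sensor gain and additive noise; the parameter values are placeholders standing in for ones estimated from real plate measurements:

```python
import numpy as np
from scipy import ndimage

def simulate_capture(plate, blur_sigma=1.5, gain=0.9, noise_std=8.0, seed=0):
    """Degrade an ideal plate rendering the way a roadside camera might:
    optical blur, then sensor gain, then additive sensor noise."""
    rng = np.random.default_rng(seed)
    img = ndimage.gaussian_filter(plate.astype(float), blur_sigma)
    img = gain * img + rng.normal(0.0, noise_std, plate.shape)
    return np.clip(img, 0, 255).astype(np.uint8)

ideal = np.zeros((24, 96), dtype=np.uint8)
ideal[8:16, 40:56] = 255                   # stand-in for a plate character
degraded = simulate_capture(ideal)
print(degraded.shape, degraded.dtype)
```

    Feeding many such degraded synthetic plates to the OCR trainer approximates the variability of real captures without the cost of collecting and ground-truthing them.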

  5. Image processing

    NARCIS (Netherlands)

    Heijden, van der F.; Spreeuwers, L.J.; Blanken, H.M.; Vries de, A.P.; Blok, H.E.; Feng, L

    2007-01-01

    The field of image processing addresses the handling and analysis of images for many purposes using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by industri...

  6. Automatic Waterline Extraction from Smartphone Images

    Science.gov (United States)

    Kröhnert, M.

    2016-06-01

    Considering worldwide increasing and devastating flood events, the issue of flood defence and prediction becomes more and more important. Conventional methods for observing water levels, for instance gauging stations, provide reliable information. However, they are rather expensive to purchase, install and maintain, and hence are mostly limited to monitoring large streams. Thus, small rivers with noticeably increasing flood hazard risks are often neglected. State-of-the-art smartphones with powerful camera systems may act as affordable, mobile measuring instruments. Reliable and effective image processing methods may allow the use of smartphone-taken images for mobile shoreline detection and thus for water level monitoring. The paper focuses on automatic methods for the determination of waterlines by spatio-temporal texture measures. Besides the considerable challenge of dealing with a wide range of smartphone cameras providing different hardware components, resolution, image quality and programming interfaces, there are several limits on mobile device processing power. For test purposes, an urban river in Dresden, Saxony was observed. The results show the potential of deriving the waterline with subpixel accuracy by a column-by-column four-parameter logistic regression and polynomial spline modelling. After a transformation into object space via suitable landmarks (which is not addressed in this paper), this corresponds to an accuracy in the order of a few centimetres when processing mobile device images taken from small rivers at typical distances.
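
    The column-by-column four-parameter logistic fit can be sketched with SciPy's curve_fit; the synthetic intensity profile below stands in for one image column crossing the water boundary:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(y, lower, upper, slope, inflection):
    """Four-parameter logistic: e.g. dark water below, bright bank above."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (y - inflection)))

def waterline_row(column):
    """Sub-pixel waterline position: inflection point of the fitted curve."""
    y = np.arange(column.size, dtype=float)
    p0 = [column.min(), column.max(), 1.0, column.size / 2.0]
    params, _ = curve_fit(logistic4, y, column, p0=p0, maxfev=5000)
    return params[3]

y = np.arange(80, dtype=float)
column = logistic4(y, 40.0, 210.0, 1.5, 47.3)   # transition near row 47.3
print(round(waterline_row(column), 1))  # → 47.3
```

    Because the inflection point is a continuous parameter of the fit, the waterline estimate is not quantized to whole pixel rows, which is where the sub-pixel accuracy comes from.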

  7. Automatic Image-Based Pencil Sketch Rendering

    Institute of Scientific and Technical Information of China (English)

    王进; 鲍虎军; 周伟华; 彭群生; 徐迎庆

    2002-01-01

    This paper presents an automatic image-based approach for converting greyscale images to pencil sketches, in which strokes follow the image features. The algorithm first extracts a dense direction field automatically using Logical/Linear operators which embody the drawing mechanism. Next, a reconstruction approach based on a sampling-and-interpolation scheme is introduced to generate stroke paths from the direction field. Finally, pencil strokes are rendered along the specified paths with consideration of image tone and artificial illumination. As an important application, the technique is applied to render portraits from images with little user interaction. The experimental results demonstrate that the approach can automatically achieve compelling pencil sketches from reference images.

  8. System Supporting Automatic Generation of Finite Element Using Image Information

    Institute of Scientific and Technical Information of China (English)

    Fukuda, J.

    2002-01-01

    A mesh generating system has been developed in order to prepare the large amounts of input data needed for easy implementation of a finite element analysis. This system consists of a Pre-Mesh Generator, an Automatic Mesh Generator and a Mesh Modifier. The Pre-Mesh Generator produces the shape and sub-block information as input data for the Automatic Mesh Generator by carrying out various image processing operations on the image information of the drawing input using a scanner. The Automatic Mesh Generato...

  9. Automatic segmentation of mammogram and tomosynthesis images

    Science.gov (United States)

    Sargent, Dusty; Park, Sun Young

    2016-03-01

    Breast cancer is one of the most common forms of cancer in terms of new cases and deaths, both in the United States and worldwide. However, the survival rate with breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three-dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur for digital tomosynthesis. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above mentioned problems. Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for...

  10. Automatic detection of blurred images in UAV image sets

    Science.gov (United States)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost effective and have become attractive for many applications including change detection in small scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparable image internally makes the method independent of...
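
    The core idea of creating a comparison image internally by re-blurring can be illustrated with a simplified metric (a stand-in, not the authors' exact algorithm): a sharp image loses far more detail when blurred again than an already motion-blurred one does.

```python
import numpy as np
from scipy import ndimage

def reblur_difference(gray, sigma=2.0):
    """Mean absolute difference between an image and a re-blurred copy of
    itself; large values indicate plenty of high-frequency detail (sharp)."""
    g = gray.astype(float)
    return float(np.abs(g - ndimage.gaussian_filter(g, sigma)).mean())

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (128, 128)).astype(float)       # detailed scene
# Simulate blur from linear camera displacement (1-D motion blur):
motion_blurred = ndimage.uniform_filter1d(sharp, size=15, axis=1)
print(reblur_difference(sharp) > reblur_difference(motion_blurred))  # → True
```

    Thresholding such a score over an image set would flag candidates for removal automatically instead of requiring manual inspection.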

  11. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.
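
    The feature-space reconfiguration by singular value decomposition, followed by nearest-neighbour lookup, can be sketched as follows (feature dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.normal(size=(100, 64))  # low-level features of 100 training images

# Project onto the top-k right singular vectors to reduce noise and dimension.
U, s, Vt = np.linalg.svd(features, full_matrices=False)
k = 10
projected = features @ Vt[:k].T        # 100 x 10 reconfigured feature space

query = features[3] @ Vt[:k].T         # a query image's reduced features
nearest = int(np.argmin(np.linalg.norm(projected - query, axis=1)))
print(nearest)  # → 3
```

    The reduced space is both cheaper to store and search (the claimed storage and computational advantage) and less noisy than the raw feature space.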

  12. Multiobjective image recognition algorithm in the fully automatic die bonder

    Institute of Scientific and Technical Information of China (English)

    JIANG Kai; CHEN Hai-xia; YUAN Sen-miao

    2006-01-01

    It is a very important task to automatically determine the number of dies in the image recognition system of a fully automatic die bonder. A multiobjective image recognition algorithm based on a clustering Genetic Algorithm (GA) is proposed in this paper. In the evolutionary process of the GA, a clustering method is provided that utilizes information from the template and the fitness landscape of the current population. The whole population is grouped into different niches by the clustering method. Experimental results demonstrate that the number of target images can be determined automatically by the algorithm, and multiple targets can be recognized at a time. As a result, the time consumed by one image recognition is shortened, the performance of the image recognition system is improved, and the automation of the system is achieved.

  13. Automatic Image Registration Technique of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    M. Wahed

    2013-03-01

    Image registration is a crucial step in most image processing tasks for which the final result is achieved from a combination of various resources. Automatic registration of remote-sensing images is a difficult task, as it must deal with intensity changes and variations of scale, rotation and illumination between the images. This paper proposes an image registration technique for multi-view, multi-temporal and multi-spectral remote sensing images. Firstly, a preprocessing step is performed by applying median filtering to enhance the images. Secondly, the Steerable Pyramid Transform is adopted to produce multi-resolution levels of the reference and sensed images; then, the Scale Invariant Feature Transform (SIFT) is utilized to extract feature points that can deal with the large variations of scale, rotation and illumination between images. Thirdly, the feature points are matched using the Euclidean distance ratio, and false matching pairs are removed using the RANdom SAmple Consensus (RANSAC) algorithm. Finally, the mapping function is obtained by an affine transformation. Quantitative comparisons of our technique with the related techniques show a significant improvement in the presence of large scale and rotation changes, and of intensity changes. The effectiveness of the proposed technique is demonstrated by the experimental results.
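
    The outlier-removal stage can be sketched as a small RANSAC loop over an affine model; the tolerance, iteration count and synthetic matches are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                                     # 3x2 parameter matrix

def ransac_affine(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate an affine transform robustly, ignoring false matches."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        M = fit_affine(src[idx], dst[idx])
        errors = np.linalg.norm(np.hstack([src, ones]) @ M - dst, axis=1)
        inliers = int(np.sum(errors < tol))
        if inliers > best_inliers:
            best, best_inliers = M, inliers
    return best, best_inliers

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (30, 2))                    # matched feature points
M_true = np.array([[0.9, -0.2], [0.2, 0.9], [10.0, 5.0]])
dst = np.hstack([src, np.ones((30, 1))]) @ M_true
dst[:5] += 50.0                                       # five false matches
M, inliers = ransac_affine(src, dst)
print(inliers)  # → 25
```

    In the full pipeline the point pairs would come from SIFT matching filtered by the Euclidean distance ratio, with RANSAC discarding the remaining false pairs before the final affine fit.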

  14. Automatic segmentation of bladder in CT images

    Institute of Scientific and Technical Information of China (English)

    Feng SHI; Jie YANG; Yue-min ZHU

    2009-01-01

    Segmentation of the bladder in computerized tomography (CT) images is an important step in radiation therapy planning of prostate cancer. We present a new segmentation scheme to automatically delineate the bladder contour in CT images with three major steps. First, we use the mean shift algorithm to obtain a clustered image containing the rough contour of the bladder, which is then extracted in the second step by applying a region-growing algorithm with the initial seed point selected from a line-by-line scanning process. The third step is to refine the bladder contour more accurately using the rolling-ball algorithm. These steps are then extended to segment the bladder volume in a slice-by-slice manner. The obtained results were compared to manual segmentation by radiation oncologists. The average values of sensitivity, specificity, positive predictive value, negative predictive value, and Hausdorff distance are 86.5%, 96.3%, 90.5%, 96.5%, and 2.8 pixels, respectively. The results show that the bladder can be accurately segmented.
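
    The second step, region growing from a seed point, can be sketched as a breadth-first flood over 4-connected neighbours; the tolerance is an illustrative parameter:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, adding 4-connected neighbours whose
    intensity lies within tol of the seed value."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 120                     # bright "bladder" region
mask = region_grow(img, seed=(10, 10), tol=10)
print(mask.sum())  # → 100
```

    In the published scheme the seed comes from a line-by-line scan of the mean-shift-clustered image, and the resulting contour is then smoothed by the rolling-ball step.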

  15. Automatic cell counting with ImageJ.

    Science.gov (United States)

    Grishagin, Ivan V

    2015-03-15

    Cell counting is an important routine procedure. However, to date there is no comprehensive, easy to use, and inexpensive solution for routine cell counting, and this procedure usually needs to be performed manually. Here, we report a complete solution for automatic cell counting in which a conventional light microscope is equipped with a web camera to obtain images of a suspension of mammalian cells in a hemocytometer assembly. Based on the ImageJ toolbox, we devised two algorithms to automatically count these cells. This approach is approximately 10 times faster and yields more reliable and consistent results compared with manual counting.
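
    The counting stage can be approximated outside ImageJ with thresholding plus connected-component labelling (thresholds are placeholders; the published workflow uses the ImageJ toolbox itself):

```python
import numpy as np
from scipy import ndimage

def count_cells(gray, threshold=100, min_area=5):
    """Count bright connected blobs, discarding specks below min_area."""
    mask = gray > threshold
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))

frame = np.zeros((40, 40))
for r, c in [(5, 5), (20, 20), (30, 8)]:   # three synthetic cells
    frame[r:r + 4, c:c + 4] = 200
frame[35, 35] = 200                        # single-pixel speck: ignored
print(count_cells(frame))  # → 3
```

    The area filter plays the role of the size gate a hemocytometer operator applies implicitly, rejecting debris while keeping genuine cells.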

  16. Automatic Defect Detection in X-Ray Images Using Image Data Fusion

    Institute of Scientific and Technical Information of China (English)

    TIAN Yuan; DU Dong; CAI Guorui; WANG Li; ZHANG Hua

    2006-01-01

    Automatic defect detection in X-ray images is currently a focus of much research at home and abroad. The technology requires computerized image processing, image analysis, and pattern recognition. This paper describes an image processing method for automatic defect detection using image data fusion which synthesizes several methods including edge extraction, wave profile analyses, segmentation with dynamic threshold, and weld district extraction. Test results show that defects that induce an abrupt change over a predefined extent of the image intensity can be segmented regardless of the number, location, shape, or size. Thus, the method is more robust and practical than the current methods using only one method.

  17. Distance transform for automatic dermatologic images composition

    Science.gov (United States)

    Grana, C.; Pellacani, G.; Seidenari, S.; Cucchiara, R.

    2006-03-01

    In this paper we focus on the problem of automatically registering dermatological images, because even though different products are available, most of them share the problem of a limited field of view on the skin. A possible solution is the composition of multiple takes of the same lesion with digital software, such as that for panorama image creation. In this work, the Harris Corner Detector is used to perform an automatic selection of matching points, and the RANSAC method is employed to cope with outlier couples. Projective mapping is then used to match the two images. Given a set of correspondence points, Singular Value Decomposition was used to compute the transform parameters. At this point the two images need to be blended together. One initial assumption is often implicitly made: that the aim is to merge two rectangular images. But when merging occurs between more than two images iteratively, this assumption fails. To cope with differently shaped images, we employed the Distance Transform and provided a weighted merging of images. Different tests were conducted with dermatological images, both with a standard rectangular frame and with atypical shapes, for example a ring due to the objective and lens selection. The successive composition of different circular images with other blending functions, such as the Hat function, does not correctly get rid of the border, and residuals of the circular mask remain visible. By applying Distance Transform blending, the result produced is insensitive to the outer shape of the image.
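
    The distance-transform weighting can be sketched as follows: each image is weighted by the distance to its own mask border, so seams fade out whatever the mask shape, rectangular or circular:

```python
import numpy as np
from scipy import ndimage

def blend(img_a, mask_a, img_b, mask_b):
    """Feathered composite: weights grow toward the interior of each mask."""
    wa = ndimage.distance_transform_edt(mask_a)
    wb = ndimage.distance_transform_edt(mask_b)
    total = wa + wb
    total[total == 0] = 1.0                 # avoid 0/0 outside both masks
    return (img_a * wa + img_b * wb) / total

h, w = 10, 30
mask_a = np.zeros((h, w), bool); mask_a[:, :20] = True
mask_b = np.zeros((h, w), bool); mask_b[:, 10:] = True
img_a = np.zeros((h, w))                    # take A is uniformly dark
img_b = np.full((h, w), 100.0)              # take B is uniformly bright
out = blend(img_a, mask_a, img_b, mask_b)
print(out[5, 2], out[5, 28])  # pure A on the left, pure B on the right
```

    In the overlap, pixels deep inside one take dominate while pixels near its border contribute little, which is exactly why circular or ring-shaped takes blend without visible mask residue.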

  18. Automaticity in social-cognitive processes.

    Science.gov (United States)

    Bargh, John A; Schwader, Kay L; Hailey, Sarah E; Dyer, Rebecca L; Boothby, Erica J

    2012-12-01

    Over the past several years, the concept of automaticity of higher cognitive processes has permeated nearly all domains of psychological research. In this review, we highlight insights arising from studies in decision-making, moral judgments, close relationships, emotional processes, face perception and social judgment, motivation and goal pursuit, conformity and behavioral contagion, embodied cognition, and the emergence of higher-level automatic processes in early childhood. Taken together, recent work in these domains demonstrates that automaticity does not result exclusively from a process of skill acquisition (in which a process always begins as a conscious and deliberate one, becoming capable of automatic operation only with frequent use) - there are evolved substrates and early childhood learning mechanisms involved as well.

  19. Image Processing

    Science.gov (United States)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Flight Center for use on space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

  20. Detection of Off-normal Images for NIF Automatic Alignment

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Awwal, A S; McClay, W A; Ferguson, S W; Burkhart, S C

    2005-07-11

    One of the major purposes of the National Ignition Facility at Lawrence Livermore National Laboratory is to accurately focus 192 high energy laser beams on a millimeter-scale fusion target at the precise location and time. The automatic alignment system developed for NIF is used to align the beams in order to achieve the required focusing effect. However, if a distorted image is inadvertently created by a faulty camera shutter or some other opto-mechanical malfunction, the resulting image, termed "off-normal", must be detected and rejected before further alignment processing occurs. Thus the off-normal processor acts as a preprocessor to automatic alignment image processing. In this work, we discuss the development of an off-normal pre-processor capable of rapidly detecting and rejecting off-normal images. A wide variety of off-normal images from each alignment loop is used to develop accurate rejection criteria.

  1. AUTOMATIC APPROACH TO VHR SATELLITE IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Kupidura

    2016-06-01

    Full Text Available In this paper, we present a proposition for a fully automatic classification of VHR satellite images. Unlike the most widespread approaches, supervised classification, which requires prior definition of class signatures, and unsupervised classification, which must be followed by an interpretation of its results, the proposed method requires no human intervention except for the setting of the initial parameters. The presented approach is based on both spectral and textural analysis of the image and consists of 3 steps. The first step, the analysis of spectral data, relies on NDVI values. Its purpose is to distinguish between basic classes, such as water, vegetation and non-vegetation, which all differ significantly spectrally and thus can be easily extracted based on spectral analysis. The second step relies on granulometric maps. These are the product of local granulometric analysis of an image and present information on the texture of each pixel's neighbourhood, depending on the texture grain. The purpose of texture analysis is to distinguish between classes that are spectrally similar yet of different texture, e.g. bare soil from a built-up area, or low vegetation from a wooded area. Due to the use of granulometric analysis, based on mathematical morphology opening and closing, the results are resistant to the border effect (qualifying borders of objects in an image as spaces of high texture), which affects other methods of texture analysis such as GLCM statistics or fractal analysis. Therefore, the effectiveness of the analysis is relatively high. Several indices based on values of different granulometric maps have been developed to simplify the extraction of classes of different texture. The third and final step of the process relies on a vegetation index, based on the near infrared and blue bands. Its purpose is to correct partially misclassified pixels. All the indices used in the classification model developed relate to reflectance values, so the
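    The first, spectral step can be illustrated in a few lines (the thresholds below are hypothetical placeholders, not values from the paper):

```python
# NDVI = (NIR - R) / (NIR + R): strongly negative values suggest water,
# high values vegetation, everything in between non-vegetation.

def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def spectral_class(nir, red, water_t=0.0, veg_t=0.3):
    """Coarse spectral class per pixel; thresholds are illustrative."""
    v = ndvi(nir, red)
    if v < water_t:
        return "water"
    return "vegetation" if v > veg_t else "non-vegetation"
```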

  2. Automatic Approach to Vhr Satellite Image Classification

    Science.gov (United States)

    Kupidura, P.; Osińska-Skotak, K.; Pluto-Kossakowska, J.

    2016-06-01

    In this paper, we present a proposition for a fully automatic classification of VHR satellite images. Unlike the most widespread approaches, supervised classification, which requires prior definition of class signatures, and unsupervised classification, which must be followed by an interpretation of its results, the proposed method requires no human intervention except for the setting of the initial parameters. The presented approach is based on both spectral and textural analysis of the image and consists of 3 steps. The first step, the analysis of spectral data, relies on NDVI values. Its purpose is to distinguish between basic classes, such as water, vegetation and non-vegetation, which all differ significantly spectrally and thus can be easily extracted based on spectral analysis. The second step relies on granulometric maps. These are the product of local granulometric analysis of an image and present information on the texture of each pixel's neighbourhood, depending on the texture grain. The purpose of texture analysis is to distinguish between classes that are spectrally similar yet of different texture, e.g. bare soil from a built-up area, or low vegetation from a wooded area. Due to the use of granulometric analysis, based on mathematical morphology opening and closing, the results are resistant to the border effect (qualifying borders of objects in an image as spaces of high texture), which affects other methods of texture analysis such as GLCM statistics or fractal analysis. Therefore, the effectiveness of the analysis is relatively high. Several indices based on values of different granulometric maps have been developed to simplify the extraction of classes of different texture. The third and final step of the process relies on a vegetation index, based on the near infrared and blue bands. Its purpose is to correct partially misclassified pixels. All the indices used in the classification model developed relate to reflectance values, so the preliminary step

  3. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    , and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both processes. Automatic processes govern the recognition of advertising stimuli, the relevance decision which determines further higher-level processing...... are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes, a certain amount of learning can occur with very little conscious effort, and advertising's effect on brand evaluation may be more stable...

  5. A Review of Methods of Instance-based Automatic Image Annotation

    Directory of Open Access Journals (Sweden)

    Morad Derakhshan

    2016-12-01

    Full Text Available Today, the use of automatic image annotation to fill the semantic gap between low-level image features and the understanding of their information in the retrieval process has become popular. Since automatic image annotation is crucial to understanding digital images, several methods have been proposed to automatically annotate an image. Among the most important of these methods is instance-based image annotation. As these methods are widely used, in this paper the most important instance-based image annotation methods are analyzed. First, the main parts of instance-based automatic image annotation are analyzed. Afterwards, the main methods of instance-based automatic image annotation are reviewed and compared on various features. Finally, the most important challenges and open problems in instance-based image annotation are analyzed.

  6. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Full Text Available Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied in several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  7. Automatic composition of MRI and SPECT images

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, Hiromi [Research Inst. of Brain and Blood Vessels, Akita (Japan)

    1999-12-01

    A new method to automatically compose MRI and SPECT images was devised to compensate for the limited morphological information of SPECT. The method is a coordinate transformation that seeks maximal agreement between the images, using the cross correlation of the MRI and SPECT images as the evaluation function for the degree of agreement. For the calculation of the cross correlation, the MRI T1-weighted image and the morphological information of the SPECT image processed by a spatial second-order derivative (Laplacian) were used. The method does not require fixing control points during tomographic imaging, and can also be applied to PET. It is likewise useful for following chronological changes in a patient by composing SPECT images with each other, or PET images with each other. Since the method focuses on the internal structure of the brain, it is also useful for cases such as cerebral infarction in which the brain structure changes little. The method is still under trial, however, and examination of its accuracy remains. (K.H.)
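    The core idea, Laplacian filtering followed by a search for the transformation maximising cross correlation, can be sketched as follows (our simplified illustration restricted to integer translations; the actual method optimises a fuller coordinate transformation):

```python
def laplacian(img):
    """4-neighbour Laplacian; emphasises internal structure edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                         + img[y][x + 1] - 4 * img[y][x])
    return out

def best_shift(a, b, max_shift=3):
    """Integer shift (dy, dx) of b that maximises the cross
    correlation sum(a[y][x] * b[y - dy][x - dx])."""
    h, w = len(a), len(a[0])
    best, arg = float("-inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y - dy, x - dx
                    if 0 <= yy < h and 0 <= xx < w:
                        s += a[y][x] * b[yy][xx]
            if s > best:
                best, arg = s, (dy, dx)
    return arg
```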

  8. Automatic Detection of Vehicles Using Intensity Laser and Anaglyph Image

    Directory of Open Access Journals (Sweden)

    Hideo Araki

    2006-12-01

    Full Text Available In this work a methodology is presented for the automatic detection of moving cars in digital aerial images of urban areas, using intensity, anaglyph and subtraction images. The anaglyph image is used to identify moving cars in the exposure pair, because the cars appear in red due to the lack of homology between the objects. An implicit model was developed to provide a digital pixel value with this specific property, using the ratio between the RGB colours of the car object in the anaglyph image. The intensity image is used to decrease false positives and to restrict processing to roads and streets. The subtraction image is applied to decrease false positives caused by road markings. The goal of this paper is to automatically detect moving cars present in digital aerial images of urban areas. The implemented algorithm applies normalization to the left and right images and then forms the anaglyph using the translation. The results show the applicability of the proposed method and its potential for automatic car detection, and demonstrate the performance of the proposed methodology.

  9. Automatic cloud coverage assessment of Formosat-2 image

    Science.gov (United States)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated in a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method automatically estimates the cloud statistics of a Formosat-2 image. Second, cloud coverage is estimated by manual examination. Clearly, a more accurate Automatic Cloud Coverage Assessment (ACCA) method increases the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method comprising pre-processing and post-processing analysis. In the pre-processing analysis, cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, re-examination of non-cloudy pixels, and a cross-band filter method. A box-counting fractal method serves as a post-processing tool to double-check the pre-processing results, increasing the efficiency of the manual examination.
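    Otsu's method, one ingredient of the pre-processing chain above, can be sketched in isolation (a generic textbook implementation, not NSPO's code): it picks the grey-level threshold that maximises the between-class variance of the histogram.

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level t maximising between-class variance
    when pixels are split into {<= t} and {> t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(levels))
    sum_b = 0.0          # cumulative intensity sum of the background class
    w_b = 0              # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b                            # background mean
        m_f = (total_sum - sum_b) / (total - w_b)    # foreground mean
        var = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```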

  10. Conscious and Automatic Processes in Language Learning.

    Science.gov (United States)

    Carroll, John B.

    1981-01-01

    Proposes theory that the learning processes of first- and second-language learners are fundamentally the same, differing only in kinds of information used by both kinds of learners and the degree of automatization attained. Suggests designing second-language learning processes to simulate those occurring in natural settings. (Author/BK)

  11. Automatic correction system of the laminating machine based on image processing

    Institute of Scientific and Technical Information of China (English)

    赵茹; 陶晓杰; 王鹏飞

    2013-01-01

    The automatic correction system of the laminating machine computes the deviation of the acquired images in the X, Y and R directions and outputs the deviation data. In this paper, the automatic correction system is first designed, and the acquired images are preprocessed by it. The Hough transform is then used for angle detection of the image, and an image registration algorithm based on interpolation and phase correlation is used to compute the displacement deviation. Finally, the system is calibrated. By transmitting the offset data to the control computer, precise positioning of the laminated items is achieved.
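    The Hough-transform angle detection step can be sketched as follows (an illustrative voting scheme, not the system's code; the accumulator resolution and the source of the edge points are our assumptions):

```python
import math

# Edge pixels vote for (theta, rho) line parameters with
# rho = x*cos(theta) + y*sin(theta); the theta of the most-voted
# cell approximates the rotation of the laminated sheet's edge.

def dominant_angle(points, thetas=180):
    """points: (y, x) edge pixels. Returns the dominant line angle
    in degrees, quantised to 180/thetas-degree bins."""
    acc = {}
    for y, x in points:
        for t in range(thetas):
            th = math.pi * t / thetas
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t, _), _ = max(acc.items(), key=lambda kv: kv[1])
    return 180.0 * t / thetas
```

    With rho quantised to whole pixels, adjacent angle bins can tie on short segments, so the result should be read as approximate to within a degree or two.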

  12. Image Control In Automatic Welding Vision System

    Science.gov (United States)

    Richardson, Richard W.

    1988-01-01

    Orientation and brightness varied to suit welding conditions. Commands from vision-system computer drive servomotors on iris and Dove prism, providing proper light level and image orientation. Optical-fiber bundle carries view of weld area as viewed along axis of welding electrode. Image processing described in companion article, "Processing Welding Images for Robot Control" (MFS-26036).

  14. Image Semantic Automatic Annotation by Relevance Feedback

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tong-zhen; SHEN Rui-min

    2007-01-01

    A large semantic gap exists between content-based image retrieval (CBIR) and high-level semantics, so additional semantic information should be attached to the images; this involves three aspects: the semantic representation model, semantic information building, and semantic retrieval techniques. In this paper, we introduce an associated semantic network and an automatic semantic annotation system. In the system, a semantic network model is employed as the semantic representation model; it uses semantic keywords, a linguistic ontology and low-level features in semantic similarity calculation. Through several rounds of users' relevance feedback, the semantic network is enriched automatically. To speed up the growth of the semantic network and obtain balanced annotation, semantic seeds and semantic loners are employed in particular.

  15. Automatic dirt trail analysis in dermoscopy images.

    Science.gov (United States)

    Cheng, Beibei; Joe Stanley, R; Stoecker, William V; Osterwise, Christopher T P; Stricklin, Sherea M; Hinton, Kristen A; Moss, Randy H; Oliviero, Margaret; Rabinovitz, Harold S

    2013-02-01

    Basal cell carcinoma (BCC) is the most common cancer in the US. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved a 0.902 area under a receiver operating characteristic curve using a leave-one-out approach. Results obtained from this study show that automatic detection of dirt trails in dermoscopic images of BCC is feasible. This is important because of the large number of these skin cancers seen every year and the challenge of detecting them earlier with instrumentation. © 2011 John Wiley & Sons A/S.
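    The reported 0.902 figure is an area under the ROC curve; a generic rank-based computation of that statistic looks like this (our illustration, not the study's code):

```python
def auc(scores_pos, scores_neg):
    """Probability that a positive case (here: BCC) receives a higher
    classifier score than a negative case (benign); ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```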

  16. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Taishan Medical University, Taian, Shandong (China); Washington University in St Louis, St Louis, MO (United States); Li, H. Harlod; Zhang, T; Yang, D [Washington University in St Louis, St Louis, MO (United States); Ma, F [Taishan Medical University, Taian, Shandong (China)

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and thus are inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
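    The optimization objective, the entropy of the processed result, can be computed as follows (a generic Shannon-entropy sketch of our own; the abstract does not give the exact formulation):

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of the grey-level histogram; higher
    entropy roughly corresponds to a fuller use of the grey scale."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```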

  17. Research on automatic human chromosome image analysis

    Science.gov (United States)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this thesis, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. Chromosome bands are enhanced by an algorithm based on multiscale B-spline wavelets, extracted by average gray profile, gradient profile and shape profile, and calculated with the WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used in classification. Experimental results demonstrate that the algorithms perform well.

  18. An automatic coastline detector for use with SAR images

    Energy Technology Data Exchange (ETDEWEB)

    Erteza, Ireena A.

    1998-09-01

    SAR imagery for coastline detection has many potential advantages over conventional optical stereoscopic techniques. For example, SAR collection is not restricted to daylight or cloud-free conditions. In addition, coastline detection with SAR images can be automated. In this paper, we present the algorithmic development of an automatic coastline detector for use with SAR imagery. Three main algorithms comprise the automatic coastline detection algorithm. The first considers the image pre-processing steps that must be applied to the original image in order to accentuate the land/water boundary. The second automatically follows along the accentuated land/water boundary and produces a single-pixel-wide coastline. The third identifies islands and marks them. This report describes the development of these three algorithms in detail. Examples of imagery are used throughout the paper to illustrate the various steps of the algorithms. Actual code is included in appendices. The algorithms presented are preliminary versions that can be applied to automatic coastline detection in SAR imagery. Many variations and additions can be made to improve robustness and automation, as required by a particular application.

  19. Automatic seagrass pattern identification on sonar images

    Science.gov (United States)

    Rahnemoonfar, Maryam; Rahman, Abdullah

    2016-05-01

    Natural and human-induced disturbances are resulting in degradation and loss of seagrass. Freshwater flooding, severe meteorological events and invasive species are among the major natural disturbances. Human-induced disturbances are mainly due to boat propeller scars in the shallow seagrass meadows and anchor scars in the deeper areas. Therefore, there is a vital need to map seagrass ecosystems in order to determine worldwide abundance and distribution. Currently there is no established method for mapping the potholes or scars in seagrass. One of the most precise sensors for mapping seagrass disturbance is side scan sonar. Here we propose an automatic method which detects seagrass potholes in sonar images. Side scan sonar images are notorious for speckle noise and uneven illumination across the image. Moreover, the disturbance presents complex patterns where most segmentation techniques will fail. In this paper, by applying mathematical morphology techniques and calculating the local standard deviation of the image, the images were enhanced and the pothole patterns were identified. The proposed method was applied to sonar images taken from Laguna Madre in Texas. Experimental results show the effectiveness of the proposed method.
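    The local standard deviation map can be sketched as follows (the window radius is our choice; the paper's parameters are not given in the abstract). Smooth pothole interiors yield low values, textured seagrass high values:

```python
import math

def local_std(img, radius=1):
    """Per-pixel standard deviation over a (2*radius+1)^2 window,
    clipped at the image border."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return out
```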

  20. BgCut: automatic ship detection from UAV images.

    Science.gov (United States)

    Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on a certain area of real UAV aerial images taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied in the automated processing of industrial images for related research.

  1. Automatic system for detecting pornographic images

    Science.gov (United States)

    Ho, Kevin I. C.; Chen, Tung-Shou; Ho, Jun-Der

    2002-09-01

    Due to the dramatic growth of network and multimedia technology, people can easily obtain a wide variety of information over the Internet. Unfortunately, this also makes the diffusion of illegal and harmful content much easier. It has therefore become an important topic for the Internet society to protect and safeguard Internet users, especially children, from content that may be encountered while surfing the Net. Among such content, pornographic images cause the most serious harm. Therefore, in this study, we propose an automatic system to detect still colour pornographic images. Starting from this result, we plan to develop an automatic system to search for or to filter pornographic images. Almost all pornographic images share one common characteristic: the ratio of the size of the skin region to the non-skin region is high. Based on this characteristic, our system first converts the colour space from RGB to HSV so as to segment all possible skin-colour regions from the scene background. We also apply texture analysis to the selected skin-colour regions to separate skin regions from non-skin regions. Then we group the adjacent pixels located in skin regions. If the ratio is over a given threshold, the given image is judged a possible pornographic image. Based on our experiments, less than 10% of non-pornographic images are classified as pornography, and over 80% of the most harmful pornographic images are classified correctly.
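    The skin-colour screening rule can be sketched as follows (the HSV bounds and the flagging threshold are illustrative assumptions, not the paper's values):

```python
import colorsys

def is_skin(r, g, b):
    """Rough skin-tone test in HSV space; bounds are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h < 0.11 and 0.2 < s < 0.7 and v > 0.35

def skin_ratio(pixels):
    """Fraction of (r, g, b) pixels classified as skin-like."""
    return sum(1 for p in pixels if is_skin(*p)) / len(pixels)

def flag_image(pixels, threshold=0.5):
    """Flag an image as a possible pornographic image when the
    skin fraction exceeds the threshold."""
    return skin_ratio(pixels) > threshold
```

    A real system would, as the abstract notes, also run texture analysis on the candidate regions and group adjacent skin pixels before thresholding; the sketch covers only the colour-space step.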

  2. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.

  3. Automatic cloud classification of whole sky images

    Directory of Open Access Journals (Sweden)

    A. Heinle

    2010-05-01

    Full Text Available The recently increasing development of whole sky imagers enables temporal and spatial high-resolution sky observations. One application already performed in most cases is the estimation of fractional sky cover. A distinction between different cloud types, however, is still in progress. Here, an automatic cloud classification algorithm is presented, based on a set of mainly statistical features describing the color as well as the texture of an image. The k-nearest-neighbour classifier is used due to its high performance in solving complex issues, simplicity of implementation and low computational complexity. Seven different sky conditions are distinguished: high thin clouds (cirrus and cirrostratus), high patched cumuliform clouds (cirrocumulus and altocumulus), stratocumulus clouds, low cumuliform clouds, thick clouds (cumulonimbus and nimbostratus), stratiform clouds and clear sky. Based on leave-one-out cross-validation, the algorithm achieves an accuracy of about 97%. In addition, a test run on random images is presented, still outperforming previous algorithms by yielding a success rate of about 75%, or up to 88% if only "serious" errors with respect to radiation impact are considered. Reasons for the decrease in accuracy are discussed, and ideas to further improve the classification results, especially in problematic cases, are investigated.
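    The k-nearest-neighbour decision rule at the heart of the classifier can be sketched as follows (a generic implementation; the toy feature vectors stand in for the paper's colour and texture statistics):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs. Returns the
    majority label among the k training points nearest to query
    (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```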

  4. Towards automatic planning for manufacturing generative processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON,TERRI L.

    2000-05-24

    Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may be the result from the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or are omitted from the original design. As a result process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.

  5. Embryonic Heart Morphogenesis from Confocal Microscopy Imaging and Automatic Segmentation

    Directory of Open Access Journals (Sweden)

    Hongda Mao

    2013-01-01

    Full Text Available Embryonic heart morphogenesis (EHM) is a complex and dynamic process in which the heart transforms from a single tube into a four-chambered pump. This process is of great biological and clinical interest but is still poorly understood, for two main reasons. On the one hand, the existing imaging modalities for investigating EHM suffer from either limited penetration depth or limited spatial resolution. On the other hand, current works typically adopt manual segmentation, which is tedious, subjective, and time consuming considering the complexity of the developing heart geometry and the large size of the images. In this paper, we propose to utilize confocal microscopy imaging with a tissue optical immersion clearing technique to image the heart at different stages of development for EHM study. The imaging method is able to produce high-spatial-resolution images and achieve large penetration depth at the same time. Furthermore, we propose a novel convex active contour model for automatic image segmentation. The model has the ability to deal with the intensity fall-off in depth which characterizes confocal microscopy images. We acquired images of embryonic quail hearts from day 6 to day 14 of incubation for EHM study. The experimental results were promising, provided us with an insight into the early heart growth pattern, and also paved the road for data-driven heart growth modeling.

  6. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; De Vries, A.P.; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual annotation process.

  7. Automatic measurements of choroidal thickness in EDI-OCT images.

    Science.gov (United States)

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin

    2012-01-01

    Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, measurement of choroidal thickness depends on manual labeling, which is tedious and subject to inter-observer differences. In this paper, we propose a fast and accurate algorithm that measures the choroidal thickness automatically. The lower boundary of the choroid is detected by searching for the largest gradient value above the retinal pigment epithelium (RPE), and the upper boundary is formed by finding the shortest path through the graph formed by valley pixels using dynamic programming. The average Dice's Coefficient on 10 EDI-OCT images is 94.3%, which shows good consistency of the algorithm with the manual labeling. The processing time for each image is about 2 seconds.
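
    The boundary search described here — a minimum-cost path traced left to right across the image by dynamic programming — can be sketched as follows. This is an illustration, not the authors' code; in their setting the cost grid would be built from the valley (low-intensity) pixels, whereas here it is an arbitrary array.

```python
import numpy as np

def min_cost_path(cost):
    """Dynamic-programming shortest path across a 2-D cost image, moving
    one column per step with a row change of -1, 0 or +1 (the usual
    layer-boundary formulation in OCT segmentation)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost
    back = np.zeros((rows, cols), dtype=int) # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 1, rows - 1)
            prev = acc[lo:hi + 1, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Trace the optimal row per column back from the cheapest endpoint.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]
```

    On a toy cost image with one cheap row, the returned path follows that row; the per-image cost of this pass is O(rows x cols), consistent with the ~2 s runtime the abstract reports for full-size scans.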

  8. BgCut: Automatic Ship Detection from UAV Images

    Directory of Open Access Journals (Sweden)

    Chao Xu

    2014-01-01

    foreground objects from the sea automatically. First, a sea template library including images captured in different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The resulting trimap initializes the Grabcut background instead of manual intervention, and the segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area, captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model can be applied to the automated processing of industrial images in related research.

  9. A HYBRID METHOD FOR AUTOMATIC COUNTING OF MICROORGANISMS IN MICROSCOPIC IMAGES

    OpenAIRE

    2016-01-01

    Microscopic image analysis is an essential process for the automatic enumeration and quantitative analysis of microbial images. Several systems are available for enumerating microbial growth, but some existing methods cannot accurately count overlapped microorganisms. Therefore, in this paper we propose an efficient method for automatic segmentation and counting of microorganisms in microscopic images. This method uses a hybrid approach based on...

  10. Automatic Scheme for Fused Medical Image Segmentation with Nonsubsampled Contourlet Transform

    Directory of Open Access Journals (Sweden)

    Ch.Hima Bindu

    2012-10-01

    Full Text Available Medical image segmentation has become an essential technique in clinical and research-oriented applications. Because manual segmentation methods are tedious and semi-automatic segmentation lacks flexibility, fully-automatic methods have become the preferred type of medical image segmentation. This work proposes a robust fully automatic segmentation scheme based on a modified contouring technique. The scheme consists of three stages. In the first stage, the Nonsubsampled Contourlet Transform (NSCT) of the image is computed and the coefficients are fused using a fusion method; a local threshold is then computed for the fused image. In the second stage, the initial points are determined by computing a global threshold. Finally, in the third stage, a searching procedure is started from each initial point to obtain closed-loop contours. The whole process is fully automatic, which avoids the disadvantages of semi-automatic schemes such as manually selecting the initial contours and points.

  11. Automatic measurement of the sinus of Valsalva by image analysis.

    Science.gov (United States)

    Mairesse, Fabrice; Blanchard, Cédric; Boucher, Arnaud; Sliwa, Tadeusz; Lalande, Alain; Voisin, Yvon

    2017-09-01

    Despite the importance of the morphology of the sinus of Valsalva in the behavior of heart valves and the proper irrigation of the coronary arteries, the study of these sinuses from medical imaging is still limited to manual radii measurements. This paper presents an automatic method to measure the sinuses of Valsalva on medical images, more specifically on cine MRI and X-ray CT. It introduces an enhanced method to automatically localize and extract each sinus of Valsalva edge and its relevant points. Compared to classical active contours, this new approach enhances the edge extraction of the sinus of Valsalva. Our process allows not only image segmentation but also a complete study of the considered region, including morphological classification, metrological characterization, valve tracking and 2D modeling. The method was successfully used on single- or multi-plane cine MRI and aortic CT angiographies. The localization is robust and the proposed edge extractor is more efficient than state-of-the-art methods (average success rate for MRI examinations = 84% ± 24%, average success rate for CT examinations = 89% ± 11%). Moreover, the deduced measurements are close to manual ones. The software produces accurate measurements of the sinuses of Valsalva. The robustness and reproducibility of the results will help toward a better understanding of sinus of Valsalva pathologies and constitute a first step in the design of complex prostheses adapted to each patient. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Automatic character detection and segmentation in natural scene images

    Institute of Scientific and Technical Information of China (English)

    ZHU Kai-hua; QI Fei-hu; JIANG Ren-jie; XU Li

    2007-01-01

    We present a robust connected-component (CC) based method for automatic detection and segmentation of text in real-scene images, with applications in robot vision, sign recognition, meeting processing and video indexing. First, a Non-Linear Niblack method (NLNiblack) is proposed to decompose the image into candidate CCs. Then, all these CCs are fed into a cascade of classifiers trained by the Adaboost algorithm. Each classifier in the cascade responds to one feature of the CC. Twelve novel features are proposed, which are insensitive to noise, scale, text orientation and text language. The classifier cascade allows non-text CCs to be rapidly discarded, while more computation is spent on promising text-like CCs. The CCs passing through the cascade are considered text components and are used to form the segmentation result. A prototype system was built, and experimental results prove the effectiveness and efficiency of the proposed method.
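
    The decomposition step builds on Niblack's classic local thresholding, T = mean + k * std over a sliding window (the paper's NLNiblack is a non-linear variant whose details are not given in the abstract). A minimal sketch of the standard formulation, here with a positive k suited to bright objects on a dark background:

```python
import numpy as np

def niblack_threshold(img, window=15, k=0.2):
    """Classic Niblack binarization: a pixel is foreground when it
    exceeds the local mean plus k times the local standard deviation.
    Niblack's original k is about -0.2 for dark text on light paper;
    k > 0 selects bright-on-dark structures instead."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=bool)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = padded[r:r + window, c:c + window]
            out[r, c] = img[r, c] > w.mean() + k * w.std()
    return out
```

    Because the threshold adapts to each window, an isolated bright stroke survives even when the global contrast is low — the property that makes Niblack-style methods a common first stage for text candidate extraction.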

  13. Automatic evaluation of nickel alloy secondary phases from SEM images.

    Science.gov (United States)

    de Albuquerque, Victor Hugo C; Silva, Cleiton Carvalho; Menezes, Thiago Ivo de S; Farias, Jesualdo Pereira; Tavares, João Manuel R S

    2011-01-01

    Quantitative metallography is a technique to determine and correlate the microstructures of materials with their properties and behavior. Generic commercial image processing and analysis software packages have been used to quantify material phases from metallographic images. However, these all-purpose solutions have drawbacks, particularly when applied to the segmentation of material phases. To overcome such limitations, this work presents a new solution to automatically segment and quantify material phases from SEM metallographic images. The solution is based on a neural network and was used here to identify the secondary phase precipitated in the gamma matrix of a nickel-base alloy. The results obtained by the new solution were validated by visual inspection and compared with those obtained by a commonly used commercial software package. The conclusion is that the new solution is precise and reliable, and both more accurate and faster than the commercial software.

  14. Automatic phases recognition in pituitary surgeries by microscope images classification

    OpenAIRE

    Lalys, Florent; Riffaud, Laurent; Morandi, Xavier; Jannin, Pierre

    2010-01-01

    International audience; The segmentation of the surgical workflow can be helpful for providing context-sensitive user interfaces or generating automatic reports. Our approach focuses on the automatic recognition of surgical phases by microscope image classification. Our workflow, including image feature extraction, image database labelling, Principal Component Analysis (PCA) transformation and 10-fold cross-validation studies, was performed on a specific type of neurosurgical intervention.

  15. Processing of intentional and automatic number magnitudes in children born prematurely: evidence from fMRI.

    Science.gov (United States)

    Klein, Elise; Moeller, Korbinian; Kiechl-Kohlendorfer, Ursula; Kremser, Christian; Starke, Marc; Cohen Kadosh, Roi; Pupp-Peglow, Ulrike; Schocke, Michael; Kaufmann, Liane

    2014-01-01

    This study examined the neural correlates of intentional and automatic number processing (indexed by a number comparison and a physical Stroop task, respectively) in 6- and 7-year-old children born prematurely. Behavioral results revealed significant numerical distance and size congruity effects. Imaging results disclosed (1) largely overlapping fronto-parietal activation for intentional and automatic number processing, (2) a frontal-to-parietal shift of activation upon considering the risk factors gestational age and birth weight, and (3) a task-specific link between math proficiency and functional magnetic resonance imaging (fMRI) signal within distinct regions of the parietal lobes, indicating commonalities but also specificities of intentional and automatic number processing.

  16. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

    Yap, Regina; Van Der Leu, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  17. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, an algorithm based on the super-pixel density of cluster centers is proposed for automatic image classification and outlier identification. The pixel location coordinates and gray values are used to compute a density and a distance for each point, from which classification and outlier extraction proceed automatically. Because a large number of pixels dramatically increases the computational complexity, the image is first preprocessed into a small number of super-pixel sub-blocks, and the density and distance calculations are carried out on these blocks. A normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, runs faster than the density clustering algorithm, and performs automated classification and outlier extraction effectively.
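
    The density/distance pairing the abstract relies on is the same idea as density-peak clustering: each point gets a local density rho and a distance delta to the nearest denser point; cluster centers have both values large, while outliers have large delta but small rho. A compact sketch under those assumptions (not the authors' super-pixel code, and with ties in density broken by index order):

```python
import numpy as np

def density_and_distance(points, d_c):
    """For each point: rho = number of neighbours within cutoff d_c, and
    delta = distance to the nearest point of higher density (for the
    densest point, the maximum distance). Centres maximize rho*delta;
    outliers combine small rho with large delta."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = (d < d_c).sum(axis=1) - 1            # exclude the point itself
    order = np.argsort(-rho, kind='stable')    # density-descending
    delta = np.empty(len(points))
    delta[order[0]] = d[order[0]].max()
    for rank in range(1, len(points)):
        i = order[rank]
        delta[i] = d[i, order[:rank]].min()
    return rho, delta
```

    Ranking points by the product rho * delta then yields one center per cluster, and a point with near-zero rho but large delta is flagged as an outlier — the two decisions the abstract's normalized discrimination rule automates.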

  18. Automatic Age Estimation System for Face Images

    OpenAIRE

    Chin-Teng Lin; Dong-Lin Li; Jian-Hao Lai; Ming-Feng Han; Jyh-Yeong Chang

    2012-01-01

    Humans are the most important tracking objects in surveillance systems. However, human tracking is not enough to provide the required information for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in real-time. This means that the proposed system has more potential in applications compared to other semi-automatic systems.

  19. Building an Image-Based System to automatically Score psoriasis

    DEFF Research Database (Denmark)

    Gómez, D. Delgado; Carstensen, Jens Michael; Ersbøll, Bjarne Kjær

    2003-01-01

    the images. The system is tested on patients with the dermatological disease psoriasis. Temporal series of images are taken for each patient and the lesions are automatically extracted. Results indicate that the images obtained are a good source of derived variables for tracking the lesions.

  20. Automatic finger joint synovitis localization in ultrasound images

    Science.gov (United States)

    Nurzynska, Karolina; Smolka, Bogdan

    2016-04-01

    Long-lasting inflammation of the joints results, among other things, in many arthritic diseases. When not cured, it may affect other organs and the patient's general health. Therefore, early detection and proper medical treatment are of great value. The patient's organs are scanned with high-frequency acoustic waves, which enable visualization of interior body structures through an ultrasound sonography (USG) image. Although the procedure is standardized, different projections result in a variety of possible data, which must be analyzed in a short period of time by a physician using medical atlases as guidance. This work introduces an efficient framework, based on a statistical approach to the finger joint USG image, that enables automatic localization of skin and bone regions, which are then used to localize the finger joint synovitis area. The processing pipeline runs in real-time and achieves high accuracy when compared to annotations prepared by an expert.

  1. Magnetic resonance cholangiopancreatography image enhancement for automatic disease detection.

    Science.gov (United States)

    Logeswaran, Rajasvaran

    2010-07-28

    To sufficiently improve magnetic resonance cholangiopancreatography (MRCP) quality to enable reliable computer-aided diagnosis (CAD). A set of image enhancement strategies that included filters (i.e. Gaussian, median, Wiener and Perona-Malik), wavelets (i.e. contourlet, ridgelet and a non-orthogonal noise compensation implementation), graph-cut approaches using lazy-snapping and Phase Unwrapping MAxflow, and binary thresholding using a fixed threshold and dynamic thresholding via histogram analysis were implemented to overcome the adverse characteristics of MRCP images such as acquisition noise, artifacts, partial volume effect and large inter- and intra-patient image intensity variations, all of which pose problems in application development. Subjective evaluation of several popular pre-processing techniques was undertaken to improve the quality of the 2D MRCP images and enhance the detection of the significant biliary structures within them, with the purpose of biliary disease detection. The results varied as expected since each algorithm capitalized on different characteristics of the images. For denoising, the Perona-Malik and contourlet approaches were found to be the most suitable. In terms of extraction of the significant biliary structures and removal of background, the thresholding approaches performed well. The interactive scheme performed the best, especially by using the strengths of the graph-cut algorithm enhanced by user-friendly lazy-snapping for foreground and background marker selection. Tests show promising results for some techniques, but not others, as viable image enhancement modules for automatic CAD systems for biliary and liver diseases.
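
    Of the denoising filters compared above, Perona-Malik anisotropic diffusion is compact enough to sketch: intensity diffuses between neighbouring pixels, but the conductance g collapses to zero across strong gradients, so edges are preserved while homogeneous regions are smoothed. An illustrative implementation (the explicit four-neighbour scheme, with the exponential conductance and hypothetical default parameters):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion. kappa sets the gradient scale
    above which diffusion is suppressed; lam (<= 0.25 for stability) is
    the time step of the explicit update."""
    u = img.astype(float).copy()
    g = lambda x: np.exp(-(x / kappa) ** 2)   # edge-stopping conductance
    for _ in range(n_iter):
        # Differences to the four neighbours, zero flux at the borders.
        dn = np.roll(u, 1, 0) - u;  dn[0] = 0
        ds = np.roll(u, -1, 0) - u; ds[-1] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

    With kappa well below the edge contrast the step edge is left essentially untouched, while low-amplitude noise (gradients well under kappa) diffuses away — the behaviour that made this filter a good fit for MRCP noise.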

  2. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or, more often, on UAVs) are used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos) are usually captured in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. In such a situation the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system, or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps at predetermined control parameters. The paper presents practical results of fully automatic orthomosaic generation, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  3. Flocculating agent adding automatic decision system based on image processing in waterworks

    Institute of Scientific and Technical Information of China (English)

    刘倩; 王良元; 程恩; 袁飞

    2013-01-01

    The status of automatic flocculating agent dosing systems in waterworks at home and abroad is summarized. To address their shortcomings, a flocculating agent dosing automatic decision system based on image processing is designed. The hardware part of the system uses a PC as the main controller, connected to a video surveillance system and the dosing control circuit, while the software part is written in C++, uses OpenCV as the image processing and machine vision library, and provides an MFC-based user interface. The system can automatically capture water quality images, process them to obtain key water quality parameters, and automatically control the flocculating agent dosage according to water quality changes.

  4. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.

  5. Automatic Age Estimation System for Face Images

    Directory of Open Access Journals (Sweden)

    Chin-Teng Lin

    2012-11-01

    Full Text Available Humans are the most important tracking objects in surveillance systems. However, human tracking is not enough to provide the required information for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in real‐time. This means that the proposed system has more potential in applications compared to other semi‐automatic systems. The results obtained from this novel approach could provide clearer insight for operators in the field of age estimation to develop real‐world applications.

  6. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.

    Science.gov (United States)

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin

    2013-03-01

    Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, the quantification of the choroid depends on the manual labeling of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subject to inter-observer differences; hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that segments the choroid automatically. Bruch's membrane is detected by searching for the pixel with the biggest gradient value above the retinal pigment epithelium (RPE), and the choroidal-scleral interface is delineated by finding the shortest path of the graph formed by valley pixels using Dijkstra's algorithm. Experiments comparing the automatic segmentation results with manual labelings were conducted on 45 EDI-OCT images; the average Dice's Coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds.
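
    The Dijkstra step can be illustrated on a small cost grid. This is a generic 4-connected grid version, not the authors' valley-pixel graph: the cost of entering a pixel is its grid value, so the cheapest route naturally threads through the low-cost "valley" pixels.

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Dijkstra's algorithm on a 2-D cost grid (4-connected), returning
    the minimum-cost pixel path from start to goal and its total cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:                  # walk the predecessor chain
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

    Unlike the column-by-column dynamic programming used in the companion EDI-OCT paper, Dijkstra's algorithm imposes no left-to-right ordering, so it can follow a boundary that doubles back vertically.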

  7. An Automatic Image Inpainting Algorithm Based on FCM

    Directory of Open Access Journals (Sweden)

    Jiansheng Liu

    2014-01-01

    Full Text Available Many existing image inpainting algorithms require the repaired area to be manually determined by the user. To address this drawback of traditional image inpainting algorithms, this paper proposes an automatic image inpainting algorithm which identifies the repaired area using the fuzzy C-means (FCM) algorithm. FCM classifies the image pixels into a number of categories according to a similarity principle, clustering similar pixels into the same category as far as possible. Given the gray value of the pixels to be inpainted, we find the category whose center is nearest to that value, take it as the area to be inpainted, and then restore that area with the TV model to achieve automatic image inpainting.
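
    The FCM step on scalar gray values can be sketched directly from its two alternating updates: memberships from distances to the centroids, then centroids from membership-weighted means. This is an illustrative 1-D version with deterministic centroid initialization (the paper does not specify its initialization):

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, n_iter=50):
    """Fuzzy C-means on scalar values. Each value belongs to every
    cluster with a degree u in [0, 1]; m > 1 is the fuzzifier."""
    x = np.asarray(x, dtype=float)
    v = np.linspace(x.min(), x.max(), c)            # initial centroids
    for _ in range(n_iter):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12  # point-centroid distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)         # membership update
        v = (u ** m).T @ x / (u ** m).sum(axis=0)    # centroid update
    return v, u
```

    In the inpainting pipeline, the cluster whose centroid lies nearest the given gray value of the damaged pixels would then be selected as the region to repair.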

  8. Automatic Image Segmentation Using Active Contours with Univariate Marginal Distribution

    Directory of Open Access Journals (Sweden)

    I. Cruz-Aceves

    2013-01-01

    Full Text Available This paper presents a novel automatic image segmentation method based on the theory of active contour models and estimation of distribution algorithms. The proposed method uses the univariate marginal distribution model to infer statistical dependencies between the control points on different active contours. These contours have been generated through an alignment process of reference shape priors, in order to increase the exploration and exploitation capabilities regarding different interactive segmentation techniques. This proposed method is applied in the segmentation of the hollow core in microscopic images of photonic crystal fibers and it is also used to segment the human heart and ventricular areas from datasets of computed tomography and magnetic resonance images, respectively. Moreover, to evaluate the performance of the medical image segmentations compared to regions outlined by experts, a set of similarity measures has been adopted. The experimental results suggest that the proposed image segmentation method outperforms the traditional active contour model and the interactive Tseng method in terms of segmentation accuracy and stability.
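
    The univariate marginal distribution model used here is simplest to see on bit strings: each generation, the per-bit marginal probabilities are re-estimated from the selected elite and the next population is sampled from them, with no dependencies between variables. A toy sketch on a generic bit-string fitness (the paper applies the same idea to control-point configurations, not bits; all parameter values below are illustrative):

```python
import random

def umda_binary(fitness, n_bits, pop=60, elite=30, gens=40, seed=1):
    """Univariate Marginal Distribution Algorithm: sample, select the
    elite, re-estimate each bit's independent marginal, repeat."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                  # initial marginal probabilities
    best = None
    for _ in range(gens):
        population = [[1 if rng.random() < p[i] else 0
                       for i in range(n_bits)] for _ in range(pop)]
        population.sort(key=fitness, reverse=True)
        if best is None or fitness(population[0]) > fitness(best):
            best = population[0]
        selected = population[:elite]
        # Marginal of each bit over the elite, clamped so that no bit
        # value becomes permanently unreachable.
        p = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / elite))
             for i in range(n_bits)]
    return best

# Example: maximizing the number of ones (OneMax) with fitness=sum.
solution = umda_binary(sum, 20)
```

    Because only marginals are stored, the model is cheap to estimate and sample, which is what makes it attractive for coupling with active-contour control points as in the abstract.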

  9. A HYBRID METHOD FOR AUTOMATIC COUNTING OF MICROORGANISMS IN MICROSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    P.Kalavathi

    2016-03-01

    Full Text Available Microscopic image analysis is an essential process for the automatic enumeration and quantitative analysis of microbial images. Several systems are available for enumerating microbial growth, but some existing methods cannot accurately count overlapped microorganisms. Therefore, in this paper we propose an efficient method for automatic segmentation and counting of microorganisms in microscopic images. This method uses a hybrid approach based on morphological operations, an active contour model and counting by a region labelling process. The colony count obtained by the proposed method is compared with the manual count and with the count obtained from an existing method.
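
    The final counting-by-region-labelling stage reduces to connected-component labelling of the segmented binary mask. A minimal sketch (BFS flood fill, 4-connectivity; the earlier morphology and contour stages are assumed to have produced the mask):

```python
from collections import deque

def count_regions(mask):
    """Count 4-connected foreground regions in a binary mask by
    breadth-first region labelling."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new region discovered
                seen[r][c] = True
                q = deque([(r, c)])
                while q:                        # flood-fill the region
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```

    On its own, this counts touching colonies as one region, which is exactly the overlapped-organism failure mode the abstract highlights; the hybrid method's active-contour stage exists to split such clumps before counting.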

  10. Natural language processing techniques for automatic test ...

    African Journals Online (AJOL)

    Journal of Computer Science and Its Application ... The questions were generated by first extracting the text from the materials supplied by the ... Keywords: Discourse Connectives, Machine Learning, Automatic Test Generation E-Learning.

  11. Automatic Seamless Stitching Method for CCD Images of Chang'E-I Lunar Mission

    Institute of Scientific and Technical Information of China (English)

    Mengjie Ye; Jian Li; Yanyan Liang; Zhanchuan Cai; Zesheng Tang

    2011-01-01

    A novel automatic seamless stitching method is presented. Compared to the traditional method, it can speed up processing and minimize the human resources needed to produce a global lunar map. Meanwhile, a new global image map of the Moon with a spatial resolution of ~120 m has been completed by the proposed method from Chang'E-1 CCD image data.

  12. Concrete dam construction using computerized aggregate plant (CAP). ; Automatic production control using image processing. Jidoka kotsuzai plant (CAP) ni yoru concrete dam seko. ; Gazo shori wo chushinnishita seisanryo no jido seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Aso, K.; Wakiyama, I.; Kita, Y. (Hazama Gumi, Ltd., Tokyo (Japan))

    1992-10-25

    For an aggregate plant that crushes and sorts rocks using crushers and screens, a computerized aggregate plant (CAP) was developed utilizing the latest microcomputer and communications technology. While local automation has been carried out in other plants using relays and sequencers, this CAP development targets further economic optimization and manpower saving, with the main aims placed on machine control using operation control and feedback control based on the quantity control method. The system consists of the crushing control system, which adjusts the vibration feeders automatically by detecting empty/full levels in the hoppers and load current in the crushers; the image processing system, which analyzes still images photographed by a CCD camera to measure the amount of aggregate transported and the grain shapes; the automatic damper system, which adjusts the amounts of material unloaded from and charged into the crushers using a computer; and the system linking the batcher plants with the aggregate plant. The system was given verification tests at several dam sites. 7 figs., 5 tabs.

  13. Image analysis techniques associated with automatic data base generation.

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.
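
    The "sequential similarity detection" implementation of matched filtering mentioned here accumulates the error at each candidate offset and abandons that offset as soon as the running error exceeds the best score so far, avoiding the full correlation at hopeless positions. A small sum-of-absolute-differences sketch of the idea (illustrative; the original SSDA literature uses various error measures and threshold schedules):

```python
def ssda_match(image, tmpl, threshold=float('inf')):
    """Sequential similarity detection: SAD template matching with
    early abandoning once the accumulated error at an offset can no
    longer beat the current best."""
    ih, iw = len(image), len(image[0])
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = threshold, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            err = 0
            abandoned = False
            for i in range(th):
                for j in range(tw):
                    err += abs(image[r + i][c + j] - tmpl[i][j])
                    if err >= best:        # cannot improve: stop early
                        abandoned = True
                        break
                if abandoned:
                    break
            if err < best:
                best, best_pos = err, (r, c)
    return best_pos, best
```

    The early-abandon test is what makes the sequential variant cheaper than evaluating the matched filter in full at every offset, which is the speed argument the paper makes for registration.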

  14. Agile multi-scale decompositions for automatic image registration

    Science.gov (United States)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-05-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the mixed MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.

  15. Automatic airline baggage counting using 3D image segmentation

    Science.gov (United States)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper using image segmentation of a height map, which is projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of the baggage, so it can be detected by an edge detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed with different placement modes proves the validity of the method.
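
    The height-drop detection step can be sketched as a simple neighbour-difference threshold on the projected height map (illustrative only; the paper's specific edge operator and drop threshold are not given in the abstract):

```python
import numpy as np

def detect_height_edges(height, drop=5.0):
    """Mark pixels whose height exceeds any 4-neighbour's by more than
    `drop` -- the 'height drop at the baggage edge' criterion. Only the
    high side of each drop is marked, tracing the bag's outline."""
    h = height.astype(float)
    edge = np.zeros(h.shape, dtype=bool)
    edge[:-1, :] |= (h[:-1, :] - h[1:, :]) > drop   # higher than pixel below
    edge[1:, :]  |= (h[1:, :] - h[:-1, :]) > drop   # higher than pixel above
    edge[:, :-1] |= (h[:, :-1] - h[:, 1:]) > drop   # higher than right pixel
    edge[:, 1:]  |= (h[:, 1:] - h[:, :-1]) > drop   # higher than left pixel
    return edge
```

    The resulting edge map traces each bag's outline; after morphological linking closes the chains, counting the enclosed connected regions yields the bag count.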

  16. Automatic Segmentation of Dermoscopic Images by Iterative Classification

    Directory of Open Access Journals (Sweden)

    Maciel Zortea

    2011-01-01

    Full Text Available Accurate detection of the borders of skin lesions is a vital first step for computer-aided diagnostic systems. This paper presents a novel automatic approach to the segmentation of skin lesions that is particularly suitable for the analysis of dermoscopic images. Assumptions about the image acquisition, in particular the approximate location and color, are used to derive an automatic rule to select small seed regions likely to correspond to samples of the skin and of the lesion of interest. The seed regions are used as initial training samples, and the lesion segmentation problem is treated as a binary classification problem. An iterative hybrid classification strategy, based on a weighted combination of the estimated posteriors of a linear and a quadratic classifier, is used to update both the automatically selected training samples and the segmentation, increasing reliability and final accuracy, especially for challenging images where the contrast between the background skin and the lesion is low.

  17. Using full-reference image quality metrics for automatic image sharpening

    Science.gov (United States)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on edges. The greatest challenge in this area is to determine the level of perceived sharpness that is optimal for human observers. This task is complex because the enhancement helps only up to a certain threshold; beyond it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to localize this threshold, even though doing so is a very important step towards automatic image sharpening. In this work, the use of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from a subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in a subjective experiment.
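
    The sharpening being scored here is typically an unsharp mask. A minimal 1-D sketch (illustrative only, not the paper's code) shows how increasing the amount steepens edge transitions and, pushed too far, overshoots into the halo artifacts mentioned above:

```python
def unsharp_1d(signal, amount=1.0):
    """Unsharp masking on a 1-D luminance profile: out = in + amount*(in - blurred)."""
    n = len(signal)
    blurred = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
               for i in range(n)]
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 10, 10]          # a soft luminance step
sharpened = unsharp_1d(edge)   # overshoots below 0 and above 10 at the step
```

    A full-reference metric such as SSIM would then be evaluated between the reference, processed, and anchor versions to select the sharpening amount.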

  18. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  19. Automatic Image Registration Algorithm Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    LIU Qiong; NI Guo-qiang

    2006-01-01

    An automatic image registration approach based on the wavelet transform is proposed. The method uses a multiscale wavelet transform to extract feature points and a coarse-to-fine strategy in the feature matching phase. The matching algorithm combines a two-way matching method based on cross-correlation, which yields candidate point pairs, with a fine matching based on support strength. Finally, based on an affine transformation model, the parameters are iteratively refined using least-squares estimation. Experimental results verify that the proposed algorithm can register various kinds of images rapidly and effectively.
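
    The two-way matching step can be sketched as a mutual-best-match filter on normalized cross-correlation scores (descriptors here are plain intensity vectors, an assumption for illustration):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length descriptors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def mutual_matches(feats1, feats2):
    """Keep only pairs that are each other's best correlation match."""
    best12 = {i: max(range(len(feats2)), key=lambda j: ncc(feats1[i], feats2[j]))
              for i in range(len(feats1))}
    best21 = {j: max(range(len(feats1)), key=lambda i: ncc(feats1[i], feats2[j]))
              for j in range(len(feats2))}
    return [(i, j) for i, j in best12.items() if best21[j] == i]
```

    Requiring the match to be mutual discards many of the one-sided false candidates before the fine matching stage.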

  20. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maxima of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and of avoiding the calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that SSFR is effective.
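
    As a rough stand-in for the aggregation step (nearest histogram peak instead of sup-star fuzzy reasoning, purely for illustration):

```python
def histogram_peaks(hist):
    """Indices of local maxima of a gray-level histogram."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i - 1] < hist[i] >= hist[i + 1]]

def classify_gray(level, peaks):
    """Assign a gray level to the class of the nearest peak."""
    return min(range(len(peaks)), key=lambda k: abs(level - peaks[k]))
```

    Note how the number of classes falls out of the histogram automatically, mirroring the method's threshold-free property.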

  1. Fast and automatic thermographic material identification for the recycling process

    Science.gov (United States)

    Haferkamp, Heinz; Burmester, Ingo

    1998-03-01

    Within the framework of a future closed-loop recycling process, the automatic and economical sorting of plastics is a decisive element. The identification and sorting systems available at present are not yet suitable for sorting technical plastics, since essential demands, such as high recognition reliability and identification rates across the variety of technical plastics, cannot be guaranteed. The Laser Zentrum Hannover e.V., in cooperation with Hoerotron GmbH and Preussag Noell GmbH, has therefore investigated a rapid thermographic, laser-supported material identification system for automatic material sorting. The automatic identification of different engineering plastics from electronic or automotive waste is possible. The use of fast IR line scanners allows identification rates of up to 10 parts per second. The procedure is based on the following principle: within a few milliseconds, a spot on the sample is heated by a CO2 laser. The sample's specific chemical and physical material properties cause a characteristic temperature distribution on its surface, which is measured by a fast IR line-scan system. This 'thermal impulse response' is then analyzed by a computer system. Investigations have shown that it is possible to distinguish more than 18 sorts of plastics at a frequency of 10 Hz. Crucial for the development of such a system are the rapid processing of imaging data, the minimization of interference caused by oscillating sample geometries, and the wide range of possible additives in the plastics in question. One possible application area is the sorting of plastics from car and electronic waste recycling.

  2. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required, in order to be able to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  3. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in fields such as structural measurement, topographic surveying, and architectural and archeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here the efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points, a more general and useful approach, bundle adjustment, is used. Finally, two real cases, an excavation and a tower, are reconstructed.
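
    At the heart of the reconstruction step is triangulating a 3D point from its projections in two views. A minimal linear (DLT) triangulation sketch with assumed camera matrices (illustrative, not the authors' pipeline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear DLT triangulation of one 3D point from two views."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# two identical cameras, the second shifted one unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.0, 0.0, 5.0, 1.0])                 # ground-truth point
x1 = (P1 @ X)[:2] / (P1 @ X)[2]                    # its two projections
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
```

    Bundle adjustment then refines such linear estimates jointly with the camera parameters.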

  4. Image feature meaning for automatic key-frame extraction

    Science.gov (United States)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being required in several applications, has directed a number of researchers to automatic video analysis techniques. Automatic video analysis is based on the recognition of short sequences of contiguous frames that describe the same scene (shots) and of key frames representing the salient content of the shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction techniques that identify the low-level visual features of the frames that best represent the shot content. To evaluate the performance of the features, key frames automatically extracted using these features are compared to human operator video annotations.

  5. STUDY OF AUTOMATIC IMAGE RECTIFICATION AND REGISTRATION OF SCANNED HISTORICAL AERIAL PHOTOGRAPHS

    OpenAIRE

    Chen , H.R.; Tseng, Y H

    2016-01-01

    Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process...

  6. Automatic localization of landmark sets in head CT images with regression forests for image registration initialization

    Science.gov (United States)

    Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2016-03-01

    Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.

  7. Automatic Information Processing and High Performance Skills

    Science.gov (United States)

    1992-10-01

    Society Twenty-Sixth Annual Meeting (pp. 10-14). Santa Monica, CA: Human Factors Society. Shiffrin, R. M. (1988). Attention. In R. C. Atkinson, R. J. ... Learning, Memory, and Cognition, 14, 562-569. Shiffrin, R. M., and Dumais, S. T. (1981). The development of automatism. In J. R. Anderson (Ed.), Cognitive ... Change and Skill Acquisition in Visual Search ... Consistent Memory and Visual Search

  8. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. Digital image processing techniques are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance... A number of techniques have been suggested for the restoration of degraded images, such as the inverse filter, the Wiener filter, and the constrained least squares filter. The primary objective of scene analysis is to deduce from a single two-dimensional image...

  9. Automatic guiding of the primary image of solar Gregory telescopes

    NARCIS (Netherlands)

    Küveler, G.; Wiehr, E.; Thomas, D.; Harzer, M.; Bianda, M.; Epple, A.; Sütterlin, P.; Weisshaar, E.

    1998-01-01

    The primary image reflected from the field-stop of solar Gregory telescopes is used for automatic guiding. This new system avoids temporally varying influences from the bending of the telescope tube under the main mirror's gravity and from offsets between the telescope and a separate guiding refractor.

  10. Automatic crop row detection from UAV images

    DEFF Research Database (Denmark)

    Midtiby, Henrik; Rasmussen, Jesper

    Images from Unmanned Aerial Vehicles can provide information about the weed distribution in fields. A direct way is to quantify the amount of vegetation present in different areas of the field. The limitation of this approach is that it includes both crops and weeds in the reported numbers. To get...

  12. Statistical Image Processing.

    Science.gov (United States)

    1982-11-16

    Spectral analysis, texture image analysis and classification, image software package, automatic spatial clustering.

  13. A fast, automatic camera image stabilization benchmarking scheme

    Science.gov (United States)

    Yu, Jun; Craver, Scott

    2012-01-01

    While image stabilization (IS) has become a default functionality of most digital cameras, there is a lack of an automatic IS evaluation scheme. Most publicly known camera IS reviews either require human visual assessment or resort to some generic blur metric. The former is slow and inconsistent, and the latter may not scale easily across resolution and exposure variations when comparing different cameras. We propose a histogram-based automatic IS evaluation scheme that employs a white-noise pattern as the shooting target. It is able to produce accurate and consistent IS benchmarks very quickly.
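
    The intuition behind the white-noise target can be shown with a toy statistic: blurring a noise image narrows its intensity distribution, so a simple spread measure drops. This is a simplification of the paper's histogram-based benchmark, for illustration only:

```python
import random

def intensity_variance(img):
    """Spread of pixel intensities; blurring a white-noise target lowers it."""
    flat = [p for row in img for p in row]
    m = sum(flat) / len(flat)
    return sum((p - m) ** 2 for p in flat) / len(flat)

def box_blur(img):
    """3x3 mean filter (replicated borders), standing in for camera-shake blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

random.seed(42)
noise = [[random.randint(0, 255) for _ in range(16)] for _ in range(16)]
```

    A stabilized shot of the noise target preserves more of this spread than an unstabilized, blurred one, which is what the benchmark exploits.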

  14. An image analysis approach for automatically re-orienteering CT images for dental implants.

    Science.gov (United States)

    Cucchiara, Rita; Lamma, Evelina; Sansoni, Tommaso

    2004-06-01

    In the last decade, computerized tomography (CT) has become the most frequently used imaging modality for correct pre-operative implant planning. In this work, we present an image analysis and computer vision approach able to identify, from the reconstructed 3D data set, the optimal cutting plane specific to each implant to be planned, in order to obtain the best view of the implant site and correct measurements. If the patient requires more implants, different cutting planes are automatically identified, and the axial and cross-sectional images can be re-oriented according to each of them. In the paper, we describe the algorithms defined to recognize 3D markers (each one aligned with a missing tooth for which an implant has to be planned) in the reconstructed 3D space, and the results of processing real exams, in terms of effectiveness, precision, and reproducibility of the measurements.

  15. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

    Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions, such as Adobe Bridge, and online tools, such as Flickr, also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily, with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.

  16. Beyond behaviorism: on the automaticity of higher mental processes.

    Science.gov (United States)

    Bargh, J A; Ferguson, M J

    2000-11-01

    The first 100 years of experimental psychology were dominated by 2 major schools of thought: behaviorism and cognitive science. Here the authors consider the common philosophical commitment to determinism by both schools, and how the radical behaviorists' thesis of the determined nature of higher mental processes is being pursued today in social cognition research on automaticity. In harmony with "dual process" models in contemporary cognitive science, which equate determined processes with those that are automatic and which require no intervening conscious choice or guidance, as opposed to "controlled" processes which do, the social cognition research on the automaticity of higher mental processes provides compelling evidence for the determinism of those processes. This research has revealed that social interaction, evaluation and judgment, and the operation of internal goal structures can all proceed without the intervention of conscious acts of will and guidance of the process.

  17. Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs

    Science.gov (United States)

    Chen, H. R.; Tseng, Y. H.

    2016-06-01

    Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process for matching historical aerial images by SIFT (Scale Invariant Feature Transform), handling the great quantity of images by computer vision. SIFT is one of the most popular methods of image feature extraction and matching. This algorithm turns extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If the feature points of a query image match features in the database, the query image probably overlaps the control images. As the database is updated, more and more query images can be matched and aligned automatically. Other research on environmental change across time periods can then be investigated with these geo-referenced spatio-temporal data.
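
    The RANSAC outlier-removal step can be sketched with a toy pure-translation model. The real pipeline estimates a richer transform from SIFT correspondences; data and function names here are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Toy RANSAC: fit a pure translation between matched point sets, reject outliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))                 # minimal sample: one match
        t = dst[i] - src[i]
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)       # refit on the consensus set
    return t, best

src = np.array([[x, y] for x in range(4) for y in range(3)], dtype=float)
dst = src + np.array([5.0, 3.0])                   # true shift
dst[0] += [40.0, -7.0]                             # two gross mismatches
dst[5] += [-25.0, 30.0]
```

    Even with bad matches present, the consensus set recovers the true shift, which is why RANSAC is the standard companion to SIFT matching.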

  18. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  19. Design and FPGA implementation of real-time automatic image enhancement algorithm

    Science.gov (United States)

    Dong, GuoWei; Hou, ZuoXun; Tang, Qi; Pan, Zheng; Li, Xin

    2016-11-01

    To improve image processing quality and boost the processing rate, this paper proposes a real-time automatic image enhancement algorithm. It is based on the histogram equalization algorithm and the piecewise linear enhancement algorithm; it derives the relationship between the histogram and the piecewise linear function by analyzing the histogram distribution, for adaptive image enhancement. Furthermore, the corresponding FPGA processing modules are designed to implement the method. In particular, high-performance parallel pipelined technology and the inner parallel processing ability of the modules are emphasized to ensure the real-time processing ability of the complete system. Simulations and experiments show that the FPGA hardware implementation of the algorithm has low hardware cost, high real-time performance, and good processing performance in different scenes. The algorithm can effectively improve image quality and has wide prospects in the image processing field.
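
    The histogram-equalization component can be sketched in software as the standard CDF remapping (the paper implements this in FPGA hardware; the sketch below is only the textbook algorithm):

```python
def equalize(img, levels=256):
    """Histogram equalization: map gray levels through the scaled CDF."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:                      # flat image: nothing to stretch
        return [row[:] for row in img]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

    In hardware, the same LUT is built in one pass over the histogram and applied in a pipelined second pass, which is what makes the algorithm real-time friendly.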

  20. Building an Image-Based System to automatically Score psoriasis

    DEFF Research Database (Denmark)

    Gómez, D. Delgado; Carstensen, Jens Michael; Ersbøll, Bjarne Kjær

    2003-01-01

    Nowadays the medical tracking of dermatological diseases is imprecise. The main reason is the lack of suitable objective methods to evaluate the lesion. The severity of the disease is scored by doctors just through their visual examination. In this work, a system to take accurate images...... of dermatological lesions has been developed. Mathematical methods can be applied to these images to obtain values that summarize the lesion and help to track its evolution. The system is composed of two elements: precise image acquisition equipment and a statistical procedure to extract the lesions from...... the images. The system is tested on patients with the dermatological disease psoriasis. Temporal series of images are taken for each patient and the lesions are automatically extracted. Results indicate that the images obtained are a good source for deriving variables to track the lesion....

  1. Automatic Blastomere Recognition from a Single Embryo Image

    Directory of Open Access Journals (Sweden)

    Yun Tian

    2014-01-01

    Full Text Available The number of blastomeres of human day 3 embryos is one of the most important criteria for evaluating embryo viability. However, due to the transparency and overlap of blastomeres, it is a challenge to recognize blastomeres automatically in a single embryo image. This study proposes an approach based on least square curve fitting (LSCF) for automatic blastomere recognition from a single image. First, combining edge detection, deletion of multiple connected points, and dilation and erosion, an effective preprocessing method was designed to obtain the singly connected parts of the blastomere edges. Next, an automatic recognition method for blastomeres was proposed using least square circle fitting. This algorithm was tested on 381 embryo microscopic images obtained from the eight-cell period, and the results were compared with those provided by experts. The proportion of embryos recognized with zero errors was 21.59%, and the proportion of embryos in which the number of false recognitions was less than or equal to 2 was 83.16%. This experiment demonstrated that our method can efficiently and rapidly recognize the number of blastomeres from a single embryo image, without first reconstructing a three-dimensional model of the blastomeres; the method is simple and efficient.
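
    The least square circle fitting step can be illustrated with the classic Kasa formulation, which linearizes the circle equation x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2) so a center (a, b) and radius r come out of one linear solve (illustrative sketch, not the authors' code):

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit from edge-point coordinates."""
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    rhs = xs ** 2 + ys ** 2
    (c1, c2, c3), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b = c1 / 2.0, c2 / 2.0
    r = np.sqrt(c3 + a ** 2 + b ** 2)
    return a, b, r

# four points on a circle with center (2, -1) and radius 3
xs = np.array([5.0, 2.0, -1.0, 2.0])
ys = np.array([-1.0, 2.0, -1.0, -4.0])
```

    Fitting circles directly to partial edge arcs is what lets the method count overlapping, transparent blastomeres without a 3D reconstruction.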

  2. [Automatic houses detection with color aerial images based on image segmentation].

    Science.gov (United States)

    He, Pei-Pei; Wan, You-Chuan; Jiang, Peng-Rui; Gao, Xian-Jun; Qin, Jia-Xin

    2014-07-01

    In order to achieve automatic housing detection from high-resolution aerial imagery, this paper uses the color information and the spectral characteristics of roofing materials, together with image segmentation theory, to study an automatic housing detection method. First, the method converts the RGB color space to the HIS color space and uses the characteristics of each HIS component and the spectral characteristics of the roofing materials to segment the image, isolating red tiled roofs and gray cement roof areas; the initial segmented housing areas are obtained with the marker-based watershed algorithm. Then, region growing is conducted on the hue component from seed segment samples by calculating the average hue in each marked region. Finally, small spots are eliminated and rectangle fitting is applied to obtain a clear outline of the housing areas. Compared with traditional pixel-based region segmentation algorithms, the improved region-growing method proposed in this paper works in a one-dimensional color space, reducing computation without human intervention, and exploits the geometric information of neighboring pixels, so both the speed and the accuracy of the algorithm are significantly improved. A case study applied the method to high-resolution aerial images, and the experimental results demonstrate that it achieves high precision and reasonable robustness.
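
    The RGB-to-HIS conversion the method starts from follows the standard textbook formulas; a per-pixel sketch (components assumed normalized to [0, 1]):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to (hue, saturation, intensity)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.acos(max(-1.0, min(1.0, num / den)))
    if b > g:                         # hue lives on a full circle
        h = 2.0 * math.pi - h
    return h, s, i
```

    Working on the hue component alone is what reduces the region growing to a one-dimensional color space, as the abstract notes.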

  3. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation.

    Science.gov (United States)

    Chiu, Stephanie J; Li, Xiao T; Nicholas, Peter; Toth, Cynthia A; Izatt, Joseph A; Farsiu, Sina

    2010-08-30

    Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in Spectral Domain Optical Coherence Tomography images using graph theory and dynamic programming. Results show that this method accurately segments eight retinal layer boundaries in normal adult eyes, matching an expert grader more closely than a second expert grader does.
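
    The graph/dynamic-programming idea, tracing a layer boundary as a minimum-cost left-to-right path through the image, can be sketched as follows (simplified cost model with |row step| <= 1, not the paper's graph construction):

```python
def trace_boundary(cost):
    """Minimum-cost left-to-right path through a 2D cost image, |row step| <= 1."""
    rows, cols = len(cost), len(cost[0])
    acc = [[0.0] * cols for _ in range(rows)]   # accumulated cost
    back = [[0] * cols for _ in range(rows)]    # backpointers
    for r in range(rows):
        acc[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            best_prev, best_r = min(
                (acc[pr][c - 1], pr) for pr in (r - 1, r, r + 1) if 0 <= pr < rows)
            acc[r][c] = cost[r][c] + best_prev
            back[r][c] = best_r
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):            # walk the backpointers
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

    In practice the cost image is derived from intensity gradients so that dark-to-light layer transitions form cheap paths.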

  4. Automatic Image Segmentation based on MRF-MAP

    CERN Document Server

    Qiyang, Zhao

    2012-01-01

    Solving the Maximum a Posteriori problem on a Markov Random Field, MRF-MAP, is a prevailing method in recent interactive image segmentation tools. Although mathematically explicit in its computational targets, and impressive in its segmentation quality, MRF-MAP is hard to apply without interactive information from users, so it has rarely been adopted in automatic settings to date. In this paper, we present an automatic image segmentation algorithm, NegCut, based on an approximation to MRF-MAP. First we prove that MRF-MAP is NP-hard when the probabilistic models are unknown, and then present an approximation function in the form of minimum cuts on graphs with negative weights. Finally, the binary segmentation is taken from the largest eigenvector of the target matrix, computed with a tuned version of the Lanczos eigensolver. Our experiments show it is competitive in segmentation quality.

  5. Automatic and hierarchical segmentation of the human skeleton in CT images

    Science.gov (United States)

    Fu, Yabo; Liu, Shi; Li, H. Harold; Yang, Deshan

    2017-04-01

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic

  6. Automatic and hierarchical segmentation of the human skeleton in CT images.

    Science.gov (United States)

    Fu, Yabo; Liu, Shi; Li, Hui Harold; Yang, Deshan

    2017-02-14

    Accurate segmentation of each bone in the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium-level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing a stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to many limitations of CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of the femora and humeri are modeled to support patients in different body and limb postures. Segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for the cervical vertebrae, 0.92 mm for the thoracic vertebrae, and 1.45 mm for the pelvis bones.
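
    The Dice coefficient and point-to-surface error used as evaluation metrics above are standard quantities; a minimal NumPy sketch (an illustration, not the authors' implementation — the brute-force nearest-point search is an assumption suitable only for small point sets) is:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def point_to_surface_error(points, surface_points):
    """Mean distance from each point to its nearest surface point."""
    points = np.atleast_2d(points).astype(float)
    surface = np.atleast_2d(surface_points).astype(float)
    # pairwise distances, shape (n_points, n_surface)
    d = np.linalg.norm(points[:, None, :] - surface[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

The PSE is reported in the same units as the point coordinates, so with CT voxel coordinates converted to millimetres it yields the millimetre figures quoted above.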

  7. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    Science.gov (United States)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of rice. Plant height, stem number, and leaf color are well-known parameters that indicate rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a labor-saving method for rice growth diagnosis was proposed, which is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a calculation method that depends on automatic binarization alone, there is a possibility that the computed vegetation cover rate decreases even as the rice grows. In this paper, a calculation method for vegetation cover rate is proposed that is based on automatic binarization processing and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed
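
    The automatic binarization step described above can be illustrated with Otsu's method, a common automatic thresholding choice (an assumption for illustration — the abstract does not name its binarization algorithm); the cover rate is then the fraction of above-threshold pixels of a greenness index:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's automatic threshold for values in [0, 1]; pixels <= t are background."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                          # background class probability
    centers = (edges[:-1] + edges[1:]) / 2.0
    mu = np.cumsum(p * centers)                   # background cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)              # between-class variance
    return edges[np.argmax(sigma_b) + 1]          # upper edge of the best split bin

def cover_rate(green_index):
    """Fraction of pixels classified as plant by automatic binarization."""
    t = otsu_threshold(green_index.ravel())
    return float((green_index > t).mean())
```

In practice the `green_index` input would be a vegetation index (e.g. excess green) computed from the RGB camera image, normalized to [0, 1].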

  8. Automatic Moth Detection from Trap Images for Pest Management

    OpenAIRE

    Ding, Weiguang; Taylor, Graham

    2016-01-01

    Monitoring the number of insect pests is a crucial component in pheromone-based pest management systems. In this paper, we propose an automatic detection pipeline based on deep learning for identifying and counting pests in images taken inside field traps. Applied to a commercial codling moth dataset, our method shows promising performance both qualitatively and quantitatively. Compared to previous attempts at pest detection, our approach uses no pest-specific engineering which enables it to ...

  9. Automatic x-ray image contrast enhancement based on parameter auto-optimization.

    Science.gov (United States)

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan

    2017-09-06

    Insufficient image contrast in radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, and the block size and clip limit for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, CLAHE alone, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images that received a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in commercial clinical systems. When the proposed method is implemented in clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more
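
    The described filter chain (noise reduction, then a high-pass boost, then adaptive histogram equalization) can be sketched as follows. This is a simplified stand-in, not the paper's pipeline: a box blur replaces the noise filter, an unsharp mask with a tunable weighting factor replaces the Gaussian high-pass, and global histogram equalization stands in for true CLAHE:

```python
import numpy as np

def box_blur(img, radius=1):
    """Simple separable-free mean filter as a stand-in noise-reduction step."""
    k = 2 * radius + 1
    pad = np.pad(np.asarray(img, float), radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def highpass_boost(img, weight=0.5):
    """Unsharp-mask style high-pass: img + weight * (img - smoothed)."""
    return img + weight * (img - box_blur(img))

def hist_equalize(img, bins=256):
    """Global histogram equalization (simplified stand-in for CLAHE)."""
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=bins, range=(flat.min(), flat.max() + 1e-9))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(flat, edges[:-1], cdf).reshape(img.shape)

def enhance(img, weight=0.5):
    """Noise reduction -> high-pass -> equalization, mirroring the chain above."""
    return hist_equalize(highpass_boost(box_blur(img), weight))
```

In the paper the chain's parameters (smoothing weight, CLAHE block size and clip limit) are tuned per site by a constrained optimizer rather than fixed as here.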

  10. Automatic target recognition using polarization-sensitive thermal imaging

    Science.gov (United States)

    Chun, Cornell S. L.; Sadjadi, Firooz A.; Ferris, David D., Jr.

    1995-07-01

    The performance of automatic target recognition (ATR) systems using thermal infrared images is limited by the low contrast in intensity for terrestrial scenes. We are developing a thermal imaging technique where, in each image pixel, a combination of intensity and polarization data is captured simultaneously. In this paper, we demonstrate, using synthetic polarization images, that a combination of intensity and polarization data will significantly improve the performance of detection and classification functions in an ATR system. The images were generated using a ray tracing computer program, modified to calculate the polarization characteristics of thermal radiation emitted from surfaces. We developed novel polarization-sensitive target edge detection, segmentation, and recognition algorithms. A set of performance metrics for the evaluation showed that, for large ranges of viewing elevation and aspect angles, using a combination of polarization and intensity data significantly improves the performance of the algorithms over using only the intensity data.
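
    One common way to represent per-pixel polarization alongside intensity (an assumption for illustration — not necessarily the authors' sensor model) is to compute the linear Stokes parameters from intensities measured through a polarizer at 0°, 45°, 90°, and 135°, and from them the degree of linear polarization:

```python
import numpy as np

def stokes_from_polarizer_images(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP = sqrt(S1^2 + S2^2) / S0, per pixel (0 = unpolarized, 1 = fully polarized)."""
    s0, s1, s2 = stokes_from_polarizer_images(i0, i45, i90, i135)
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```

A DoLP image of this kind is one candidate for the extra per-pixel channel that the polarization-sensitive algorithms exploit.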

  11. Towards an automatic tool for resolution evaluation of mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    De Oliveira, J. E. E. [FUMEC, Av. Alfonso Pena 3880, CEP 30130-009 Belo Horizonte - MG (Brazil); Nogueira, M. S., E-mail: juliae@fumec.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos 6627, 31270-901, Belo Horizonte - MG (Brazil)

    2014-08-15

    Medical images are important for diagnosis purposes as they are related to patients' medical history and pathology. Breast cancer represents a leading cause of death among women worldwide, and its early detection is the most effective method of reducing mortality. To identify small structures with low density differences, a high image quality is required with the use of low doses of radiation. The analysis of the quality of the image obtained from a mammogram is performed on an image of a simulated breast, and this is a fundamental key point for a quality control program for mammography equipment. In a control program for mammographic equipment, besides the analysis of the quality of mammographic images, each element of the chain which composes the formation of the image is also analyzed: X-ray equipment, radiographic films, and operating conditions. This control allows an effective and efficient exam to be provided to the population, within the standards of quality required for the early detection of breast cancer. However, according to the State Program of Quality Control in Mammography of Minas Gerais, Brazil, only 40% of the mammography services have provided a simulated image with a minimum level of quality, thus reinforcing the need for monitoring the images. The reduction of morbidity and mortality indexes, with optimization and assurance of access to diagnosis and breast cancer treatment in the state of Minas Gerais, Brazil, may be the result of a mammographic exam which has a final image of good quality and whose evaluation is automatic rather than subjective. The reason is that one has to consider the hypothesis that humans are subjective when performing image analysis and that the evaluation of the image can be executed by a computer with objectivity. In 2007, in order to maintain the standard quality needed for mammography, the State Health Secretariat of Minas Gerais, Brazil, established a Program of Monthly Monitoring the

  12. Automatic abstraction of interference fringes with image technique

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An automatic abstraction technique for interference fringes used in phase-modulation and phase-scanning-modulation interferometers is presented. For the measurement of the amplitudes of interference fringes, the fringes are fitted and their central points are determined automatically according to their distribution rules. For the measurement of their phases, however, the fringes should be recognized and processed with different calculating algorithms and least-squares optimization methods depending on the scanning modulation mode. When this technique is used for the measurement of surface roughness, the measurement uncertainty is better than 5 nm and the repeatability is less than 5%.

  13. Automatic processing of unattended object features by functional connectivity

    Directory of Open Access Journals (Sweden)

    Katja Martina Mayer

    2013-05-01

    Full Text Available Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to the shape, colour, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses, which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioural level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localised several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction analyses revealed that several functionally-localised areas showed significant positive interactions with occipito-temporal and frontal areas that depended on block type. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.

  14. Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming.

    Science.gov (United States)

    Larocca, Francesco; Chiu, Stephanie J; McNabb, Ryan P; Kuo, Anthony N; Izatt, Joseph A; Farsiu, Sina

    2011-06-01

    Segmentation of anatomical structures in corneal images is crucial for the diagnosis and study of anterior segment diseases. However, manual segmentation is a time-consuming and subjective process. This paper presents an automatic approach for segmenting corneal layer boundaries in Spectral Domain Optical Coherence Tomography (SDOCT) images using graph theory and dynamic programming. Our approach is robust to the low signal-to-noise ratio and the different artifact types that can appear in clinical corneal images. We show that, relative to an expert grader, our method segments three corneal layer boundaries in normal adult eyes more accurately than a second expert grader, even in the presence of significant imaging outliers.
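
    The graph-theory-and-dynamic-programming idea — finding a layer boundary as a minimum-cost path across image columns — can be sketched as follows. This is a simplified version with vertical moves limited to adjacent rows; the paper's graph construction and cost function are more elaborate:

```python
import numpy as np

def min_cost_boundary(cost):
    """Dynamic-programming minimal-cost path across columns: one row per column,
    with vertical moves limited to +/-1 row between adjacent columns."""
    cost = np.asarray(cost, float)
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 1, rows - 1)
            prev = acc[lo:hi + 1, c - 1]
            j = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[j]
            back[r, c] = lo + j
    # backtrack from the cheapest endpoint in the last column
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

In a layer-segmentation setting, `cost` would be low along strong vertical intensity gradients so the recovered path traces the layer boundary.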

  15. The influence of automatization processes on organization structure

    Directory of Open Access Journals (Sweden)

    Vaceľ Rastislav

    2003-09-01

    Full Text Available Does organization structure influence processes? If so, to what extent? Is the approach to organization structures bounded by the aspect of hierarchy? These and similar questions are addressed by this contribution, which describes in detail the management of process uncertainty in dependence on the type of organization structure.

  16. Automatization techniques for processing biomedical signals using machine learning methods

    OpenAIRE

    Artés Rodríguez, Antonio

    2008-01-01

    The Signal Processing Group (Department of Signal Theory and Communications, University Carlos III, Madrid, Spain) offers the expertise of its members in the automatic processing of biomedical signals. The main advantages in this technology are the decreased cost, the time saved and the increased reliability of the results. Technical cooperation for the research and development with internal and external funding is sought.

  17. Image processing mini manual

    Science.gov (United States)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  18. Image Processing Software

    Science.gov (United States)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  19. Automatic nonrigid registration of whole body CT mice images.

    Science.gov (United States)

    Li, Xia; Yankeelov, Thomas E; Peterson, Todd E; Gore, John C; Dawant, Benoit M

    2008-04-01

    Three-dimensional intra- and intersubject registration of image volumes is important for tasks that include quantification of temporal/longitudinal changes, atlas-based segmentation, computing population averages, or voxel- and tensor-based morphometry. While a number of methods have been proposed to address this problem, few have focused on the problem of registering whole body image volumes acquired either from humans or small animals. These image volumes typically contain a large number of articulated structures, which makes registration more difficult than the registration of head images, to which the majority of registration algorithms have been applied. This article presents a new method for the automatic registration of whole body computed tomography (CT) volumes, which consists of two main steps. Skeletons are first brought into approximate correspondence with a robust point-based method. Transformations so obtained are refined with an intensity-based nonrigid registration algorithm that includes spatial adaptation of the transformation's stiffness. The approach has been applied to whole body CT images of mice, to CT images of the human upper torso, and to human head and neck CT images. To validate the authors' method on soft tissue structures, which are difficult to see in CT images, the authors use coregistered magnetic resonance images. They demonstrate that the approach they propose can successfully register image volumes even when these volumes are very different in size and shape or if they have been acquired with the subjects in different positions.
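
    The first step, bringing skeleton points into approximate correspondence, can be illustrated with the classic least-squares rigid alignment of corresponding point sets (the Kabsch/Procrustes solution — a standard technique shown here for illustration, not necessarily the robust point-based method the article uses):

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (rotation R, translation t) mapping
    corresponding source points onto target points (Kabsch/Procrustes)."""
    src = np.asarray(source, float)
    tgt = np.asarray(target, float)
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    h = (src - mu_s).T @ (tgt - mu_t)          # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    fix = np.eye(src.shape[1])
    fix[-1, -1] = d
    r = vt.T @ fix @ u.T
    t = mu_t - r @ mu_s
    return r, t
```

Given R and t, each source point p maps to R p + t; the residual after this rigid step is what the subsequent intensity-based nonrigid registration would then reduce.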

  20. Image automatic mosaics based on contour phase correlation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jing; HU Zhiping; LIU Zhitai; OU Zongying

    2007-01-01

    Planar image mosaicking is studied, and an automatic image mosaicking algorithm based on contour phase correlation is proposed in this paper. To begin with, by taking into account mere translations and rotations between images, a contour phase correlation algorithm is used to realize the preliminary alignment of images, and the initial projective transformation matrices are obtained. Then, an optimization algorithm is used to refine the initial projective transformation matrices and complete the precise image mosaicking. The contour phase correlation is an improvement on the conventional phase correlation in two respects. First, the contours of the images are extracted, and the phase correlation is applied to the contours instead of the whole original images. Second, when there are multiple peak values close to the maximum peak value in the δ function array, their corresponding translations can be regarded as candidate translations and evaluated separately, and the best translation can be determined by optimizing the conformability of the two images in the overlapping area. The running results show that the proposed algorithm can consistently yield high-quality mosaics, even in cases of poor or uneven lighting conditions, minor rotations, and other complicated displacements between images.
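
    The conventional phase correlation that the contour variant builds on estimates a pure translation from the peak of the normalized cross-power spectrum; a minimal NumPy sketch:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the circular (row, col) shift s such that img_b == roll(img_a, s),
    from the peak of the normalized cross-power spectrum."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = np.conj(fa) * fb
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks beyond the half-size to negative shifts
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(int(v) for v in shifts)
```

In the contour variant described above, the same correlation is applied to extracted contour images, and secondary peaks near the maximum are kept as candidate translations rather than discarded.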

  1. Automatic pelvis segmentation from x-ray images of a mouse model

    Science.gov (United States)

    Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham

    2017-05-01

    The automatic detection and quantification of skeletal structures has a variety of applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, preparing an initial pelvis mask, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.

  2. Alexithymia and automatic processing of emotional stimuli: a systematic review.

    Science.gov (United States)

    Donges, Uta-Susan; Suslow, Thomas

    2017-04-01

    Alexithymia is a personality trait characterized by difficulties in recognizing and verbalizing emotions and the utilization of a cognitive style that is oriented toward external events, rather than intrapsychic experiences. Alexithymia is considered a vulnerability factor influencing the onset and course of many psychiatric disorders. Given that emotions are, in general, elicited involuntarily and emerge without conscious effort, it is surprising that little attention in etiological considerations concerning alexithymia has been given to deficits in automatic emotion processing and their neurobiological bases. In this article, results from studies using behavioral or neurobiological research methods were systematically reviewed in which automatic processing of external emotional information was investigated as a function of alexithymia in healthy individuals. Twenty-two studies were identified through a literature search of the PsycINFO, PubMed, and Web of Science databases from 1990 to 2016. The review reveals deficits in the automatic processing of emotional stimuli in alexithymia at a behavioral and neurobiological level. The vast majority of the reviewed studies examined visual processing. The alexithymia facets externally oriented thinking and difficulties identifying feelings were found to be related to impairments in the automatic processing of threat-related facial expressions. Alexithymic individuals manifest low reactivity to barely visible negative emotional stimuli in brain regions responsible for appraisal, encoding, and affective response, e.g., the amygdala, occipitotemporal areas, and insula. Against this background, it appears plausible to assume that deficits in automatic emotion processing could be factors contributing to alexithymic personality characteristics. Directions for future research on alexithymia and automatic emotion perception are suggested.

  3. Automatic Discrimination of Oil-Bearing Reservoirs from Water-Bearing Reservoirs Based on Neural Networks and Image Processing Technology%基于神经网络与图象处理技术的油水层综合判别

    Institute of Scientific and Technical Information of China (English)

    许少华; 梁久祯; 麻成斗; 孙文德

    2001-01-01

    In this paper, we propose an automatic method for distinguishing oil-bearing reservoirs from water-bearing reservoirs based on neural networks and image processing technology. First, digitized well-log curves and stratum parameters are preprocessed and converted into binary dot-matrix image patterns. Then, by compressing and encoding the dot-matrix data, the stratum pattern features represented by the logging curves are extracted and stored. Finally, a multilayer feedforward neural network is trained with a combination of the BP algorithm and a genetic algorithm. The resulting network is stable, converges quickly during learning, and has strong memory and generalization ability, so the model is well suited to the problem of discriminating oil-bearing from water-bearing layers. Processing the data of 8 wells in the Putaohua oil layer of the Shengping field, operated by the eighth production plant of the Daqing oil field, produced very good results.

  4. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    Science.gov (United States)

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma, but detection at its earliest stage and subsequent treatment can help patients prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection, but this method requires manual post-imaging modifications that are time-consuming and subjective, relying on image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer-aided approach for automatic glaucoma detection based on a Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric properties (e.g. morphometric properties) and non-geometric properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions: automatic localisation of the optic disc, automatic segmentation of the disc, and classification between normal and glaucoma based on the geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4%, and the accuracy of detecting suspicion of glaucoma in SLO images is 93.9%.

  5. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolesce......-attentive change detection on social-emotional information.

  6. Difference image analysis: Automatic kernel design using information criteria

    CERN Document Server

    Bramich, D M; Alsubai, K A; Bachelet, E; Mislis, D; Parley, N

    2015-01-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially-invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularisation. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unreg...
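
    For a spatially-invariant kernel composed of delta basis functions, the kernel solution reduces to linear least squares: each kernel pixel contributes a shifted copy of the reference image as one column of the design matrix. A minimal sketch (circular shifts are used at the borders for brevity, an assumption real DIA implementations avoid by cropping the border region):

```python
import numpy as np

def solve_dia_kernel(ref, target, half=1):
    """Solve for a spatially-invariant convolution kernel on a grid of
    delta basis functions by linear least squares: ref (*) K ~ target."""
    size = 2 * half + 1
    # design matrix: one column per kernel pixel = correspondingly shifted reference
    columns = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            columns.append(np.roll(ref, (dy, dx), axis=(0, 1)).ravel())
    a = np.stack(columns, axis=1)
    coeffs, *_ = np.linalg.lstsq(a, target.ravel(), rcond=None)
    return coeffs.reshape(size, size)
```

The model selection criteria discussed above would then compare fits like this one across kernel sizes, penalizing model complexity against residual error.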

  7. Bayesian Framework for Automatic Image Annotation Using Visual Keywords

    Science.gov (United States)

    Agrawal, Rajeev; Wu, Changhua; Grosky, William; Fotouhi, Farshad

    In this paper, we propose a Bayesian probability based framework, which uses visual keywords and already available text keywords to automatically annotate the images. Taking the cue from document classification, an image can be considered as a document and objects present in it as words. Using this concept, we can create visual keywords by dividing an image into tiles based on a certain template size. Visual keywords are simple vector quantization of small-sized image tiles. We estimate the conditional probability of a text keyword in the presence of visual keywords, described by a multivariate Gaussian distribution. We demonstrate the effectiveness of our approach by comparing predicted text annotations with manual annotations and analyze the effect of text annotation length on the performance.
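
    The tile-based visual-keyword construction and the Gaussian scoring can be sketched as follows. This is a simplified illustration: a fixed codebook stands in for the vector-quantization training, and in the paper the Gaussian log-density would be evaluated on visual-keyword statistics to score each candidate text keyword:

```python
import numpy as np

def image_tiles(img, tile=8):
    """Split a grayscale image into non-overlapping tile vectors."""
    h, w = img.shape
    h, w = h - h % tile, w - w % tile
    t = img[:h, :w].reshape(h // tile, tile, w // tile, tile)
    return t.transpose(0, 2, 1, 3).reshape(-1, tile * tile)

def visual_keywords(tiles, codebook):
    """Assign each tile to its nearest codebook vector (vector quantization)."""
    d = np.linalg.norm(tiles[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def gaussian_loglik(x, mean, cov):
    """Log-density of a multivariate Gaussian, used to score a text keyword
    given visual-keyword features."""
    k = len(mean)
    diff = np.asarray(x, float) - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + diff @ inv @ diff)
```

Annotation then amounts to ranking text keywords by their (log-)likelihood given the observed visual keywords, combined with the keywords' priors via Bayes' rule.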

  8. FULLY AUTOMATIC FRAMEWORK FOR SEGMENTATION OF BRAIN MRI IMAGE

    Institute of Scientific and Technical Information of China (English)

    Lin Pan; Zheng Chongxun; Yang Yong; Gu Jianwen

    2005-01-01

    Objective To propose an automatic framework for the segmentation of brain MRI images. Methods The brain MRI image segmentation framework consists of a three-step segmentation procedure. First, non-brain structures are removed by a level set method. Then, non-uniformity is corrected with a method based on computing estimates of tissue intensity variation. Finally, a statistical model based on Markov random fields is used for MRI brain image segmentation. The brain tissue can be classified into cerebrospinal fluid, white matter, and gray matter. Results To evaluate the proposed method, we performed two sets of experiments, one on simulated MR data and another on real MR brain data. Conclusion The efficacy of the brain MRI image segmentation framework has been demonstrated by the extensive experiments. In the future, we are also planning a large-scale clinical evaluation of this segmentation framework.
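
    The final tissue-classification step can be illustrated with a much simpler stand-in than the paper's Markov random field model: 1-D k-means on voxel intensities, which likewise partitions brain voxels into three intensity classes (CSF, gray matter, white matter) but ignores the spatial smoothness an MRF enforces:

```python
import numpy as np

def kmeans_tissue_labels(intensities, k=3, init=None, iters=50, seed=0):
    """1-D k-means on voxel intensities: a simplified stand-in for the
    statistical CSF / gray matter / white matter classification step."""
    x = np.asarray(intensities, float).ravel()
    if init is None:
        rng = np.random.default_rng(seed)
        centers = np.sort(rng.choice(x, size=k, replace=False))
    else:
        centers = np.asarray(init, float)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

An MRF-based classifier refines exactly this kind of intensity clustering by adding a prior that neighboring voxels tend to share the same tissue label.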

  9. 基于数字图像处理的自动考勤系统%Automatic Attendance System Based on Digital Image Processing

    Institute of Scientific and Technical Information of China (English)

    吴兴蛟; 吴晟; 刘安琪; 普丽; 南峰涛

    2015-01-01

    At present, class attendance in colleges and universities is managed by having the course teacher call the roll, so the timeliness and effectiveness of the records are usually not guaranteed, and student attendance is not measured by a fair metric. This paper therefore designs an intelligent attendance system based on digital images to improve the accuracy of attendance taking and the students' attendance rate. The system is designed for online leave requests and class attendance management, making full use of informatization and automation to better enforce school regulations and save class time, thereby raising the attendance rate. Operation results show that using the system can greatly reduce wasted class time and effectively constrain student attendance.

  10. Cascaded-Automatic Segmentation for Schistosoma japonicum eggs in images of fecal samples.

    Science.gov (United States)

    Zhang, Junjie; Lin, Yunyu; Liu, Yan; Li, Zhengyu; Li, Zhong; Hu, Shan; Liu, Zhiyuan; Lin, Dandan; Wu, Zhongdao

    2014-09-01

    To recognize parasite eggs automatically, automatic segmentation of parasite egg images is very important for feature extraction and genus classification. A Cascaded-Automatic Segmentation approach was proposed. First, the image contrast between the border of an egg and its background was strengthened for all samples by the Radon-Like Features algorithm, and the enhanced image was thresholded into a binary image to obtain an initial set. Then, elliptical targets were located with the Randomized Hough Transform (RHT). The fitted elliptical border was used as the initial border, and the accurate border of a Schistosoma japonicum egg was finally segmented using an Active Contour Model (Snake). Seventy-three images of fecal samples were examined; 61 contained a parasite egg and 12 did not. Although illumination, noise pollution, egg boundary definition, and egg position differed, all eggs were segmented and labeled accurately. The results proved that accurate borders of S. japonicum eggs can be recognized precisely using the proposed method, and that the method remains robust even in images with heavy noise. This indicates that the proposed method can overcome the disadvantages of the traditional threshold segmentation method, which has limited adaptability to images with heavy background noise. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-01-01

    Full Text Available This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used.

  12. Image Processing in Intelligent Medical Robotic Systems

    Directory of Open Access Journals (Sweden)

    Shashev Dmitriy

    2016-01-01

    Full Text Available The paper deals with the use of high-performance computing systems with parallel-operation architecture in intelligent medical systems such as medical robotic systems. A medical robotic system based on a computer vision system is an automatic control system with strict requirements on reliability, accuracy and speed of performance. The paper shows the basic block diagram of an automatic control system based on a computer vision system, and considers the possibility of using a reconfigurable computing environment in such systems; the design principles of the reconfigurable computing environment allow the reliability, accuracy and performance of the whole system to be improved many times over. The article contains a brief overview and the theory of the research, and demonstrates the use of reconfigurable computing environments for image preprocessing, namely morphological image processing operations. Results of a successful simulation of the reconfigurable computing environment and an implementation of the morphological image processing operations on a test image in MATLAB Simulink are presented.
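One of the morphological preprocessing operations mentioned above can be sketched in plain code: binary erosion with a 3x3 structuring element. In the article these operations run on a reconfigurable computing environment; this pure-Python version (which simply leaves border pixels at 0) is only for clarity.

```python
# Minimal binary erosion with a 3x3 structuring element: a pixel
# survives only if its entire 3x3 neighbourhood is set.

def erode(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
# Eroding the 3x3 square leaves only its center pixel set.
print(erode(img))
```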

  13. Automatic script identification from images using cluster-based templates

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
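The cluster-template matching described above can be sketched as nearest-centroid classification. The symbols here are toy 2-D feature vectors rather than real glyph bitmaps, and the template values are invented for illustration.

```python
# Sketch of cluster-based script identification: each script is
# represented by centroid templates of its symbol clusters; a new
# document is assigned to the script whose templates best match its
# symbols.

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n
                 for i in range(len(vectors[0])))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(symbols, script_templates):
    """Score each script by the mean distance of every document symbol
    to its nearest template; return the best-matching script name."""
    def score(templates):
        return sum(min(dist(s, t) for t in templates)
                   for s in symbols) / len(symbols)
    return min(script_templates, key=lambda name: score(script_templates[name]))

templates = {
    "Roman":    [centroid([(0.1, 0.9), (0.2, 0.8)])],
    "Cyrillic": [centroid([(0.9, 0.1), (0.8, 0.2)])],
}
doc_symbols = [(0.15, 0.85), (0.22, 0.78), (0.1, 0.95)]
print(identify(doc_symbols, templates))   # -> Roman
```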

  14. Automatic/Control Processing and Attention.

    Science.gov (United States)

    1982-04-01


  15. Automatic view synthesis by image-domain-warping.

    Science.gov (United States)

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video on demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle, which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and operates fully automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.

  16. Automatic removal of manually induced artefacts in ultrasound images of thyroid gland.

    Science.gov (United States)

    Narayan, Nikhil S; Marziliano, Pina; Hobbs, Christopher G L

    2013-01-01

    Manually induced artefacts, like caliper marks and anatomical labels, render an ultrasound (US) image incapable of being subjected to further processes of Computer Aided Diagnosis (CAD). In this paper, we propose a technique to remove these artefacts and restore the image as accurately as possible. The technique finds application as a pre-processing step when developing unsupervised segmentation algorithms for US images that deal with automatic estimation of the number of segments and clustering. The novelty of the algorithm lies in the image processing pipeline chosen to automatically identify the artefacts and is developed based on the histogram properties of the artefacts. The algorithm was able to successfully restore the images to a high quality when it was executed on a dataset of 18 US images of the thyroid gland on which the artefacts were induced manually by a doctor. Further experiments on an additional dataset of 10 unmarked US images of the thyroid gland on which the artefacts were simulated using Matlab showed that the restored images were again of high quality with a PSNR > 38 dB and free of any manually induced artefacts.
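The core idea lends itself to a small sketch: burnt-in caliper marks and labels sit at near-saturated intensities that stand out in the histogram, so flagged pixels can be restored from their unflagged neighbours. The threshold and the neighbour-median restoration below are assumptions for illustration; the paper's actual pipeline is more elaborate.

```python
# Hedged sketch: flag near-saturated artefact pixels and restore each
# one with the median of its unflagged 3x3 neighbours.
from statistics import median

def restore(img, thresh=250):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh:          # artefact pixel
                neigh = [img[j][i]
                         for j in range(max(0, y - 1), min(h, y + 2))
                         for i in range(max(0, x - 1), min(w, x + 2))
                         if img[j][i] < thresh]
                if neigh:
                    out[y][x] = median(neigh)
    return out

img = [[40, 42, 41],
       [43, 255, 40],   # 255 = burnt-in caliper mark
       [41, 39, 42]]
print(restore(img)[1][1])   # -> 41.0
```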

  17. Automatic anatomy recognition on CT images with pathology

    Science.gov (United States)

    Huang, Lidong; Udupa, Jayaram K.; Tong, Yubing; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Body-wide anatomy recognition on CT images with pathology becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem because different diseases result in various abnormalities in object shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on near-normal diagnostic CT images of 35 organs in different body regions. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies, as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps - model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new strategy for object recognition to handle abnormal images. In the model building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region, based on 50 near-normal images used for model building and 20 abnormal images used for object recognition, show that object localization accuracy within 2 voxels for liver and spleen and 3 voxels for kidney can be achieved with the new strategy.

  18. Automatic medical X-ray image classification using annotation.

    Science.gov (United States)

    Zare, Mohammad Reza; Mueen, Ahmed; Seng, Woo Chaw

    2014-02-01

    The demand for automatic classification of medical X-ray images is rising faster than ever. In this paper, an approach is presented to gain a high accuracy rate for those classes of a medical database with a high ratio of intraclass variability and interclass similarity. The classification framework was constructed via annotation using the following three techniques: annotation by binary classification, annotation by probabilistic latent semantic analysis, and annotation using top similar images. Next, the final annotation was constructed by applying ranking similarity on the annotated keywords produced by each technique. The final annotation keywords were then divided into three levels according to the body region, the specific bone structure in the body region, and the imaging direction. Different weights were given to each level of the keywords; these were then used to calculate the weightage for each category of medical images based on their ground truth annotation. The weightage computed from the generated annotation of a query image was compared with the weightage of each category of medical images, and the query image was assigned to the category with the closest weightage. The average accuracy rate reported is 87.5%.
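The final assignment step can be sketched in a few lines: keywords at the three levels (body region, bone structure, imaging direction) carry different weights, and a query goes to the category whose weightage is closest. The weights and category values below are illustrative assumptions, not the paper's learned values.

```python
# Sketch of weightage-based category assignment for annotated X-rays.
LEVEL_WEIGHTS = {"region": 3.0, "structure": 2.0, "direction": 1.0}

def weightage(keywords):
    """keywords: list of (level, matched) pairs for one image."""
    return sum(LEVEL_WEIGHTS[level] for level, matched in keywords if matched)

def assign_category(query_kw, category_weightages):
    wq = weightage(query_kw)
    return min(category_weightages,
               key=lambda c: abs(category_weightages[c] - wq))

# Hypothetical ground-truth weightages per category.
categories = {"chest-frontal": 6.0, "hand-lateral": 4.0, "skull-frontal": 5.0}
query = [("region", True), ("structure", True), ("direction", True)]
print(assign_category(query, categories))   # -> chest-frontal
```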

  19. Automatic color based reassembly of fragmented images and paintings.

    Science.gov (United States)

    Tsamoura, Efthymia; Pitas, Ioannis

    2010-03-01

    The problem of reassembling image fragments arises in many scientific fields, such as forensics and archaeology. In the field of archaeology, the pictorial excavation findings are almost always in the form of painting fragments. The manual execution of this task is very difficult, as it requires great amount of time, skill and effort. Thus, the automation of such a work is very important and can lead to faster, more efficient, painting reassembly and to a significant reduction in the human effort involved. In this paper, an integrated method for automatic color based 2-D image fragment reassembly is presented. The proposed 2-D reassembly technique is divided into four steps. Initially, the image fragments which are probably spatially adjacent, are identified utilizing techniques employed in content based image retrieval systems. The second operation is to identify the matching contour segments for every retained couple of image fragments, via a dynamic programming technique. The next step is to identify the optimal transformation in order to align the matching contour segments. Many registration techniques have been evaluated to this end. Finally, the overall image is reassembled from its properly aligned fragments. This is achieved via a novel algorithm, which exploits the alignment angles found during the previous step. In each stage, the most robust algorithms having the best performance are investigated and their results are fed to the next step. We have experimented with the proposed method using digitally scanned images of actual torn pieces of paper image prints and we produced very satisfactory reassembly results.

  20. A Novel, Automatic Quality Control Scheme for Real Time Image Transmission

    Directory of Open Access Journals (Sweden)

    S. Ramachandran

    2002-01-01

    Full Text Available A novel scheme to compute energy on-the-fly and thereby control the quality of the image frames dynamically is presented along with its FPGA implementation. This scheme is suitable for incorporation in image compression systems such as video encoders. In this new scheme, processing is automatically stopped when the desired quality is achieved for the image being processed by using a concept called pruning. Pruning also increases the processing speed by a factor of more than two when compared to the conventional method of processing without pruning. An MPEG-2 encoder implemented using this scheme is capable of processing good quality monochrome and color images of sizes up to 1024 × 768 pixels at the rate of 42 and 28 frames per second, respectively, with a compression ratio of over 17:1. The encoder is also capable of working in the fixed pruning level mode with user programmable features.
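The pruning concept can be illustrated in software, though the paper describes an FPGA implementation: coefficient energy is accumulated on the fly and processing stops once a target energy fraction (the quality level) is reached. The coefficient values and the 0.95 target below are illustrative assumptions.

```python
# Sketch of energy-based pruning: stop processing a coefficient block
# once the accumulated energy reaches the desired quality fraction.

def prune_block(coeffs, quality=0.95):
    """Return how many coefficients are needed to reach the target
    energy fraction; the rest are pruned (treated as zero)."""
    total = sum(c * c for c in coeffs)
    acc = 0.0
    for n, c in enumerate(coeffs, start=1):
        acc += c * c
        if acc >= quality * total:
            return n           # processing stops here
    return len(coeffs)

# Zig-zag ordered DCT-like coefficients: energy is concentrated in the
# first few low-frequency terms, so pruning kicks in early.
coeffs = [100, 40, 12, 6, 3, 2, 1, 1]
print(prune_block(coeffs))   # -> 2
```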

  1. A Simple Blueprint for Automatic Boolean Query Processing.

    Science.gov (United States)

    Salton, G.

    1988-01-01

    Describes a new Boolean retrieval environment in which an extended soft Boolean logic is used to automatically construct queries from original natural language formulations provided by users. Experimental results that compare the retrieval effectiveness of this method to conventional Boolean and vector processing are discussed. (27 references)…

  2. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of Diabetic Retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology have been successfully classified by an MLP neural network (correct classification of 84%).

  3. Hybrid Generative/Discriminative Learning for Automatic Image Annotation

    CERN Document Server

    Yang, Shuang Hong; Zha, Hongyuan

    2012-01-01

    Automatic image annotation (AIA) raises tremendous challenges to machine learning as it requires modeling of data that are both ambiguous in input and output, e.g., images containing multiple objects and labeled with multiple semantic tags. Even more challenging is that the number of candidate tags is usually huge (as large as the vocabulary size) yet each image is only related to a few of them. This paper presents a hybrid generative-discriminative classifier to simultaneously address the extreme data-ambiguity and overfitting-vulnerability issues in tasks such as AIA. Particularly: (1) an Exponential-Multinomial Mixture (EMM) model is established to capture both the input and output ambiguity and in the meanwhile to encourage prediction sparsity; and (2) the prediction ability of the EMM model is explicitly maximized through discriminative learning that integrates variational inference of graphical models and the pairwise formulation of ordinal regression. Experiments show that our approach achieves both su...

  4. Image Processing Software

    Science.gov (United States)

    Bosio, M. A.

    1990-11-01

    ABSTRACT: A brief description of astronomical image processing software is presented. This software was developed on a Digital Micro Vax II computer system. Keywords: DATA ANALYSIS - IMAGE PROCESSING

  5. The development of automatic associative processes and children's false memories.

    Science.gov (United States)

    Wimmer, Marina C; Howe, Mark L

    2009-12-01

    We investigated children's ability to generate associations and how automaticity of associative activation unfolds developmentally. Children generated associative responses using a single associate paradigm (Experiment 1) or a Deese/Roediger-McDermott (DRM)-like multiple associates paradigm (Experiment 2). The results indicated that children's ability to generate meaningful word associates, and the automaticity with which they were generated, increased between 5, 7, and 11 years of age. These findings suggest that children's domain-specific knowledge base and the associative connections among related concepts are present and continue to develop from a very early age. Moreover, there is an increase in how these concepts are automatically activated with age, something that results from domain-general developments in speed of processing. These changes are consistent with the neurodevelopmental literature and together may provide a more complete explanation of the development of memory illusions.

  6. Correlation analysis-based image segmentation approach for automatic agriculture vehicle

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    It is important to segment images correctly to extract guidance information for an automatic agriculture vehicle. If the computer can determine where the crops are, the guidance line can be extracted easily. Images were divided into small rectangular windows, and a pair of 1-D arrays was constructed in each small window. The correlation coefficients of the small windows formed the features used to segment the images. The results showed that correlation analysis is a promising approach for processing complex farmland scenes for a guidance system, and further correlation analysis methods deserve research.
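The window-level feature can be sketched directly: build a pair of 1-D arrays from each small window (here, toy per-row statistics of two channels) and use their Pearson correlation coefficient to label the window. The channels and the 0.5 decision threshold are illustrative assumptions.

```python
# Sketch: classify a window as crop or soil by the correlation of a
# pair of 1-D arrays extracted from it.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def label_window(a, b, thresh=0.5):
    return "crop" if pearson(a, b) > thresh else "soil"

green = [90, 95, 110, 120, 130]     # strongly co-varying toy channels
nir   = [45, 48, 55, 60, 66]
print(label_window(green, nir))      # -> crop
```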

  7. Automatic dental arch detection and panoramic image synthesis from CT images.

    Science.gov (United States)

    Sa-Ing, Vera; Wangkaoom, Kongyot; Thongvigitmanee, Saowapak S

    2013-01-01

    Due to its accurate 3D information, computed tomography (CT), especially cone-beam CT or dental CT, has been widely used for diagnosis and treatment planning in dentistry. Axial images acquired from both medical and dental CT scanners can generate synthetic panoramic images similar to typical 2D panoramic radiographs. However, the conventional way to reconstruct the simulated panoramic images is to manually draw the dental arch on axial images. In this paper, we propose a new fast algorithm for automatic detection of the dental arch. Once the dental arch is computed, a series of synthetic panoramic images as well as a ray-sum panoramic image can be generated automatically. We tested the proposed algorithm on 120 CT axial images, and all of them provided a decent estimate of the dental arch. The results show that our proposed algorithm can mostly detect the correct dental arch.

  8. Automatic extraction of disease-specific features from Doppler images

    Science.gov (United States)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, a continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient, which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows strong agreement, as well as identification of new cases missed by the echocardiographers.

  9. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  10. On the Control of Automatic Processes: A Parallel Distributed Processing Account of the Stroop Effect.

    Science.gov (United States)

    Cohen, Jonathan D.; And Others

    1990-01-01

    It is proposed that attributes of automatization depend on the strength of a processing pathway, and that strength increases with training. With the Stroop effect as an example, automatic processes are shown through simulation to be continuous and to emerge gradually with practice. (SLD)

  11. Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.

    Science.gov (United States)

    Frieauff, W; Martus, H J; Suter, W; Elhajouji, A

    2013-01-01

    The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure, as well as flow cytometric approaches, have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis, where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as the cytotoxicity parameter and for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The ability to analyse 24 slides automatically within 65 h over a weekend, together with the high reproducibility of the results, makes automatic image processing a powerful tool for micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS, which supports various assays at Novartis.

  12. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances that have been made in academia. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  13. Automatically designed machine vision system for the localization of CCA transverse section in ultrasound images.

    Science.gov (United States)

    Benes, Radek; Karasek, Jan; Burget, Radim; Riha, Kamil

    2013-01-01

    The common carotid artery (CCA) is a source of important information that doctors can use to evaluate a patient's health. The most often measured parameters are arterial stiffness, lumen diameter, wall thickness, and other parameters whose variation with time is usually measured. Unfortunately, the manual measurement of dynamic parameters of the CCA is time consuming, and therefore, for practical reasons, the only alternative is an automatic approach. The initial localization of the artery is important and must precede the main measurement. This article describes a novel method for the localization of the CCA in the transverse section of a B-mode ultrasound image. The method was designed automatically by using grammar-guided genetic programming (GGGP), which searches for the best possible combination of simple image processing tasks (independent building blocks); the best solution is the combination with the highest detection precision. The method is tested on a validation database of CCA images that was specially created for this purpose and released for use by other scientists. The resulting success of the proposed solution was 82.7%, which exceeded the current state of the art by 4%, while the computation time requirements were acceptable. The paper also describes the automatic method used in designing the proposed solution, which provides a universal approach to designing complex solutions with the support of evolutionary algorithms.

  14. Towards Automatic Processing of Virtual City Models for Simulations

    Science.gov (United States)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice were associated with an extremely high, and therefore uneconomical, manual effort for model processing. Using different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increases the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.
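The Coons surfaces mentioned above have a compact closed form: a bilinearly blended Coons patch interpolates four boundary curves by summing two ruled surfaces and subtracting the bilinear interpolant of the corners. The flat-square boundaries below are a degenerate test case chosen for illustration.

```python
# A minimal bilinearly blended Coons patch: the patch interpolates the
# boundary curves c0(u), c1(u) (bottom/top) and d0(v), d1(v)
# (left/right), whose corner points must agree.

def coons(u, v, c0, c1, d0, d1):
    """Evaluate the Coons patch at (u, v) in [0,1]^2."""
    def lerp(p, q, t):
        return tuple(a + t * (b - a) for a, b in zip(p, q))
    ruled_u = lerp(c0(u), c1(u), v)          # blend bottom/top curves
    ruled_v = lerp(d0(v), d1(v), u)          # blend left/right curves
    bilin = lerp(lerp(c0(0.0), c0(1.0), u),  # bilinear corner term
                 lerp(c1(0.0), c1(1.0), u), v)
    return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, bilin))

# Flat unit square as a degenerate example: every boundary is a straight
# line, so the patch must reproduce the plane z = 0 exactly.
c0 = lambda u: (u, 0.0, 0.0)
c1 = lambda u: (u, 1.0, 0.0)
d0 = lambda v: (0.0, v, 0.0)
d1 = lambda v: (1.0, v, 0.0)
print(coons(0.5, 0.5, c0, c1, d0, d1))   # -> (0.5, 0.5, 0.0)
```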

  15. Automatic segmentation of brain images: selection of region extraction methods

    Science.gov (United States)

    Gong, Leiguang; Kulikowski, Casimir A.; Mezrich, Reuben S.

    1991-07-01

    In automatically analyzing brain structures from an MR image, the choice of low-level region extraction methods depends on the characteristics of both the target object and the surrounding anatomical structures in the image. The authors have experimented with local thresholding, global thresholding, and other techniques, using various types of MR images for extracting the major brain landmarks and different types of lesions. This paper describes specifically a local binary thresholding method and a new global multiple thresholding technique developed for MR image segmentation and analysis. The initial testing results on their segmentation performance are presented, followed by a comparative analysis of the two methods and their ability to extract different types of normal and abnormal brain structures -- the brain matter itself, tumors, regions of edema surrounding lesions, multiple sclerosis lesions, and the ventricles of the brain. The analysis and experimental results show that the global multiple thresholding techniques are more than adequate for extracting regions that correspond to the major brain structures, while local binary thresholding is helpful for more accurate delineation of small lesions such as those produced by MS, and for the precise refinement of lesion boundaries. The detection of other landmarks, such as the interhemispheric fissure, may require other techniques, such as line-fitting. These experiments have led to the formulation of a set of generic computer-based rules for selecting the appropriate segmentation packages for particular types of problems, which form the basis for further development of an innovative knowledge-based, goal-directed biomedical image analysis framework. The system will carry out the selection automatically for a given specific analysis task.
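Global multiple thresholding in its simplest form is easy to sketch: two intensity cut-offs partition the whole image into three regions. The cut-off values and toy image below are illustrative assumptions; the paper derives its thresholds from image statistics.

```python
# Sketch of global multiple thresholding: label each pixel by which
# intensity band it falls into (e.g. background / tissue / bright
# structure).

def multi_threshold(img, t1, t2):
    """Label each pixel 0 (< t1), 1 (t1..t2-1), or 2 (>= t2)."""
    return [[0 if p < t1 else 1 if p < t2 else 2 for p in row]
            for row in img]

img = [[10, 80, 200],
       [15, 90, 210],
       [12, 85, 205]]
labels = multi_threshold(img, 50, 150)
print(labels)   # -> [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
```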

  16. Automatic Grid Processing of Large Photographic Plates (With 3 Figures)

    Science.gov (United States)

    Dumoulin, B.; Québatte, J.; West, R. M.

    In an earlier article (42.034.211), the authors have described a new method for the development of large photographic plates. This method, which is called grid processing, is based on the rapid motion of a metallic grid in the developer, very close to the emulsion surface. Experience has shown that this method is superior to the classical tray rocker. The authors here report the continuation of the tests, now with the aid of the prototype of an automatic grid processing machine.

  17. Automatic quantitative analysis of cardiac MR perfusion images

    Science.gov (United States)

    Breeuwer, Marcel M.; Spreeuwers, Luuk J.; Quist, Marcel J.

    2001-07-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later on be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
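The per-position measurements on time-intensity profiles can be sketched as follows: from the first-pass curve, derive peak enhancement, time-to-peak, and mean upslope. The frame intensities, the baseline definition, and the parameter names below are illustrative assumptions, not the paper's exact measurement set.

```python
# Sketch: simple first-pass perfusion parameters from one
# time-intensity profile (one position in the myocardium).

def perfusion_params(profile, baseline_frames=3):
    base = sum(profile[:baseline_frames]) / baseline_frames
    peak = max(profile)
    t_peak = profile.index(peak)
    # mean upslope from end of baseline to the peak frame
    upslope = (peak - base) / (t_peak - baseline_frames + 1)
    return {"peak_enhancement": peak - base,
            "time_to_peak": t_peak,
            "mean_upslope": upslope}

profile = [10, 11, 9, 20, 45, 70, 60, 50]  # toy intensity per frame
print(perfusion_params(profile))
```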

  18. Automatic image annotation and retrieval using group sparsity.

    Science.gov (United States)

    Zhang, Shaoting; Huang, Junzhou; Li, Hongsheng; Metaxas, Dimitris N

    2012-06-01

Automatically assigning relevant text keywords to images is an important problem. Many algorithms have been proposed in the past decade and have achieved good performance. Efforts have focused on model representations of keywords, whereas properties of features have not been well investigated. In most cases, a group of features is preselected, yet important feature properties are not exploited when selecting features. In this paper, we introduce a regularization-based feature selection algorithm that leverages both the sparsity and clustering properties of features, and incorporate it into the image annotation task. Using this group-sparsity-based method, a whole group of features [e.g., red, green, blue (RGB) or hue, saturation, value (HSV)] is either selected or removed, so the group need not be extracted when new data arrive. A novel approach is also proposed to iteratively obtain similar and dissimilar pairs from both the keyword similarity and the relevance feedback; keyword similarity is thus modeled in the annotation framework. We also show that our framework can be employed in image retrieval tasks by selecting different image pairs. Extensive experiments compare the performance of features, feature combinations, and regularization-based feature selection methods on the image annotation task, giving insight into the properties of features in this task. The experimental results demonstrate that the group-sparsity-based method is more accurate and stable than the others.
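The all-or-nothing selection of a feature group can be illustrated with the proximal operator of the L2,1 (group-lasso) penalty, which scales down each group and zeroes it entirely when the group norm falls below the regularization weight. This is a generic group-lasso sketch, not the authors' optimizer; `group_soft_threshold` and its argument layout are assumptions.

```python
import math

def group_soft_threshold(w, groups, lam):
    """Proximal step for the L2,1 penalty: each feature group (e.g. all
    RGB histogram bins) is shrunk together and kept or zeroed as a whole.

    w      -- flat weight vector
    groups -- list of index lists, one per feature group
    lam    -- regularization weight
    """
    out = list(w)
    for idx in groups:
        norm = math.sqrt(sum(w[i] ** 2 for i in idx))
        # shrink the group norm by lam; drop the whole group if it vanishes
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in idx:
            out[i] = w[i] * scale
    return out
```

A group whose weights are collectively small is removed, which is what allows skipping extraction of that feature group on new data.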

  19. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  20. STUDY OF AUTOMATIC IMAGE RECTIFICATION AND REGISTRATION OF SCANNED HISTORICAL AERIAL PHOTOGRAPHS

    Directory of Open Access Journals (Sweden)

    H. R. Chen

    2016-06-01

Full Text Available Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist orientation. In our research, we developed an automatic process for matching historical aerial images using SIFT (Scale Invariant Feature Transform), handling the great quantity of images by computer vision. SIFT is one of the most popular methods for image feature extraction and matching. The algorithm turns extrema in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equations. In the future, image feature points from more photographs can be used to build a control-image database. Every new image will be treated as a query image: if the feature points of a query image match features in the database, the query image probably overlaps the control images. As the database is updated, more and more query images can be matched and aligned automatically. Further research on environmental change across time periods can then build on these geo-referenced spatiotemporal data.
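The RANSAC outlier-removal step can be illustrated with a toy version that fits a pure 2D translation to putative matches. A real aerial-image pipeline would fit a similarity or projective model from SIFT correspondences, so treat `ransac_translation` as a hypothetical sketch of the consensus idea only.

```python
import random

def ransac_translation(matches, tol=2.0, iters=200, seed=0):
    """Estimate a 2D translation from putative feature matches with
    RANSAC, discarding outliers.

    matches -- list of ((x1, y1), (x2, y2)) corresponding points
    tol     -- inlier tolerance in pixels
    """
    rng = random.Random(seed)
    best_shift, best_inliers = None, []
    for _ in range(iters):
        # minimal sample for a translation model: one correspondence
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = (dx, dy), inliers
    return best_shift, best_inliers
```

The surviving inliers are the "good conjugate points" that would feed a least-squares adjustment.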

  1. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features

    Science.gov (United States)

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2016-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.

  2. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

Full Text Available Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is a key component of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for automatic facial caricature synthesis from a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and the hair color likelihood distribution function are estimated from these labels efficiently. Second, the energy function of the test image is constructed according to the estimated prior distributions of hair location and the hair color likelihood. This energy function is then optimized using the graph-cuts technique, and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm has been applied in a facial caricature synthesis system. Experiments show that, with the proposed hair segmentation algorithm, the synthesized facial caricatures are vivid and satisfying.

  3. Automatic segmentation of seeds and fluoroscope tracking (FTRAC) fiducial in prostate brachytherapy x-ray images

    Science.gov (United States)

    Kuo, Nathanael; Lee, Junghoon; Deguet, Anton; Song, Danny; Burdette, E. Clif; Prince, Jerry

    2010-02-01

C-arm X-ray fluoroscopy-based radioactive seed localization for intraoperative dosimetry of prostate brachytherapy is an active area of research. The fluoroscopy tracking (FTRAC) fiducial is an image-based tracking device composed of radio-opaque BBs, lines, and ellipses that provides an effective means for pose estimation, so that three-dimensional reconstruction of the implanted seeds from multiple X-ray images can be related to the ultrasound-computed prostate volume. Both the FTRAC features and the brachytherapy seeds must be segmented quickly and accurately during the surgery, but current segmentation algorithms are impractical in the operating room (OR). The first reason is that current algorithms require operators to manually select a region of interest (ROI), preventing automatic pipelining from image acquisition to seed reconstruction. Secondly, these algorithms fail often, requiring operators to manually correct the errors. We propose a fast and effective ROI-free automatic FTRAC and seed segmentation algorithm to minimize such human intervention. The proposed algorithm exploits recent image processing tools to make seed reconstruction as easy and convenient as possible. Preliminary results on 162 patient images show this algorithm to be fast, effective, and accurate for all features to be segmented. With near-perfect success rates and subpixel differences from manual segmentation, our automatic FTRAC and seed segmentation algorithm shows promise for saving crucial time in the OR while reducing errors.

  4. The image processing handbook

    CERN Document Server

    Russ, John C

    2006-01-01

Now in its fifth edition, John C. Russ's monumental image processing reference is an even more complete, modern, and hands-on tool than ever before. The Image Processing Handbook, Fifth Edition is fully updated and expanded to reflect the latest developments in the field. Written by an expert with unequalled experience and authority, it offers clear guidance on how to create, select, and use the most appropriate algorithms for a specific application. What's new in the Fifth Edition? A new chapter on the human visual process explains which visual cues elicit a response from the viewer.

  5. An Automatic Development Process for Integrated Modular Avionics Software

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2013-05-01

Full Text Available With ever-growing avionics functions, the modern avionics architecture is evolving from the traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard that supports the partitioning concept introduced in IMA to achieve isolation between avionics functions of different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic development process based on the Architecture Analysis & Design Language. An automatic model transformation approach from domain-specific models to platform-specific ARINC653 models and a safety-critical ARINC653-compliant code generation technology are presented for this process. A simplified multi-task flight application is given as a case study, with preliminary experimental results, to show the validity of the process.

  6. Automatic tracking of arbitrarily shaped implanted markers in kilovoltage projection images: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie; Zhang, Pengpeng; Pham, Hai; Xiong, Jianping; Yorke, Ellen D.; Mageras, Gig S., E-mail: magerasg@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Goodman, Karyn A.; Rimner, Andreas [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Mostafavi, Hassan [Ginzton Technology Center, Varian Medical Systems, Palo Alto, California 94304 (United States)

    2014-07-15

Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined marker shape when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed with a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment, which can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in an anthropomorphic phantom were carried out, one using a gold cylindrical marker representing a regular shape, the other using a Visicoil marker representing an irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed against known marker positions in the phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. A positional discrepancy between automatic and manual matching of less than 2 mm was taken as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped markers.
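The core of template-based matching is the normalized cross-correlation score. Below is a brute-force pure-Python sketch of NCC search over an image grid; the paper couples NCC with simplex minimization rather than exhaustive search, so `match_template` is a simplified stand-in, not their implementation.

```python
def ncc(patch, tmpl):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    n = len(tmpl) * len(tmpl[0])
    pm = sum(map(sum, patch)) / n
    tm = sum(map(sum, tmpl)) / n
    num = den_p = den_t = 0.0
    for pr, tr in zip(patch, tmpl):
        for p, t in zip(pr, tr):
            num += (p - pm) * (t - tm)
            den_p += (p - pm) ** 2
            den_t += (t - tm) ** 2
    den = (den_p * den_t) ** 0.5
    return num / den if den else 0.0

def match_template(image, tmpl):
    """Slide the marker template over the image and return the top-left
    corner with the maximum NCC score (brute-force search)."""
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            s = ncc([row[c:c + tw] for row in image[r:r + th]], tmpl)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos, best
```

A score of 1.0 indicates an exact (up to brightness and contrast) match between the template and the image patch.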

  7. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.

    Science.gov (United States)

    Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu

    2014-10-01

Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested on 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive (TP) rate of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on an Intel Core 2.66 GHz CPU with 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.

  8. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.

  9. Fast and automatic ultrasound simulation from CT images.

    Science.gov (United States)

    Cong, Weijian; Yang, Jian; Liu, Yue; Wang, Yongtian

    2013-01-01

Ultrasound is currently widely used in clinical diagnosis because of its fast and safe imaging principles. Because the anatomical structures present in an ultrasound image are not as clear as in CT or MRI, physicians usually need advanced clinical knowledge and experience to distinguish diseased tissues. Fast simulation of ultrasound provides a cost-effective way for training and for correlating ultrasound with the anatomic structures. In this paper, a novel method is proposed for fast simulation of ultrasound from a CT image. A multiscale method is developed to enhance tubular structures so as to simulate blood flow. The acoustic response of common tissues is generated by weighted integration of adjacent regions on the ultrasound propagation path in the CT image, from which parameters, including attenuation, reflection, scattering, and noise, are estimated simultaneously. The thin-plate spline interpolation method is employed to transform the simulated image between polar and rectangular coordinate systems. The Kaiser window function is utilized to produce the integration and radial blurring effects of multiple transducer elements. Experimental results show that the developed method is fast and effective, allowing realistic ultrasound images to be generated quickly. Given that the developed method is fully automatic, it can be utilized for ultrasound-guided navigation in clinical practice and for training purposes.

  10. Fast and Automatic Ultrasound Simulation from CT Images

    Directory of Open Access Journals (Sweden)

    Weijian Cong

    2013-01-01

Full Text Available Ultrasound is currently widely used in clinical diagnosis because of its fast and safe imaging principles. Because the anatomical structures present in an ultrasound image are not as clear as in CT or MRI, physicians usually need advanced clinical knowledge and experience to distinguish diseased tissues. Fast simulation of ultrasound provides a cost-effective way for training and for correlating ultrasound with the anatomic structures. In this paper, a novel method is proposed for fast simulation of ultrasound from a CT image. A multiscale method is developed to enhance tubular structures so as to simulate blood flow. The acoustic response of common tissues is generated by weighted integration of adjacent regions on the ultrasound propagation path in the CT image, from which parameters, including attenuation, reflection, scattering, and noise, are estimated simultaneously. The thin-plate spline interpolation method is employed to transform the simulated image between polar and rectangular coordinate systems. The Kaiser window function is utilized to produce the integration and radial blurring effects of multiple transducer elements. Experimental results show that the developed method is fast and effective, allowing realistic ultrasound images to be generated quickly. Given that the developed method is fully automatic, it can be utilized for ultrasound-guided navigation in clinical practice and for training purposes.

  11. Image processing occupancy sensor

    Science.gov (United States)

    Brackney, Larry J.

    2016-09-27

A system and method of detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the number and position of the occupants, the system can finely control these elements to optimize conditions for the occupants and optimize energy usage, among other advantages.

  12. Quantum image processing?

    Science.gov (United States)

    Mastriani, Mario

    2017-01-01

This paper presents a number of problems concerning the practical (real) implementation of the techniques known as quantum image processing. The most serious problem is the recovery of the outcomes after the quantum measurement, which, as this work demonstrates, is equivalent to a noise measurement and is not considered in the literature on the subject. This is due to several factors: (1) a classical algorithm that uses Dirac's notation and is then coded in MATLAB does not constitute a quantum algorithm; (2) the literature emphasizes the internal representation of the image but says nothing about the classical-to-quantum and quantum-to-classical interfaces and how these are affected by decoherence; (3) the literature does not mention how to implement these proposed internal representations in a practical way (in the laboratory); (4) given that quantum image processing works with generic qubits, it requires measurements along all axes of the Bloch sphere; and (5) others. In return, the technique known as quantum Boolean image processing is mentioned, which works exclusively with computational basis states (CBS). This methodology avoids the problem of quantum measurement, which alters the measured results except in the case of CBS. What has been said so far extends to quantum algorithms outside image processing as well.

  13. Quadrant Dynamic with Automatic Plateau Limit Histogram Equalization for Image Enhancement

    Directory of Open Access Journals (Sweden)

    P. Jagatheeswari

    2014-01-01

Full Text Available The fundamental and important preprocessing stage in image processing is the image contrast enhancement technique. Histogram equalization is an effective contrast enhancement technique. In this paper, a histogram equalization based technique called quadrant dynamic with automatic plateau limit histogram equalization (QDAPLHE) is introduced. In this method, a hybrid of dynamic and clipped histogram equalization methods is used to increase brightness preservation and to reduce over-enhancement. Initially, the proposed QDAPLHE algorithm passes the input image through a median filter to remove noise present in the image. Then the histogram of the filtered image is divided into four sub-histograms while maintaining the second separation point at the mean brightness. The clipping process is then implemented by automatically calculating the plateau limit as the clipping level. The clipped portion of the histogram is modified to reduce the loss of image intensity values. Finally, the clipped portion is redistributed uniformly over the entire dynamic range and conventional histogram equalization is executed in each sub-histogram independently. Based on qualitative and quantitative analysis, the QDAPLHE method outperforms some existing methods in the literature.
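The clipping idea can be sketched for a single histogram: cap every bin at an automatically chosen plateau level before building the equalization mapping, which tames over-enhancement of dominant intensities. This simplified sketch uses the mean height of occupied bins as the plateau and skips QDAPLHE's four-way sub-histogram split; `clipped_hist_eq` is an illustrative name.

```python
def clipped_hist_eq(pixels, levels=256):
    """Histogram equalization with an automatic plateau limit.

    pixels -- flat list of integer intensities in [0, levels)
    Returns the remapped intensity list.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    occupied = [h for h in hist if h > 0]
    plateau = sum(occupied) / len(occupied)      # automatic clip level
    clipped = [min(h, plateau) for h in hist]    # cap dominant bins
    total = sum(clipped)
    cdf, run = [], 0.0
    for h in clipped:
        run += h
        cdf.append(run / total)
    # map each pixel through the CDF of the clipped histogram
    return [round(cdf[p] * (levels - 1)) for p in pixels]
```

Without clipping, a single dominant bin would grab most of the output range; capping it spreads the mapping more evenly.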

  14. The Automatic and Controlled Processing of Temporal and Spatial Patterns.

    Science.gov (United States)

    1980-02-01

Atkinson and Juola, 1973; Shiffrin and Geisler, 1973; and Corballis, 1975; Posner and Snyder, 1975). Schneider and Shiffrin (1977; Shiffrin and Schneider...Besides the frame size, Schneider and Shiffrin (1977) also varied the memory set size to study the differential load requirements of CM and VM...theoretical level, Shiffrin and Schneider (1977) described an automatic process as a sequence of memory nodes that nearly always become active in

  15. Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images

    Science.gov (United States)

    Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos

    2015-03-01

Detection of larynx cancer from medical imaging is important for quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time consuming and subjective. The large diversity of cancer in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using spatial neighborhood information of pixels and incorporates this information in a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information on the expected cancer location for labeling cancer regions. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78+/-0.04 and an average root mean square error of 1.82+/-0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by producing quantifiable, non-subjective, repeatable detection results in a time-efficient and unbiased fashion.
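The clustering step can be illustrated with plain fuzzy c-means on intensities: each pixel receives a graded membership in every cluster rather than a hard label. This is a generic 1D FCM sketch; the paper additionally folds spatial-neighborhood edge information into the clustering, which is not reproduced here, and the function name is an assumption.

```python
def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50):
    """One-dimensional fuzzy c-means on pixel intensities.

    values -- list of intensities; c -- number of clusters;
    m -- fuzzifier (>1). Returns (cluster centers, memberships).
    """
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    for _ in range(iters):
        # membership update: u_ik proportional to 1 / d_ik^(2/(m-1))
        u = []
        for v in values:
            d = [abs(v - ck) or 1e-12 for ck in centers]
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for k in range(c)])
        # center update: membership-weighted mean with weights u^m
        centers = [sum(u[i][k] ** m * values[i] for i in range(len(values)))
                   / sum(u[i][k] ** m for i in range(len(values)))
                   for k in range(c)]
    return centers, u
```

Soft memberships make the separation of tissue types more robust near ambiguous boundaries than a hard k-means assignment.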

  16. Image processing of 2D crystal images.

    Science.gov (United States)

    Arheit, Marcel; Castaño-Díez, Daniel; Thierry, Raphaël; Gipson, Bryant R; Zeng, Xiangyan; Stahlberg, Henning

    2013-01-01

    Electron crystallography of membrane proteins uses cryo-transmission electron microscopy to image frozen-hydrated 2D crystals. The processing of recorded images exploits the periodic arrangement of the structures in the images to extract the amplitudes and phases of diffraction spots in Fourier space. However, image imperfections require a crystal unbending procedure to be applied to the image before evaluation in Fourier space. We here describe the process of 2D crystal image unbending, using the 2dx software system.

  17. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

Full Text Available The ability of satellite sensors to detect lit fishing boats has been known since the 1970s. However, use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes to discard ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst-selected boat detections found that the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information on fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
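The spike-detection and height-rating steps can be sketched on a single image row: a candidate is a pixel brighter than both neighbors, its height is measured against the local background, and the height decides a strong/weak rating. This is a toy analogue of the VIIRS boat-detection idea, not the operational algorithm; thresholds and labels here are invented for illustration.

```python
def detect_spikes(line, height_thresh=5.0):
    """Flag isolated spikes in one image row.

    line -- radiance values along a scan line
    Returns (index, spike height, rating) tuples.
    """
    hits = []
    for i in range(1, len(line) - 1):
        # local background from the two flanking pixels
        background = (line[i - 1] + line[i + 1]) / 2.0
        height = line[i] - background
        if line[i] > line[i - 1] and line[i] > line[i + 1] \
                and height >= height_thresh:
            rating = "strong" if height >= 2 * height_thresh else "weak"
            hits.append((i, height, rating))
    return hits
```

A real implementation would work in two dimensions and add the sharpness index plus land and gas-flare masks described above.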

  18. Altering automatic verbal processes with transcranial direct current stimulation

    Directory of Open Access Journals (Sweden)

    Tracy D Vannorsdall

    2012-08-01

Full Text Available Background: Word retrieval during verbal fluency tasks utilizes both automatic and controlled cognitive processes. A distinction has been made between the generation of clusters and switches on verbal fluency tasks. Clusters, or the reporting of contiguous words within semantic or phonemic subcategories, are thought to reflect relatively automatic processes. In contrast, switching from one subcategory to another is thought to represent a more controlled, effortful form of cognitive processing. Objective: In this single-blind experiment, we investigated whether tDCS can modify qualitative aspects of verbal fluency, such as clustering and switching, in healthy adults. Methods: Participants were randomly assigned to receive 1 mA of either anodal/excitatory or cathodal/inhibitory active tDCS over the left prefrontal cortex, in addition to sham stimulation. In the last segment of each 30-minute session, participants completed letter- and category-cued fluency tasks. Results: Anodal tDCS increased both overall productivity and the number and proportion of words in clusters during category-guided verbal fluency, whereas cathodal stimulation produced the opposite effect. Conclusions: tDCS can selectively alter automatic aspects of speeded lexical retrieval in a polarity-dependent fashion during a category-guided fluency task.

  19. Automatic analysis of image of surface structure of cell wall-deficient EVC.

    Science.gov (United States)

    Li, S; Hu, K; Cai, N; Su, W; Xiong, H; Lou, Z; Lin, T; Hu, Y

    2001-01-01

Some computer applications for cell characterization in medicine and biology, such as analysis of the surface structure of cell wall-deficient EVC (El Tor Vibrio of Cholera), operate with cell samples taken from very small areas of interest. To perform texture characterization in such an application, only a few texture operators can be employed: the operators should be insensitive to noise and image distortion and should reliably estimate texture quality from images. We therefore apply wavelet theory and mathematical morphology to analyse cellular surface micro-area images obtained by SEM (Scanning Electron Microscope). To describe the quality of the surface structure of cell wall-deficient EVC, we propose a fully automatic computerized method. The image analysis process is carried out in two steps. In the first, we decompose the given image by a dyadic wavelet transform and form an image approximation at higher resolution, thereby performing edge detection efficiently. In the second, we apply mathematical-morphology operations to obtain quantitative morphological parameters of the surface structure of cell wall-deficient EVC. The results show that the method can eliminate noise, detect edges, and extract feature parameters reliably. In this work, we have built automatic analysis software named "EVC.CELL".
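The wavelet-decomposition step can be illustrated with one level of a 2D Haar transform, whose detail bands respond strongly at edges. This is a minimal stand-in for the dyadic wavelet transform used in the paper, assuming even image dimensions; band names follow the usual LL/LH/HL/HH convention.

```python
def haar2d(img):
    """One level of a 2D Haar wavelet transform.

    img -- 2D list with even height and width.
    Returns (LL, LH, HL, HH): coarse approximation plus detail bands
    whose large coefficients mark edges.
    """
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            a = img[2 * r][2 * c]       # top-left of the 2x2 block
            b = img[2 * r][2 * c + 1]   # top-right
            d = img[2 * r + 1][2 * c]   # bottom-left
            e = img[2 * r + 1][2 * c + 1]
            LL[r][c] = (a + b + d + e) / 4.0
            LH[r][c] = (a - b + d - e) / 4.0   # vertical-edge response
            HL[r][c] = (a + b - d - e) / 4.0   # horizontal-edge response
            HH[r][c] = (a - b - d + e) / 4.0   # diagonal response
    return LL, LH, HL, HH
```

Thresholding the detail bands yields an edge map, to which morphological operations (erosion, dilation, opening) can then be applied for quantitative measurements.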

  20. Automatic and controlled processing in the corticocerebellar system.

    Science.gov (United States)

    Ramnani, Narender

    2014-01-01

    During learning, performance changes often involve a transition from controlled processing in which performance is flexible and responsive to ongoing error feedback, but effortful and slow, to a state in which processing becomes swift and automatic. In this state, performance is unencumbered by the requirement to process feedback, but its insensitivity to feedback reduces its flexibility. Many properties of automatic processing are similar to those that one would expect of forward models, and many have suggested that these may be instantiated in cerebellar circuitry. Since hierarchically organized frontal lobe areas can both send and receive commands, I discuss the possibility that they can act both as controllers and controlled objects and that their behaviors can be independently modeled by forward models in cerebellar circuits. Since areas of the prefrontal cortex contribute to this hierarchically organized system and send outputs to the cerebellar cortex, I suggest that the cerebellum is likely to contribute to the automation of cognitive skills, and to the formation of habitual behavior which is resistant to error feedback. An important prerequisite to these ideas is that cerebellar circuitry should have access to higher order error feedback that signals the success or failure of cognitive processing. I have discussed the pathways through which such feedback could arrive via the inferior olive and the dopamine system. Cerebellar outputs inhibit both the inferior olive and the dopamine system. It is possible that learned representations in the cerebellum use this as a mechanism to suppress the processing of feedback in other parts of the nervous system. Thus, cerebellar processes that control automatic performance may be completed without triggering the engagement of controlled processes by prefrontal mechanisms.

  1. Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images

    Science.gov (United States)

    Knight, David; Powell, Mark

    2013-01-01

    Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).

  3. Tomographic brain imaging with nucleolar detail and automatic cell counting

    Science.gov (United States)

    Hieber, Simone E.; Bikis, Christos; Khimchenko, Anna; Schweighauser, Gabriel; Hench, Jürgen; Chicherova, Natalia; Schulz, Georg; Müller, Bert

    2016-09-01

    Brain tissue evaluation is essential for gaining in-depth insight into brain diseases and disorders. Imaging the human brain in three dimensions at the cell level has always been a challenge. In vivo methods lack spatial resolution, and optical microscopy has a limited penetration depth. Herein, we show that hard X-ray phase tomography can visualise a volume of up to 43 mm3 of human post mortem or biopsy brain samples, demonstrating the method on the cerebellum. We automatically identified 5,000 Purkinje cells within their layer with an error of less than 5% and determined the local surface density to be 165 cells per mm2 on average. Moreover, we highlight that three-dimensional data allow for the segmentation of sub-cellular structures, including the dendritic tree and Purkinje cell nucleoli, without dedicated staining. The method suggests that automatic cell feature quantification of human tissues is feasible in phase tomograms obtained with isotropic resolution in a label-free manner.

  4. Image Processing for Teaching.

    Science.gov (United States)

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  5. Image-Processing Program

    Science.gov (United States)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within these subroutines are sub-subroutines, also selected via the keyboard. The algorithms have possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  6. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    Science.gov (United States)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input and performs registration over scale and in-plane rotation fully automatically.

  7. Sorting Olive Batches for the Milling Process Using Image Processing

    Directory of Open Access Journals (Sweden)

    Daniel Aguilera Puerto

    2015-07-01

    Full Text Available The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial for reaching the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically the different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different varieties have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing steps have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results.
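
    The processing chain described (histogram-based feature vectors fed to a classifier) can be sketched as follows. This is a minimal Python/NumPy stand-in, not the authors' implementation: a normalized grey-level histogram serves as the feature vector, and a nearest-centroid rule acts as a much-simplified discriminant, assuming images scaled to [0, 1].

```python
import numpy as np

def hist_features(img, bins=16):
    """Normalized grey-level histogram as a feature vector (a simplified
    stand-in for the paper's histogram-based features)."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

class NearestCentroid:
    """Toy discriminant: assign each sample to the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

    With synthetic "ground" (darker) and "tree" (brighter) patches, the histogram features separate the two classes cleanly.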

  8. Automatic diagnosis of retinal diseases from color retinal images

    CERN Document Server

    Jayanthi, D; SwarnaParvathi, S

    2010-01-01

    Teleophthalmology holds great potential to improve the quality, access, and affordability of health care. For patients, it can reduce the need for travel and provide access to a superspecialist. Ophthalmology lends itself easily to telemedicine as it is a largely image-based diagnosis. The main goal of the proposed system is to diagnose the type of disease in the retina and to automatically detect and segment retinal diseases without human supervision or interaction. The proposed system diagnoses the disease present in the retina using a neural network-based classifier. The extent of the disease spread in the retina can be identified by extracting the textural features of the retina. The system diagnoses the following types of disease: diabetic retinopathy and drusen.

  9. Automatic segmentation of trophectoderm in microscopic images of human blastocysts.

    Science.gov (United States)

    Singh, Amarjot; Au, Jason; Saeedi, Parvaneh; Havelock, Jon

    2015-01-01

    Accurate assessment of embryo viability is an extremely important task in optimizing the outcome of in vitro fertilization treatment. One of the common ways of assessing the quality of a human embryo is grading it on its fifth day of development based on the morphological quality of its three main components (trophectoderm, inner cell mass, and the level of expansion or the thickness of its zona pellucida). In this study, we propose a fully automatic method for segmentation and measurement of the trophectoderm (TE) region of blastocysts (day-5 human embryos). We eliminate the inhomogeneities of the blastocyst surface using Retinex theory and then apply a level-set algorithm to segment the TE regions. We have tested our method on a dataset of 85 images and achieved a segmentation accuracy of 84.6% for grade A, 89.0% for grade B, and 91.7% for grade C embryos.
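
    The Retinex step can be illustrated compactly. The Python/SciPy sketch below is a single-scale Retinex, one common reading of "Retinex theory"; the paper's exact variant and the subsequent level-set segmentation are not reproduced here. The idea is to divide out a smooth illumination estimate in the log domain so that surface inhomogeneities are suppressed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=5.0, eps=1e-6):
    """Single-scale Retinex: subtract the log of a Gaussian-blurred
    illumination estimate from the log image (sigma is an assumption)."""
    img = img.astype(float) + eps
    return np.log(img) - np.log(gaussian_filter(img, sigma) + eps)
```

    On a flat reflectance under a smooth illumination gradient, the output is close to zero everywhere: the gradient has been divided out.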

  10. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  11. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    Science.gov (United States)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to the near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of the underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, owing to the computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate, specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes, and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched…

  12. Image processing techniques for acoustic images

    Science.gov (United States)

    Murphy, Brian P.

    1991-06-01

    The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering, and median filtering.
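
    Median filtering, one of the two methods tested, is easy to demonstrate. The Python/SciPy sketch below is illustrative only (the data are synthetic, not the study's sonar model): a 3x3 median removes a single speckle of the impulsive kind that plagues acoustic imagery while leaving the background untouched.

```python
import numpy as np
from scipy.ndimage import median_filter

# A 9x9 'acoustic' background with one bright speckle.
img = np.full((9, 9), 0.5)
img[4, 4] = 1.0

# The 3x3 median replaces the impulse with the local majority value.
clean = median_filter(img, size=3)
```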

  13. Retinomorphic image processing.

    Science.gov (United States)

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some aspects of visual signal processing at the retinal level, while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies of retinal physiology revealed the nature of contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed the LOG filter or one of its variants. Recent observations in retinal physiology, however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. Corroborated by these recent observations, we have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, introducing higher-order derivatives of Gaussian expressed as linear combinations of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals at the pre-processing level, to arrive at better information on light and shade in the edge map. The proposed model also explains many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models and by non-isotropic multi-scale models. The proposed model is easy to implement in both the analog and digital domains. A scheme for implementation in the analog domain generates a new silicon retina…
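
    The classical centre-surround DOG filter mentioned above can be sketched in a few lines of Python/SciPy. This illustrates the classical operator only, not the proposed higher-order non-classical model; the sigmas are assumptions. Edges appear as zero-crossings of the response.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians: a narrow 'centre' Gaussian minus a wider
    'surround' Gaussian, the classical retinal receptive-field model."""
    img = img.astype(float)
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
```

    On a vertical step edge the response is positive on the bright side and negative on the dark side, so the edge sits at the zero-crossing.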

  14. Automatic Scheme for Fused Medical Image Segmentation with Nonsubsampled Contourlet Transform

    OpenAIRE

    Ch.Hima Bindu; Dr. K. Satya Prasad

    2012-01-01

    Medical image segmentation has become an essential technique in clinical and research-oriented applications. Because manual segmentation methods are tedious and semi-automatic segmentation lacks flexibility, fully automatic methods have become the preferred type of medical image segmentation. This work proposes a robust, fully automatic segmentation scheme based on a modified contouring technique. The entire scheme consists of three stages. In the first stage, the Nonsubsampled Contour...

  15. Automatic Semiconductor Wafer Image Segmentation for Defect Detection Using Multilevel Thresholding

    Directory of Open Access Journals (Sweden)

    Saad N.H.

    2016-01-01

    Full Text Available Quality control is an important process in semiconductor manufacturing. Many issues that the semiconductor manufacturing industry is trying to solve concern the rate of production with respect to time. In most semiconductor assembly lines, many wafers from the various processes of wafer manufacturing need to be inspected manually by human experts, a procedure that requires the operators' full concentration. This human inspection procedure, however, is time-consuming and highly subjective. In order to overcome this problem, the implementation of machine vision is the best solution. This paper presents automatic defect segmentation of semiconductor wafer images based on a multilevel thresholding algorithm, which can be further adopted in a machine vision system. In this work, the defect image, initially in RGB, is first converted to grayscale. Median filtering is then applied to enhance the grayscale image. Next, the modified multilevel thresholding algorithm is performed on the enhanced image. The algorithm works in three main stages: determining the peak locations of the histogram, segmenting the histogram between the peaks, and determining the first global minimum of the histogram, which corresponds to the threshold value of the image. The proposed approach was evaluated using defective wafer images. The experimental results show that it segments defects correctly and outperforms other thresholding techniques such as Otsu and iterative thresholding.
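
    The three thresholding stages described (peak location, segmentation between peaks, first global minimum) can be sketched in Python/NumPy. This is a loose illustration under stated assumptions, not the paper's algorithm: peaks are taken as local maxima of a lightly smoothed histogram, and the two dominant peaks are required to be at least `min_sep` bins apart.

```python
import numpy as np

def valley_threshold(img, bins=64, min_sep=8):
    """Find the two dominant histogram peaks (at least min_sep bins apart),
    then take the first global minimum between them as the threshold."""
    hist, edges = np.histogram(img, bins=bins)
    h = np.convolve(hist, np.ones(3) / 3.0, mode="same")   # light smoothing
    peaks = [i for i in range(1, bins - 1)
             if h[i] >= h[i - 1] and h[i] >= h[i + 1] and h[i] > 0]
    order = sorted(peaks, key=lambda i: h[i], reverse=True)
    p1 = order[0]
    p2 = next(i for i in order if abs(i - p1) >= min_sep)
    lo, hi = min(p1, p2), max(p1, p2)
    valley = lo + int(np.argmin(h[lo:hi + 1]))              # first global minimum
    return 0.5 * (edges[valley] + edges[valley + 1])
```

    For a clearly bimodal image the returned threshold falls in the valley between the two modes.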

  16. Fully-automatic laser welding and micro-sculpting with universal in situ inline coherent imaging

    CERN Document Server

    Webster, Paul J L; Ji, Yang; Galbraith, Christopher M; Kinross, Alison W; Van Vlack, Cole; Fraser, James M

    2014-01-01

    Though new affordable high power laser technologies make possible many processing applications in science and industry, depth control remains a serious technical challenge. Here we show that inline coherent imaging, with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range and robustness to interference from other optical sources to achieve fully automatic, adaptive control of laser welding as well as ablation, achieving micron-scale sculpting in vastly different heterogeneous biological materials.

  17. Automatic laser welding and milling with in situ inline coherent imaging.

    Science.gov (United States)

    Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M

    2014-11-01

    Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.

  18. AUTOMATIC URBAN ILLEGAL BUILDING DETECTION USING MULTI-TEMPORAL SATELLITE IMAGES AND GEOSPATIAL INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    N. Khalili Moghadam

    2015-12-01

    Full Text Available With the unprecedented growth of urban population and urban development, we are faced with a growing trend of illegal building (IB) construction. Field visits, the currently used method of IB detection, are time-consuming, labour-intensive and costly. Therefore, automatic IB detection is required. Acquiring multi-temporal satellite images and using image processing techniques for automatic change detection is one of the best methods for IB monitoring. In this research, an automatic method of IB detection is proposed. Bi-temporal panchromatic IRS-P5 satellite images of the study area in a part of Tehran, the city map and an updated spatial database of existing buildings were used to detect suspected IBs. In the pre-processing step, the images were geometrically and radiometrically corrected. In the next step, the changed pixels were detected using the K-means clustering technique, chosen for its speed and the limited user intervention it requires. Then, all the changed pixels of each building were identified, and the change percentage of each building was compared with a standard change threshold to detect buildings under construction. Finally, the IBs were detected by checking the municipality database: constructed buildings that do not match the municipal database are field-checked to identify IBs. The results show that, of the 343 buildings appearing in the images, only 19 were detected as under construction and three of them as unlicensed buildings. Furthermore, overall accuracies of 83%, 79% and 75% were obtained for K-means change detection, detection of buildings under construction and IB detection, respectively.
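
    The K-means change-detection step can be sketched compactly. The Python/NumPy toy below is illustrative only; the paper's feature space and cluster count are not specified beyond "K-means". It clusters the absolute difference image into two classes and labels the cluster with the larger centre as "changed".

```python
import numpy as np

def kmeans_change_mask(img_t1, img_t2, iters=20):
    """Two-class 1D K-means on the absolute difference image."""
    d = np.abs(img_t2.astype(float) - img_t1.astype(float)).ravel()
    c = np.array([d.min(), d.max()], dtype=float)       # initial centres
    for _ in range(iters):
        lab = np.abs(d[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = d[lab == k].mean()
    changed = lab == int(np.argmax(c))                  # larger centre = changed
    return changed.reshape(img_t1.shape)
```

    For a synthetic pair of images differing only in one patch, the mask recovers exactly that patch.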

  19. FULLY AUTOMATIC IMAGE-BASED REGISTRATION OF UNORGANIZED TLS DATA

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-09-01

    Full Text Available The estimation of the transformation parameters between different point clouds is still a crucial task as it is usually followed by scene reconstruction, object detection or object recognition. Therefore, the estimates should be as accurate as possible. Recent developments show that it is feasible to utilize both the measured range information and the reflectance information sampled as image, as 2D imagery provides additional information. In this paper, an image-based registration approach for TLS data is presented which consists of two major steps. In the first step, the order of the scans is calculated by checking the similarity of the respective reflectance images via the total number of SIFT correspondences between them. Subsequently, in the second step, for each SIFT correspondence the respective SIFT features are filtered with respect to their reliability concerning the range information and projected to 3D space. Combining the 3D points with 2D observations on a virtual plane yields 3D-to-2D correspondences from which the coarse transformation parameters can be estimated via a RANSAC-based registration scheme including the EPnP algorithm. After this coarse registration, the 3D points are again checked for consistency by using constraints based on the 3D distance, and, finally, the remaining 3D points are used for an ICP-based fine registration. Thus, the proposed methodology provides a fast, reliable, accurate and fully automatic image-based approach for the registration of unorganized point clouds without the need of a priori information about the order of the scans, the presence of regular surfaces or human interaction.
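
    The RANSAC-based estimation of rigid transformation parameters at the core of the coarse registration can be illustrated in 2D. The Python/NumPy sketch below is a generic RANSAC around a minimal Kabsch/Procrustes solver, not the paper's EPnP-based 3D-to-2D scheme; the function names and tolerance are hypothetical.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rotation + translation (2D Kabsch/Procrustes)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    """Fit a rigid transform from noisy correspondences, rejecting outliers."""
    best, best_inl = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)   # minimal sample
        R, t = rigid_from_pairs(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inl = int((err < tol).sum())
        if inl > best_inl:
            best_inl, best = inl, (R, t)
    return best, best_inl
```

    With a handful of grossly wrong correspondences mixed in, the consensus set still recovers the true transform.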

  20. Scheduling algorithms for automatic control systems for technological processes

    Science.gov (United States)

    Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.

    2017-01-01

    The wide use of automatic process control systems, and of high-performance systems containing a number of computers (processors), creates opportunities for high-quality, fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation and the processing of big data arrays all require a high level of productivity and, at the same time, minimal time for data handling and receiving results. In order to achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. This paper considers some basic task scheduling methods for multi-machine process control systems, highlights their advantages and disadvantages, and offers some considerations for their use when developing software for automatic process control systems.
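
    As a concrete example of the task scheduling techniques discussed, the classic Longest-Processing-Time-first (LPT) list-scheduling heuristic for identical machines can be sketched as follows. This is one textbook method, not necessarily among those the paper considers.

```python
import heapq

def lpt_schedule(durations, machines):
    """LPT list scheduling: sort jobs by decreasing duration and always
    assign the next job to the currently least-loaded machine."""
    heap = [(0.0, m) for m in range(machines)]     # (load, machine id)
    heapq.heapify(heap)
    assignment = {m: [] for m in range(machines)}
    for job, dur in sorted(enumerate(durations), key=lambda jd: -jd[1]):
        load, m = heapq.heappop(heap)              # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(heap, (load + dur, m))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

    For durations [3, 3, 2, 2] on two machines, LPT balances the load perfectly, giving a makespan of 5.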

  1. Network patterns recognition for automatic dermatologic images classification

    Science.gov (United States)

    Grana, Costantino; Daniele, Vanini; Pellacani, Giovanni; Seidenari, Stefania; Cucchiara, Rita

    2007-03-01

    In this paper we focus on the problem of automatic classification of melanocytic lesions, aiming to identify the presence of reticular patterns. The recognition of reticular lesions is an important step in the description of the pigmented network, in order to obtain meaningful diagnostic information. Parameters such as color, size or symmetry could benefit from the knowledge of whether a lesion is reticular or non-reticular. The detection of network patterns is performed with a three-step procedure. The first step is the localization of line points, by means of the line-point detection algorithm first described by Steger. The second step is the linking of such points into lines, considering the direction of the line at its endpoints and the number of line points connected to them. Finally, a third step discards the meshes that could not be closed at the end of the linking procedure, as well as those characterized by anomalous values of area or circularity. The number of valid meshes left, and their area with respect to the whole area of the lesion, are the inputs of a discriminant function which classifies the lesions into reticular and non-reticular. The approach was tested on balanced training and testing sets, each formed by 50 reticular and 50 non-reticular images. We obtained over 86% correct classification of the reticular and non-reticular lesions on real skin images, with a specificity value never lower than 92%.

  2. Automatic wound infection interpretation for postoperative wound image

    Science.gov (United States)

    Hsu, Jui-Tse; Ho, Te-Wei; Shih, Hsueh-Fu; Chang, Chun-Che; Lai, Feipei; Wu, Jin-Ming

    2017-02-01

    With the growing demand for more efficient wound care after surgery, there is a need to develop a machine learning-based image analysis approach to reduce the burden on health care professionals. The aim of this study was to propose a novel approach to recognizing wound infection at the postsurgical site. First, we proposed an optimal clustering method based on the unimodal Rosin threshold algorithm to group feature points extracted from a potential wound area into clusters forming regions of interest (ROIs). Each ROI was regarded as a suture site of the wound area. Automatic infection interpretation based on a support vector machine is then available to assist physicians with decision-making in clinical practice. According to clinical physicians' judgment criteria and the international guidelines for wound infection interpretation, we defined the following infection detector modules: (1) Swelling Detector, (2) Blood Region Detector, (3) Infected Detector, and (4) Tissue Necrosis Detector. To validate the capability of the proposed system, a retrospective study was conducted to verify the classification models, using as the gold standard the confirmation wound pictures used for diagnosis by surgical physicians. Through cross-validation on 42 wound images, our classifiers achieved 95.23% accuracy, 93.33% sensitivity, 100% specificity, and 100% positive predictive value. We believe this capability could help medical practitioners with decision-making in clinical practice.

  3. An Automatic Eye Detection Method for Gray Intensity Facial Images

    Directory of Open Access Journals (Sweden)

    M Hassaballah

    2011-07-01

    Full Text Available Eyes are the most salient and stable features of the human face, and hence the automatic extraction or detection of eyes is often considered the most important step in many applications, such as face identification and recognition. This paper presents a method for eye detection in still grayscale images. The method is based on two observations: eye regions exhibit unpredictable local intensity, so entropy in eye regions is high; and the center of the eye (the iris) is a dark circle (low intensity) compared to the neighboring regions. A score based on the entropy of the eye region and the darkness of the iris is used to detect eye-center coordinates. Experimental results on two databases, namely FERET (with variations in view) and BioID (with variations in gaze direction and uncontrolled conditions), show that the proposed method is robust to gaze direction, variations in view and a variety of illumination conditions. It achieves correct detection rates of 97.8% and 94.3% on sets containing 2500 images from the FERET and BioID databases, respectively. Moreover, in cases with glasses and severe conditions, the performance is still acceptable.
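
    The entropy-plus-darkness score can be sketched directly. The Python/NumPy toy below is a loose reading of the criterion; the window size, bin count and the way the two terms are combined are assumptions, not the paper's values.

```python
import numpy as np

def patch_entropy(patch, bins=8):
    """Shannon entropy of a patch's grey-level histogram (values in [0, 1])."""
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def eye_score(img, win=3):
    """Score = local entropy of a (2*win+1)^2 window plus darkness of the
    centre pixel; eye centres should score high on both terms."""
    H, W = img.shape
    score = np.zeros_like(img, dtype=float)
    for y in range(win, H - win):
        for x in range(win, W - win):
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            score[y, x] = patch_entropy(patch) + (1.0 - img[y, x])
    return score
```

    On a bright uniform "face" with one dark, textured "eye" region, the score map peaks inside that region.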

  4. Automatic Registration of Terrestrial Laser Scanning Point Clouds using Panoramic Reflectance Images.

    Science.gov (United States)

    Kang, Zhizhong; Li, Jonathan; Zhang, Liqiang; Zhao, Qile; Zlatanova, Sisi

    2009-01-01

    This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds using panoramic reflectance images. The approach follows a two-step procedure that includes both pair-wise registration and global registration. The pair-wise registration consists of image matching (pixel-to-pixel correspondence) and point cloud registration (point-to-point correspondence), as the correspondence between the image and the point cloud (pixel-to-point) is inherent to the reflectance images. False correspondences are removed by a geometric invariance check. The pixel-to-point correspondence and the computation of the rigid transformation parameters (RTPs) are integrated into an iterative process that allows for the pair-wise registration to be optimised. The global registration of all point clouds is obtained by a bundle adjustment using a circular self-closure constraint. Our approach is tested with both indoor and outdoor scenes acquired by a FARO LS 880 laser scanner with an angular resolution of 0.036° and 0.045°, respectively. The results show that the pair-wise and global registration accuracies are of millimetre and centimetre orders, respectively, and that the process is fully automatic and converges quickly.

  5. Automatic Rotation Recovery Algorithm for Accurate Digital Image and Video Watermarks Extraction

    Directory of Open Access Journals (Sweden)

    Nasr addin Ahmed Salem Al-maweri

    2016-11-01

    Full Text Available Research in digital watermarking has evolved rapidly in the current decade. This evolution has brought various methods and algorithms for watermarking digital images and videos. Methods introduced in the field vary from weak to robust according to how well they preserve the watermark in the presence of attacks. Rotation attacks applied to the watermarked media are among the serious attacks that many, if not most, algorithms cannot survive. In this paper, a new automatic rotation recovery algorithm is proposed. This algorithm can be plugged into the extraction component of any image or video watermarking algorithm. Its main job is to detect the geometric distortion applied to the watermarked image or image sequence, recover the distorted scene to its original state in a blind and automatic way, and then pass it on to the extraction procedure. For now, the work is limited to recovering zero-padded rotations; images cropped after rotation are left as future work. The proposed algorithm was tested on top of an extraction component; both the recovery accuracy and the accuracy of the extracted watermarks showed a high level of performance.

  6. Design of a distributed CORBA based image processing server.

    Science.gov (United States)

    Giess, C; Evers, H; Heid, V; Meinzer, H P

    2000-01-01

    This paper presents the design and implementation of a distributed image processing server based on CORBA. Existing image processing tools were encapsulated in a common way within this server. Data exchange and conversion are done automatically inside the server, hiding these tasks from the user. The different image processing tools appear as one large collection of algorithms and, owing to the use of CORBA, are accessible via intranet/Internet.

  7. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been evolving rapidly over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration, using 3-D image overlay and stereo tracking, for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror, using image registration and IP-camera registration, to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax, from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

  8. Automatic Image Registration Using Free and Open Source Software

    OpenAIRE

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; M. V. R. Sesha Sai; P.G. Diwakar; Dadhwal, V. K.

    2014-01-01

    Image registration is the most critical operation in remote sensing applications, enabling location-based referencing and analysis of earth features. It is the first step in any process involving identification, time series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time-consuming and laborious manual methods of finding the corresponding matching features of the input image with respect to the reference. Also the ...

  9. IMAGE ENHANCEMENT USING IMAGE FUSION AND IMAGE PROCESSING TECHNIQUES

    OpenAIRE

    Arjun Nelikanti

    2015-01-01

    The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. The appropriate choice of such techniques is greatly influenced by the imaging modality, the task at hand and the viewing conditions. This paper will provide a combination of two concepts: image fusion by DWT and digital image processing techniques. The e...

  10. Cognitive effort and pupil dilation in controlled and automatic processes

    Science.gov (United States)

    Querino, Emanuel; dos Santos, Lafaiete; Ginani, Giuliano; Nicolau, Eduardo; Miranda, Débora; Romano-Silva, Marco; Malloy-Diniz, Leandro

    2015-01-01

    The Five Digits Test (FDT) is a Stroop-paradigm test that aims to evaluate executive functions. It is composed of four parts, two of which are related to automatic and two to controlled processes. It is known that pupillary diameter increases as a task's cognitive demand increases. In the present study, we evaluated whether pupillary diameter could distinguish the cognitive effort of automated versus controlled cognitive processing during the FDT as the task progressed. As a control task, we used a simple reading paradigm with a visual aspect similar to the FDT. We then divided each of the four parts into two blocks in order to evaluate the differences between the first and second half of the task. Results indicated that, compared to the control task, the FDT required higher cognitive effort for each consecutive part. Moreover, the first half of every part of the FDT induced more pupil dilation than the second. The differences in pupil dilation during the first half of the four FDT parts were statistically significant between parts 2 and 4 (p=0.023) and between parts 3 and 4 (p=0.006). These results provide further evidence that cognitive effort and pupil diameter can distinguish controlled from automatic processes.

  11. Cognitive effort and pupil dilation in controlled and automatic processes.

    Science.gov (United States)

    Querino, Emanuel; Dos Santos, Lafaiete; Ginani, Giuliano; Nicolau, Eduardo; Miranda, Débora; Romano-Silva, Marco; Malloy-Diniz, Leandro

    2015-01-01

    The Five Digits Test (FDT) is a Stroop-paradigm test that aims to evaluate executive functions. It is composed of four parts, two of which are related to automatic and two to controlled processes. It is known that pupillary diameter increases as a task's cognitive demand increases. In the present study, we evaluated whether pupillary diameter could distinguish the cognitive effort of automated versus controlled cognitive processing during the FDT as the task progressed. As a control task, we used a simple reading paradigm with a visual aspect similar to the FDT. We then divided each of the four parts into two blocks in order to evaluate the differences between the first and second half of the task. Results indicated that, compared to the control task, the FDT required higher cognitive effort for each consecutive part. Moreover, the first half of every part of the FDT induced more pupil dilation than the second. The differences in pupil dilation during the first half of the four FDT parts were statistically significant between parts 2 and 4 (p=0.023) and between parts 3 and 4 (p=0.006). These results provide further evidence that cognitive effort and pupil diameter can distinguish controlled from automatic processes.

  12. Automatic perceptual simulation of first language meanings during second language sentence processing in bilinguals.

    Science.gov (United States)

    Vukovic, Nikola; Williams, John N

    2014-01-01

    Research supports the claim that, when understanding language, people perform mental simulation using those parts of the brain which support sensation, action, and emotion. A major criticism of the findings quoted as evidence for embodied simulation, however, is that they could be a result of conscious image generation strategies. Here we exploit the well-known fact that bilinguals routinely and automatically activate both their languages during comprehension to test whether this automatic process is, in turn, modulated by embodied simulatory processes. Dutch participants heard English sentences containing interlingual homophones and implying specific distance relations, and had to subsequently respond to pictures of objects matching or mismatching this implied distance. Participants were significantly slower to reject critical items when their perceptual features matched said distance relationship. These results suggest that bilinguals not only activate task-irrelevant meanings of interlingual homophones, but also automatically simulate these meanings in a detailed perceptual fashion. Our study supports the claim that embodied simulation is not due to participants' conscious strategies, but is an automatic component of meaning construction. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. CIE L*a*b*: comparison of digital images obtained photographically by manual and automatic modes

    Directory of Open Access Journals (Sweden)

    Fabiana Takatsui

    2012-12-01

    Full Text Available The aim of this study was to analyze, with the CIE L*a*b* system, the color alterations in digital images of shade guide tabs obtained photographically in the automatic and manual modes. This study also sought to examine the observers' agreement in quantifying the coordinates. Four Vita Lumin Vacuum shade guide tabs were used: A3.5, B1, B3 and C4. A Canon EOS digital camera was used to record the digital images of the shade tabs, and the images were processed using Adobe Photoshop software. A total of 80 observations (five replicates of each shade by two observers in two modes, automatic and manual) were obtained, yielding color values of L*, a* and b*. The color difference (ΔE) between the modes was calculated and classified as either clinically acceptable or unacceptable. The results indicated that there was agreement between the two observers in obtaining the L*, a* and b* values for all guides. The B1, B3 and C4 shade tabs had ΔE values classified as clinically acceptable (ΔE = 0.44, ΔE = 2.04 and ΔE = 2.69, respectively). The A3.5 shade tab had a ΔE value classified as clinically unacceptable (ΔE = 4.17), as it presented higher luminosity in the automatic mode (L* = 54.0) than in the manual mode (L* = 50.6). It was concluded that the B1, B3 and C4 shade tabs can be photographed in either digital camera mode (manual or automatic), unlike the A3.5 shade tab.
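    The ΔE values quoted above are CIE76 colour differences, i.e. Euclidean distances between two L*a*b* triples. A minimal sketch (the a* and b* values below are hypothetical; only the L* values echo the abstract):

    ```python
    import math

    def delta_e_cie76(lab1, lab2):
        """CIE76 colour difference: Euclidean distance in CIE L*a*b* space."""
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

    # Hypothetical coordinates for one tab photographed in the two camera
    # modes; only the L* values (54.0 vs 50.6) come from the abstract.
    auto_mode = (54.0, 1.2, 14.0)
    manual_mode = (50.6, 1.0, 13.5)
    de = delta_e_cie76(auto_mode, manual_mode)
    ```

    Whether a given ΔE counts as "clinically acceptable" depends on the threshold adopted by the study, which is not restated here.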

  14. Semi-Automatic Image Labelling Using Depth Information

    Directory of Open Access Journals (Sweden)

    Mostafa Pordel

    2015-05-01

    Full Text Available Image labeling tools help extract objects within images to be used as ground truth for learning and testing in object detection processes. The inputs for such tools are usually RGB images. However, with widely available low-cost sensors like the Microsoft Kinect, it is possible to use depth images in addition to RGB images. Despite the many existing powerful tools for image labeling, there is a need for tools adapted to RGB-depth data. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-depth using Fuzzy C-Means clustering, connected component labeling and superpixels, and generates bounding pixels to extract the desired objects. Second, it minimizes the interaction time needed for object extraction by performing an efficient segmentation in RGB-depth space. Very few clicks are needed for the entire procedure compared to existing tools. When the desired object is the closest object to the camera, which is often the case in robotics applications, no clicks at all are required to accurately extract the object.
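    The Fuzzy C-Means step named above can be sketched generically: each pixel becomes a feature vector (e.g. [R, G, B, depth]) and soft memberships are iterated against cluster centres. This is a plain FCM sketch, not the tool's implementation:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
        """Plain Fuzzy C-Means on feature vectors X of shape (N, d).

        A generic sketch of the clustering step the paper names; the tool's
        actual RGB-depth features and parameters are not specified here.
        """
        rng = np.random.default_rng(seed)
        U = rng.random((c, X.shape[0]))
        U /= U.sum(axis=0)                       # memberships sum to 1 per point
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
            d = np.maximum(d, 1e-12)             # avoid division by zero
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=0)            # standard FCM membership update
        return centers, U
    ```

    A hard segmentation then falls out of `U.argmax(axis=0)`, after which connected-component labeling groups pixels into candidate objects.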

  15. Sensitometric comparison of E and F dental radiographic films using manual and automatic processing systems

    Directory of Open Access Journals (Sweden)

    Dabaghi A.

    2008-04-01

    Full Text Available Background and Aim: Processing conditions affect the sensitometric properties of X-ray films. In this study, we aimed to evaluate the sensitometric characteristics of InSight (IP), a new F-speed film, in fresh and used processing solutions under dental office conditions, and to compare them with Ektaspeed Plus (EP). Materials and Methods: In this experimental in vitro study, an aluminium step wedge was used to construct characteristic curves for InSight and Ektaspeed Plus films (Kodak Eastman, Rochester, USA). All films were processed in Champion solution (X-ray Iran, Tehran, Iran), both manually and automatically, over a period of six days. Unexposed films of both types were processed manually and automatically to determine base-plus-fog density. Speed and film contrast were measured according to the ISO definition. Data were analyzed using one-way ANOVA and t-tests with P<0.05 as the level of significance. Results: IP was 20 to 22% faster than EP and behaved as an F-speed film when processed automatically and as an E-F speed film when processed manually. It was also F-speed in fresh solution and E-speed in old solution. IP and EP contrasts were similar in automatic processing, but EP contrast was higher when processed manually. Both EP and IP films had standard base-plus-fog values (<0.35), and B+F densities decreased in old solution. Conclusion: Based on the results of this study, InSight is an F-speed film with a speed at least 20% greater than Ektaspeed. In addition, it reduces patient exposure with no damage to image quality.

  16. Using Dual-Task Methodology to Dissociate Automatic from Nonautomatic Processes Involved in Artificial Grammar Learning

    Science.gov (United States)

    Hendricks, Michelle A.; Conway, Christopher M.; Kellogg, Ronald T.

    2013-01-01

    Previous studies have suggested that both automatic and intentional processes contribute to the learning of grammar and fragment knowledge in artificial grammar learning (AGL) tasks. To explore the relative contribution of automatic and intentional processes to knowledge gained in AGL, we utilized dual-task methodology to dissociate automatic and…

  17. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
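    A minimal taste of the library's API, using thresholding and connected-component labelling on a synthetic image (the image itself is ours; the functions are scikit-image's):

    ```python
    import numpy as np
    from skimage import filters, measure

    # Synthetic image: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0

    thresh = filters.threshold_otsu(img)   # global Otsu threshold
    mask = img > thresh                    # boolean foreground mask
    labels = measure.label(mask)           # connected-component labelling
    n_objects = labels.max()               # number of detected components
    ```

    The same two-line threshold-and-label pattern underlies many of the real-world applications the paper showcases.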

  18. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  19. Tracer-specific PET and SPECT templates for automatic co-registration of functional rat brain images

    NARCIS (Netherlands)

    Vállez Garcia, David; Schwarz, Adam J; Dierckx, Rudi; Koole, Michel; Doorduin, Janine

    2014-01-01

    Objectives: Template-based spatial co-registration of PET and SPECT data is an important first step in their semi-automatic processing, facilitating VOI- and voxel-based analysis. Although this procedure is standard in humans, using corresponding MRI images, these systems are often not accessible for

  20. Curvelet based automatic segmentation of supraspinatus tendon from ultrasound image: a focused assistive diagnostic method.

    Science.gov (United States)

    Gupta, Rishu; Elamvazuthi, Irraivan; Dass, Sarat Chandra; Faye, Ibrahima; Vasant, Pandian; George, John; Izza, Faizatul

    2014-12-04

    Disorders of the rotator cuff tendons result in acute pain limiting the normal range of motion of the shoulder. Of all the tendons in the rotator cuff, the supraspinatus (SSP) tendon is the first to be affected by any pathological changes. Diagnosis of the SSP tendon using ultrasound is considered operator dependent, with its accuracy related to the operator's level of experience. Automatic segmentation of SSP tendon ultrasound images was performed to provide a focused and more accurate diagnosis. Image processing techniques were employed for the automatic segmentation of the SSP tendon, combining the curvelet transform with logical and morphological operators and area filtering. The segmentation was assessed using the true positive rate, the false positive rate and the segmentation accuracy. The specificity and sensitivity of the algorithm were tested for the diagnosis of partial thickness tears (PTTs) and full thickness tears (FTTs). The ultrasound images of the SSP tendon were obtained at a medical center with the help of experienced radiologists. The algorithm was tested on 116 images taken from 51 different patients. The accuracy of segmentation of the SSP tendon was calculated to be 95.61% in accordance with the segmentation performed by radiologists, with a true positive rate of 91.37% and a false positive rate of 8.62%. The specificity and sensitivity were found to be 93.6% and 94% for partial thickness tears, and 95% and 95.6% for full thickness tears, respectively. The proposed methodology was successfully tested over a database of more than 116 US images, for which radiologist assessment and validation were performed. The segmentation of the SSP tendon from ultrasound images helps in focused, accurate and more reliable diagnosis, which has been verified with the help of two experienced radiologists. The specificity and sensitivity for accurate detection of partial and full thickness tears were considerably increased after segmentation when
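    The sensitivity, specificity and accuracy figures above all derive from a 2×2 confusion table; for reference, a minimal helper (the counts in the usage example are illustrative, not the study's data):

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Sensitivity, specificity and accuracy from confusion-table counts:
        true/false positives (tp, fp) and true/false negatives (tn, fn)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        return sensitivity, specificity, accuracy

    # Illustrative counts only.
    sens, spec, acc = diagnostic_metrics(tp=45, fp=10, tn=40, fn=5)
    ```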

  1. Image Processing and its Military Applications

    Directory of Open Access Journals (Sweden)

    V. V.D. Shah

    1987-10-01

    Full Text Available One of the important breakthroughs in image processing is the stand-alone, non-human image understanding system (IUS). The task of understanding images becomes monumental as one tries to define what understanding really is. Both pattern recognition and artificial intelligence are used in addition to traditional signal processing. Scene analysis procedures using edge and texture segmentation can be considered the early stages of the image understanding process; symbolic representation and relationship grammars come at subsequent stages. It is thus not practical to keep a man in the loop of signal processing for certain sensors such as remotely piloted vehicles, satellites and spacecraft. Consequently, smart sensors and semi-automatic processes are being developed. Land remote sensing has been another important application of image processing. With the introduction of programmes like Star Wars, this particular application has gained special importance from the military's point of view. This paper provides an overview of digital image processing and explores the scope of remote sensing technology and IUSs from the military's point of view. An example of the autonomous vehicle project now under way in the US is described in detail to illustrate the impact of IUSs.

  2. Evaluating the effectiveness of treatment of corneal ulcers via computer-based automatic image analysis

    Science.gov (United States)

    Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana

    2012-06-01

    Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced and proven very effective. Unfortunately, the monitoring of the treatment procedure remains manual, and hence time consuming and prone to human error. In this research we propose an automatic image-analysis-based approach to measure the size of an ulcer, and its subsequent further investigation to determine the effectiveness of the treatment followed. In ophthalmology an ulcer area is detected for further inspection via luminous excitation of a dye. In the imaging systems usually utilised for this purpose (i.e. a slit lamp with an appropriate dye), the ulcer area is excited to appear luminous green, whereas the rest of the cornea appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. First, a pre-processing stage carries out a local histogram equalisation to bring back detail in over- or under-exposed areas. Second, we remove potential reflections from the affected areas by registering two candidate corneal images based on the detected corneal areas. Third, the exact corneal boundary is detected by initially fitting an ellipse to the candidate corneal boundary found via edge detection, and subsequently allowing the user to modify the boundary to overlap with the boundary of the ulcer being observed. Although this step makes the approach semi-automatic, it removes the impact of breaks in the corneal boundary due to occlusion, noise or image quality degradation. The ratio of the ulcer area confined within the corneal area to the corneal area itself is used as a measure of comparison. We demonstrate the use of the proposed tool in analysing the effectiveness of a treatment procedure adopted for corneal ulcers in patients, by comparing the variation of ulcer size over time.
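    The ulcer-to-cornea area ratio described above reduces to a hue test in HSV space; a small sketch in which the hue window for "fluorescein green" is an illustrative assumption, not the paper's calibration:

    ```python
    import colorsys

    def ulcer_area_ratio(pixels, cornea_mask, h_range=(0.20, 0.45)):
        """Ratio of fluorescein-green pixels to all corneal pixels.

        pixels: iterable of (r, g, b) floats in [0, 1]; cornea_mask: parallel
        iterable of bools marking pixels inside the detected corneal boundary.
        The hue range for 'green' is an assumed placeholder.
        """
        corneal = ulcer = 0
        for (r, g, b), inside in zip(pixels, cornea_mask):
            if not inside:
                continue
            corneal += 1
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            if h_range[0] <= h <= h_range[1]:   # hue falls in the green band
                ulcer += 1
        return ulcer / corneal if corneal else 0.0
    ```

    Tracking this ratio across visits gives the treatment-effectiveness curve the paper describes.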

  3. Fully automatic algorithm for the analysis of vessels in the angiographic image of the eye fundus

    Directory of Open Access Journals (Sweden)

    Koprowski Robert

    2012-06-01

    Full Text Available Abstract Background: The available scientific literature contains descriptions of manual, semi-automated and automated methods for analysing angiographic images. The presented algorithms segment vessels, calculating their tortuosity or number in a given area. We describe a statistical analysis of the inclination of the vessels in the fundus as related to their distance from the center of the optic disc. Methods: The paper presents an automated method for analysing vessels found in angiographic images of the eye, using an algorithm implemented in Matlab. It performs filtration and convolution operations with the suggested masks. The result is an image containing information on the location of vessels and their inclination angle in relation to the center of the optic disc. This is a new approach to the analysis of vessels whose usefulness has been confirmed in the diagnosis of hypertension. Results: The proposed algorithm analyzed and processed the images of the eye fundus using a classifier in the form of decision trees. It enabled the proper classification of healthy patients and those with hypertension. The result is a very good separation of healthy subjects from hypertensive ones: sensitivity 83%, specificity 100%, accuracy 96%. This confirms the practical usefulness of the proposed method. Conclusions: This paper presents an algorithm for the automatic analysis of morphological parameters of the fundus vessels. Such an analysis is performed during fluorescein angiography of the eye. The presented algorithm automatically calculates global statistical features connected with both the tortuosity of vessels and their total area or number.
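    One plausible reading of the "inclination angle in relation to the center of the optic disc" is the angle between a local vessel segment and the radial direction from the disc centre; a hypothetical helper under that assumption (not the paper's definition):

    ```python
    import math

    def vessel_inclination_deg(p0, p1, disc_center):
        """Angle (degrees, folded to [0, 90]) between the vessel segment
        p0 -> p1 and the radial direction from the optic-disc centre.
        An illustrative interpretation, not the paper's exact measure."""
        vx, vy = p1[0] - p0[0], p1[1] - p0[1]            # vessel direction
        rx, ry = p0[0] - disc_center[0], p0[1] - disc_center[1]  # radial dir
        dot = vx * rx + vy * ry
        cross = vx * ry - vy * rx
        ang = abs(math.degrees(math.atan2(cross, dot)))
        return min(ang, 180.0 - ang)   # orientation, not direction
    ```

    A purely radial segment scores 0 degrees; a tangential one scores 90.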

  4. Fully automatic algorithm for the analysis of vessels in the angiographic image of the eye fundus.

    Science.gov (United States)

    Koprowski, Robert; Teper, Sławomir Jan; Węglarz, Beata; Wylęgała, Edward; Krejca, Michał; Wróbel, Zygmunt

    2012-06-22

    The available scientific literature contains descriptions of manual, semi-automated and automated methods for analysing angiographic images. The presented algorithms segment vessels, calculating their tortuosity or number in a given area. We describe a statistical analysis of the inclination of the vessels in the fundus as related to their distance from the center of the optic disc. The paper presents an automated method for analysing vessels found in angiographic images of the eye, using an algorithm implemented in Matlab. It performs filtration and convolution operations with the suggested masks. The result is an image containing information on the location of vessels and their inclination angle in relation to the center of the optic disc. This is a new approach to the analysis of vessels whose usefulness has been confirmed in the diagnosis of hypertension. The proposed algorithm analyzed and processed the images of the eye fundus using a classifier in the form of decision trees. It enabled the proper classification of healthy patients and those with hypertension. The result is a very good separation of healthy subjects from hypertensive ones: sensitivity 83%, specificity 100%, accuracy 96%. This confirms the practical usefulness of the proposed method. This paper presents an algorithm for the automatic analysis of morphological parameters of the fundus vessels. Such an analysis is performed during fluorescein angiography of the eye. The presented algorithm automatically calculates global statistical features connected with both the tortuosity of vessels and their total area or number.

  5. Automatic performance tuning of parallel and accelerated seismic imaging kernels

    KAUST Repository

    Haberdar, Hakan

    2014-01-01

    With the increased complexity and diversity of mainstream high performance computing systems, significant effort is required to tune parallel applications in order to achieve the best possible performance on each particular platform. This task is becoming more and more challenging and requires a larger set of skills. Automatic performance tuning is becoming a must for optimizing applications such as Reverse Time Migration (RTM), widely used in seismic imaging for oil and gas exploration. An empirical-search-based auto-tuning approach is applied to the MPI communication operations of the parallel isotropic and tilted transverse isotropic kernels. The application of auto-tuning using the Abstract Data and Communication Library improved the performance of the MPI communications as well as developer productivity by providing a higher level of abstraction. Keeping productivity in mind, we opted for pragma-based programming for accelerated computation on the latest accelerated architectures such as GPUs, using the fairly new OpenACC standard. The same auto-tuning approach is also applied to the OpenACC-accelerated seismic code to optimize the compute-intensive kernel of the Reverse Time Migration application. The application of this technique resulted in improved performance of the original code and in its ability to adapt to different execution environments.
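    Empirical-search auto-tuning, at its core, times the kernel under each candidate configuration and keeps the fastest; a toy sketch of the search strategy (the actual library tunes MPI and OpenACC parameters, not a Python callable):

    ```python
    import itertools
    import time

    def autotune(run, param_grid):
        """Exhaustive empirical search: time `run` under every combination of
        parameters in `param_grid` (dict of name -> list of values) and return
        the fastest configuration. A toy sketch of the approach described."""
        keys = list(param_grid)
        best, best_t = None, float("inf")
        for values in itertools.product(*param_grid.values()):
            params = dict(zip(keys, values))
            t0 = time.perf_counter()
            run(**params)                      # execute the kernel once
            elapsed = time.perf_counter() - t0
            if elapsed < best_t:
                best, best_t = params, elapsed
        return best, best_t
    ```

    Real auto-tuners prune this search and average repeated runs; the exhaustive single-shot loop is the simplest correct baseline.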

  6. Automatic registration of terrestrial point cloud using panoramic reflectance images

    NARCIS (Netherlands)

    Kang, Z.

    2008-01-01

    Much attention is paid to registration of terrestrial point clouds nowadays. Research is carried out towards improved efficiency and automation of the registration process. This paper reports a new approach for point clouds registration utilizing reflectance panoramic images. The approach follows a

  7. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    Science.gov (United States)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The best-performing combinations (local binary patterns + SVMs, wavelet transformation + SVMs) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, some methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in assessing their own condition. In addition, this work should help to accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.
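    Local binary patterns, one of the better-performing feature extractors mentioned above, give each pixel a code built from sign comparisons with its eight neighbours; a bare-bones sketch without the usual uniform-pattern mapping or histogramming:

    ```python
    def lbp_codes(img):
        """Basic 8-neighbour local binary pattern codes for interior pixels.

        img: 2-D list of intensities. Each interior pixel gets an 8-bit code,
        one bit per neighbour that is >= the centre value. A minimal sketch,
        not the study's exact LBP variant.
        """
        h, w = len(img), len(img[0])
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = []
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                c, code = img[y][x], 0
                for bit, (dy, dx) in enumerate(offs):
                    if img[y + dy][x + dx] >= c:
                        code |= 1 << bit
                codes.append(code)
        return codes
    ```

    In the pipelines compared above, a histogram of such codes would then be fed to an SVM classifier.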

  8. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Full Text Available Fast and accurate determination of the effective bentonite content in used clay-bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed by used clay-bonded sand, which is usually a manual operation with several disadvantages, including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay-bonded sand was developed based on image recognition technology. The instrument consists of auto-stirring, auto liquid removal, auto-titration, step-rotation and image acquisition components, and a processor. The principle of the image recognition method is first to decompose the color images into three single-channel gray images, exploiting the different sensitivities of the red, green and blue channels to light blue and dark blue; then to perform gray-value subtraction and gray-level transformation on the gray images; and finally to extract the light blue halo of the outer circle and the blue spot of the inner circle and calculate their area ratio. The titration process is judged to have reached its end-point when the area ratio exceeds the set value.
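    The endpoint test reduces to an area ratio between two thresholded regions; a hypothetical sketch of that final step, in which the threshold values and the blue-minus-red channel arithmetic are assumptions rather than the analyzer's calibration:

    ```python
    import numpy as np

    def halo_to_spot_area_ratio(rgb, halo_min=80, spot_min=160):
        """Area ratio of the light-blue outer halo to the dark-blue inner spot.

        rgb: (H, W, 3) uint8 image of the titration drop. Classifying pixels
        by their blue-minus-red excess is an illustrative assumption.
        """
        r = rgb[..., 0].astype(int)
        b = rgb[..., 2].astype(int)
        excess = b - r                          # how much bluer than red
        spot = excess >= spot_min               # strong blue -> inner spot
        halo = (excess >= halo_min) & ~spot     # weaker blue -> outer halo
        return halo.sum() / max(spot.sum(), 1)
    ```

    The analyzer would declare the titration end-point once this ratio rises above its calibrated set value.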

  9. DIGITAL IMAGES PROCESSING IN RADIOGRAPHY

    OpenAIRE

    Pilař, Martin

    2010-01-01

    This thesis is focused primarily on digital image processing and on the algorithms of modern imaging modalities. An algorithm is a method, or set of instructions, for solving a problem. In image processing, an algorithm describes the process from data acquisition to the resulting image displayed on the monitor. Therefore, the first part of the thesis gives a brief overview of the principles of the imaging modalities used in radiodiagnostics. The collected data have to be analyzed and modelled in a certain way. The...

  10. Automatic and effortful processing of self-statements in depression.

    Science.gov (United States)

    Wang, Catharina E; Brennen, Tim; Holte, Arne

    2006-01-01

    Clark and Beck (1999) and Williams et al. (1997) have come up with quite different conclusions regarding which cognitive processes are most affected by negative self-schemata and negative knowledge structures. In order to increase the understanding of differences in effortful and automatic processing in depression, we compared never depressed (ND), previously depressed (PD) and clinically depressed (CD) individuals on free recall, recognition and fabrication of positive and negative self-statements. The results showed that: (i) overall NDs and PDs recalled more positive self-statements than CDs, whereas CDs correctly recognized more negative self-statements than NDs and PDs; and (ii) CDs and PDs fabricated more negative than positive self-statements, whereas no difference was obtained for NDs. The results seem to be in line with Clark and Beck's suggestions. However, there are several aspects of the present findings that make the picture more complicated.

  11. A method for automatic liver segmentation from multi-phase contrast-enhanced CT images

    Science.gov (United States)

    Yuan, Rong; Luo, Ming; Wang, Shaofa; Wang, Luyao; Xie, Qingguo

    2014-03-01

    Liver segmentation is a basic and indispensable function in computer-aided liver surgery systems, used for volume calculation, operation planning and risk evaluation. Traditional manual segmentation is very time-consuming because of the complicated contours of the liver and the large number of images. To increase the efficiency of clinical work, this paper proposes a fully automatic method to segment the liver from multi-phase contrast-enhanced computed tomography (CT) images. Based on an advanced region-growing method, various pre- and post-processing steps are applied to obtain better segmentations from the different phases. Fifteen sets of clinical abdominal CT images from five patients were segmented by our algorithm, and the results were acceptable as evaluated by an experienced surgeon. The running time is about 30 seconds for a single-phase dataset of more than 200 slices.

  12. Automatic target classification of man-made objects in synthetic aperture radar images using Gabor wavelet and neural network

    Science.gov (United States)

    Vasuki, Perumal; Roomi, S. Mohamed Mansoor

    2013-01-01

    Processing of synthetic aperture radar (SAR) images has led to the development of automatic target classification approaches, which help to classify individual and grouped military ground vehicles. This work aims to develop an automatic target classification technique for military targets such as trucks, tanks, armored cars, cannons and bulldozers. The proposed method consists of three stages: preprocessing, feature extraction, and neural network (NN) classification. The first stage removes speckle noise from the SAR image with a Frost filter and enhances the image by histogram equalization. The second stage uses a Gabor wavelet to extract image features. The third stage classifies the target with an NN classifier using the extracted features. On databases such as moving and stationary target acquisition and recognition (MSTAR), the proposed method performs better than counterparts such as the K-nearest neighbor (KNN) classifier.

  13. An Algorithmic Framework for Automatic Detection and Tracking of Moving Point Targets in IR Image Sequences

    Directory of Open Access Journals (Sweden)

    R. Anand Raji

    2015-05-01

    Full Text Available Imaging sensors operating in the infrared (IR) region of the electromagnetic spectrum are gaining importance in airborne automatic target recognition (ATR) applications due to their passive nature of operation. IR imaging sensors exploit the unintended IR radiation emitted by targets of interest for detection. ATR systems based on passive IR imaging sensors employ a set of signal processing algorithms for processing the image information in real time. Real-time execution of these algorithms gives the platform carrying the ATR system sufficient reaction time to respond to the target of interest. The algorithms include detection, tracking, and classification of low-contrast, small-sized targets. This paper explains a signal processing framework developed to detect and track moving point targets in acquired IR image sequences in real time. Defence Science Journal, Vol. 65, No. 3, May 2015, pp. 208-213, DOI: http://dx.doi.org/10.14429/dsj.65.8164

  14. Automatic Detection of Inactive Solar Cell Cracks in Electroluminescence Images

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Hacke, Peter; Sera, Dezso

    2017-01-01

    We propose an algorithm for automatic determination of the electroluminescence (EL) signal threshold level corresponding to inactive solar cell cracks, resulting from their disconnection from the electrical circuit of the cell. The method enables automatic quantification of the cell crack size...

  15. Smart Image Enhancement Process

    Science.gov (United States)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used first to classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated with it. When the second enhanced image also has a poor contrast/lightness score, it is enhanced to generate a third enhanced image. A sharpness measure is then computed for one image selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated with it, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
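
    The decision cascade above can be sketched as plain control flow, with the measures and enhancement operators left as caller-supplied callables (all names here are illustrative, not from the patent):

```python
def smart_enhance(image, measures, enhance, sharpen):
    """Decision cascade sketch of the patented process.

    measures: object exposing is_turbid(img), score(img) -> 'good'|'poor',
              and is_sharp(img), supplied by the caller.
    enhance, sharpen: caller-supplied enhancement operators.
    """
    if measures.is_turbid(image):
        selected = enhance(image)                       # first enhanced image
    elif measures.score(image) == 'poor':
        second = enhance(image)                         # second enhanced image
        # a still-poor score triggers a third enhancement pass
        selected = enhance(second) if measures.score(second) == 'poor' else second
    else:
        selected = image                                # non-turbid, good score
    # final sharpness gate: sharpen only if the selected image is not sharp
    return selected if measures.is_sharp(selected) else sharpen(selected)
```

    The cascade is purely sequential, so each image is enhanced at most three times before the single sharpness check.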

  16. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area; this paper gives an overview of its two main modules, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformation, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration on such a difficult target.

  17. Automatic segmentation of lumbar vertebrae in CT images

    Science.gov (United States)

    Kulkarni, Amruta; Raina, Akshita; Sharifi Sarabi, Mona; Ahn, Christine S.; Babayan, Diana; Gaonkar, Bilwaj; Macyszyn, Luke; Raghavendra, Cauligi

    2017-03-01

    Lower back pain is one of the most prevalent disorders in the developed and developing world. However, its etiology is poorly understood and treatment is often determined subjectively. In order to quantitatively study the emergence and evolution of back pain, it is necessary to develop consistently measurable markers for pathology, and imaging-based measures offer one solution to this problem. The development of imaging-based quantitative biomarkers for the lower back necessitates automated techniques to acquire these data. While the problem of segmenting lumbar vertebrae has been addressed repeatedly in the literature, the associated problem of computing relevant biomarkers on the basis of the segmentation has not been addressed thoroughly. In this paper, we propose a Random-Forest-based approach that learns to segment vertebral bodies in CT images, followed by a biomarker evaluation framework that extracts vertebral heights and widths from the segmentations obtained. Our dataset consists of 15 sagittal CT scans obtained from General Electric Healthcare. Our approach is divided into three parts: the first stage is image pre-processing, which corrects for variations in illumination across the images and prepares the foreground and background objects; the next stage is machine learning with Random Forests, which classifies interest-point feature vectors as foreground or background; and the last step is image post-processing, which is crucial for refining the results of the classifier. The Dice coefficient was used as a statistical validation metric to evaluate the performance of our segmentations, with an average value of 0.725 for our dataset.
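
    The Dice coefficient used above as the validation metric is straightforward to implement for binary masks:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity between a predicted and a ground-truth binary mask:
    2|A intersect B| / (|A| + |B|), in [0, 1]."""
    seg = np.asarray(seg).astype(bool)
    gt = np.asarray(gt).astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, gt).sum() / denom
```

    A value of 0.725, as reported for this dataset, means the overlap region is roughly 72.5% of the average size of the two masks.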

  18. An automatic fractional coefficient setting method of FODPSO for hyperspectral image segmentation

    Science.gov (United States)

    Xie, Weiying; Li, Yunsong

    2015-05-01

    In this paper, an automatic fractional coefficient setting method for fractional-order Darwinian particle swarm optimization (FODPSO) is proposed for hyperspectral image segmentation. First, the spectrum is taken into consideration by integrating various types of band selection algorithms: we give a short overview of the hyperspectral image and select an appropriate set of bands by combining supervised, semi-supervised and unsupervised band selection algorithms. Some approaches are not limited in their spectral dimension but are limited in their spatial dimension owing to low spatial resolution; the addition of spatial information therefore focuses on improving the performance of hyperspectral image segmentation for later fusion or classification. Many researchers have argued that a large fractional coefficient is appropriate in the exploration state and a small fractional coefficient in the exploitation state, which does not mean the coefficient should simply decrease with time. For these reasons, we propose an adaptive FODPSO that sets the fractional coefficient adaptively for the final hyperspectral image segmentation. The paper introduces an evolutionary factor that automatically controls the fractional coefficient through a sigmoid function: a large fractional coefficient benefits the global search in the exploration state, while a small fractional coefficient indicates the exploitation state. Hence, the optimization process avoids getting trapped in local optima. The experimental segmentation results prove the validity and efficiency of our proposed automatic fractional coefficient setting method compared with traditional PSO, DPSO and FODPSO.
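
    The sigmoid mapping from an evolutionary factor to the fractional coefficient can be sketched as below; the bounds and steepness are illustrative choices, not the paper's tuned values:

```python
import math

def fractional_coefficient(evolutionary_factor, a_min=0.1, a_max=0.9, k=10.0):
    """Map an evolutionary factor f in [0, 1] to a fractional coefficient.

    A large f (population spread out -> exploration) yields a large
    coefficient for global search; a small f (population converged ->
    exploitation) yields a small one. a_min, a_max and the sigmoid
    steepness k are illustrative, not the paper's values.
    """
    s = 1.0 / (1.0 + math.exp(-k * (evolutionary_factor - 0.5)))
    return a_min + (a_max - a_min) * s
```

    The evolutionary factor itself would be computed each iteration from the swarm's state (e.g., from particle distances to the global best), and the resulting coefficient is then used in the fractional-order velocity update.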

  19. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Full Text Available Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves difficult, especially for remote sensing images with large background variations (e.g., images taken before and after an earthquake or flood). Traditional registration methods based on local intensity may not maintain steady performance, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments of the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), proposed by Akinlar et al. in 2011, is used to extract line segments from the two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and the transformation parameters between the reference and sensed images can be computed. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is
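
    The idea of describing each segment by the relative locations and orientations of the remaining segments can be illustrated with a toy descriptor; the 2D binning below is a simplification, not the paper's exact histogram strategy:

```python
import numpy as np

def segment_descriptor(i, mids, angles, n_bear=4, n_ori=4):
    """Toy descriptor for line segment i: a normalized 2D histogram over the
    other segments' bearings (direction of their midpoint seen from segment i)
    and orientations, both measured relative to segment i's own orientation,
    so the descriptor is invariant to a global rotation of the image."""
    hist = np.zeros((n_bear, n_ori))
    for j in range(len(angles)):
        if j == i:
            continue
        v = mids[j] - mids[i]
        bearing = (np.arctan2(v[1], v[0]) - angles[i]) % (2 * np.pi)
        rel_ori = (angles[j] - angles[i]) % np.pi   # undirected lines
        bb = min(int(bearing / (2 * np.pi) * n_bear), n_bear - 1)
        ob = min(int(rel_ori / np.pi * n_ori), n_ori - 1)
        hist[bb, ob] += 1
    total = hist.sum()
    return hist.ravel() / total if total else hist.ravel()
```

    Matching then reduces to nearest-neighbour search between the descriptor vectors of the reference and sensed images, followed by the spatial consistency check described above.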

  20. A complete software application for automatic registration of x-ray mammography and magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Solves-Llorens, J. A.; Rupérez, M. J., E-mail: mjruperez@labhuman.i3bh.es; Monserrat, C. [LabHuman, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia (Spain); Feliu, E.; García, M. [Hospital Clínica Benidorm, Avda. Alfonso Puchades, 8, 03501 Benidorm (Alicante) (Spain); Lloret, M. [Hospital Universitari y Politècnic La Fe, Bulevar Sur, 46026 Valencia (Spain)

    2014-08-15

    Purpose: This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammogram views, craniocaudal (CC) and mediolateral oblique (MLO). This new approach allows radiologists to mark points in the MR images and, without any manual intervention, provides their corresponding points in both types of XR mammograms, and vice versa. Methods: The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that distinguishes the specific biomechanical behavior of each of its three tissues (skin, fat, and glandular tissue) separately. It projects both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. Results: The application has been validated by two expert radiologists through a quantitative validation on 14 data sets, in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application was measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. Conclusions: A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in one imaging modality based on its position in another modality with a clinically acceptable error. The results show that the

  1. Automatic anatomy recognition in whole-body PET/CT images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Huiqian [College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China and Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey; Tong, Yubing; Torigian, Drew A. [Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Zhao, Liming [Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 and Research Center of Intelligent System and Robotics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China)

    2016-01-15

    Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. Body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organ-wise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work on three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties; and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process

  2. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  3. Software workflow for the automatic tagging of medieval manuscript images (SWATI)

    Science.gov (United States)

    Chandna, Swati; Tonne, Danah; Jejkal, Thomas; Stotzka, Rainer; Krause, Celia; Vanscheidt, Philipp; Busch, Hannah; Prabhune, Ajinkya

    2015-01-01

    Digital methods, tools and algorithms are gaining in importance for the analysis of digitized manuscript collections in the arts and humanities. One example is the BMBF-funded research project "eCodicology", which aims to design, evaluate and optimize algorithms for the automatic identification of macro- and micro-structural layout features of medieval manuscripts. The main goal of this research project is to provide better insights into high-dimensional datasets of medieval manuscripts for humanities scholars. The heterogeneous nature and size of the humanities data and the need to create a database of automatically extracted reproducible features for better statistical and visual analysis are the main challenges in designing a workflow for the arts and humanities. This paper presents a concept of a workflow for the automatic tagging of medieval manuscripts. As a starting point, the workflow uses medieval manuscripts digitized within the scope of the project "Virtual Scriptorium St. Matthias". Firstly, these digitized manuscripts are ingested into a data repository. Secondly, specific algorithms are adapted or designed for the identification of macro- and micro-structural layout elements like page size, writing space, number of lines etc. And lastly, a statistical analysis and scientific evaluation of the manuscript groups are performed. The workflow is designed generically to process large amounts of data automatically with any desired algorithm for feature extraction. As a result, a database of objectified and reproducible features is created which helps to analyze and visualize hidden relationships of around 170,000 pages. The workflow shows the potential of automatic image analysis by enabling the processing of a single page in less than a minute. Furthermore, the accuracy tests of the workflow on a small set of manuscripts with respect to features like page size and text areas show that automatic and manual analysis are comparable. The usage of a computer

  4. Automatic CAD of meniscal tears on MR imaging: a morphology-based approach

    Science.gov (United States)

    Ramakrishna, Bharath; Liu, Weimin; Safdar, Nabile; Siddiqui, Khan; Kim, Woojin; Juluru, Krishna; Chang, Chein-I.; Siegel, Eliot

    2007-03-01

    Knee-related injuries, including meniscal tears, are common in young athletes and require accurate diagnosis and appropriate surgical intervention. Although with proper technique and skill confidence in the detection of meniscal tears should be high, this task continues to be a challenge for many inexperienced radiologists. The purpose of our study was to automate the detection of meniscal tears of the knee using a computer-aided detection (CAD) algorithm. Automated segmentation of the sagittal T1-weighted MR imaging sequences of the knee in 28 patients with diagnoses of meniscal tears was performed using morphologic image processing in a 3-step process of cropping, thresholding, and application of morphological constraints. After meniscal segmentation, abnormal linear meniscal signal was extracted through a second thresholding process. The results of this process were validated by comparison with the interpretations of 2 board-certified musculoskeletal radiologists. The automated meniscal extraction algorithm successfully performed region-of-interest selection, thresholding, and object shape constraint tasks to produce a convex image isolating the menisci in more than 69% of the 28 cases. A high correlation was also noted between the CAD algorithm and the human observers in the identification of complex meniscal tears. Our initial investigation indicates considerable promise for the automatic detection of simple and complex meniscal tears of the knee using the CAD algorithm. This observation poses interesting possibilities for increasing radiologist productivity and confidence, improving patient outcomes, and applying more sophisticated CAD algorithms to orthopedic imaging tasks.
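
    The crop-threshold-constrain sequence can be sketched on a synthetic image; the 3x3 binary opening here stands in for the paper's morphological constraints, and all thresholds are illustrative assumptions:

```python
import numpy as np

def binary_opening(mask):
    """3x3 binary opening (erosion then dilation) with plain NumPy shifts."""
    def erode(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.ones_like(m, bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= p[1 + dy:1 + dy + m.shape[0], 1 + dx:1 + dx + m.shape[1]]
        return out
    def dilate(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.zeros_like(m, bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= p[1 + dy:1 + dy + m.shape[0], 1 + dx:1 + dx + m.shape[1]]
        return out
    return dilate(erode(mask))

def segment_meniscus(img, roi, low_thresh, tear_thresh):
    """Three-step sketch: crop, threshold the low-signal meniscus, apply a
    morphological constraint, then a second threshold flags the relatively
    bright linear signal inside the segmented meniscus."""
    y0, y1, x0, x1 = roi
    crop = img[y0:y1, x0:x1]                         # step 1: cropping
    meniscus = binary_opening(crop <= low_thresh)    # steps 2-3
    tear = meniscus & (crop >= tear_thresh)          # second thresholding
    return meniscus, tear
```

    The opening removes small isolated dark specks that would otherwise survive the first threshold, which is the role the shape constraints play in the described pipeline.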

  5. Exploring Automaticity in Text Processing: Syntactic Ambiguity as a Test Case

    Science.gov (United States)

    Rawson, Katherine A.

    2004-01-01

    A prevalent assumption in text comprehension research is that many aspects of text processing are automatic, with automaticity typically defined in terms of properties (e.g., speed and effort). The present research advocates conceptualization of automaticity in terms of underlying mechanisms and evaluates two such accounts, a…

  6. Memory-Based Processing as a Mechanism of Automaticity in Text Comprehension

    Science.gov (United States)

    Rawson, Katherine A.; Middleton, Erica L.

    2009-01-01

    A widespread theoretical assumption is that many processes involved in text comprehension are automatic, with automaticity typically defined in terms of properties (e.g., speed, effort). In contrast, the authors advocate for conceptualization of automaticity in terms of underlying cognitive mechanisms and evaluate one prominent account, the…

  7. Automatic detection of sub-km craters in high resolution planetary images

    Science.gov (United States)

    Urbach, Erik R.; Stepinski, Tomasz F.

    2009-06-01

    Impact craters are among the most studied geomorphic planetary features because they yield information about past geological processes and provide a tool for measuring the relative ages of observed geologic formations. Surveying impact craters is an important task which traditionally has been achieved by means of visual inspection of images. The sheer number of smaller craters present in high resolution images makes visual counting of such craters impractical. In this paper we present a method that brings together a novel, efficient crater identification algorithm with a data processing pipeline; together they enable fully automatic detection of sub-km craters in large panchromatic images. The technical details of the method are described and its performance is evaluated using a large, 12.5 m/pixel image centered on the Nanedi Valles on Mars. The detection percentage of the method is ˜70%. The system detects over 35,000 craters in this image; the average crater density is 0.5 craters/km2, but localized spots of much higher crater density are present. The method is designed to produce "million craters" global catalogs of sub-km craters on Mars and other planets wherever high resolution images are available. Such catalogs could be utilized for deriving high spatial resolution and high temporal precision stratigraphy on a regional or even planetary scale.

  8. Automatic segmentation of the lumen of the carotid artery in ultrasound B-mode images

    Science.gov (United States)

    Santos, André M. F.; Tavares, João Manuel R. S.; Sousa, Luísa; Santos, Rosa; Castro, Pedro; Azevedo, Elsa

    2013-02-01

    A new algorithm is proposed for the segmentation of the lumen and bifurcation boundaries of the carotid artery in B-mode ultrasound images. It uses the hypoechogenic characteristics of the lumen for the identification of the carotid boundaries and the echogenic characteristics for the identification of the bifurcation boundaries. The image to be segmented is processed with an anisotropic diffusion filter for speckle removal, and morphological operators are employed in the detection of the artery. The obtained information is then used to define two initial contours, one corresponding to the lumen and the other to the bifurcation boundaries, for the subsequent application of the Chan-Vese level set segmentation model. A set of longitudinal B-mode images of the common carotid artery (CCA) was acquired with a GE Healthcare Vivid-e ultrasound system (GE Healthcare, United Kingdom). All the acquired images include a part of the CCA and of the bifurcation that separates the CCA into the internal and external carotid arteries. In order to achieve the utmost robustness in the image acquisition process, i.e., images with high contrast and low speckle noise, the scanner was adjusted differently for each acquisition according to the medical exam. The obtained results prove that we were able to successfully apply a carotid segmentation technique based on cervical ultrasonography. The main advantage of the new segmentation method relies on the automatic identification of the carotid lumen, overcoming the limitations of traditional methods.
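
    The speckle-removal step can be illustrated with a classic Perona-Malik anisotropic diffusion filter; the paper's exact diffusion scheme and parameters may differ:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    (speckle) while the edge-stopping conductance preserves strong edges
    such as the lumen boundary. Parameters are illustrative."""
    u = img.astype(float)
    for _ in range(n_iter):
        # finite differences to the four neighbours (zero flux at borders)
        dn = np.roll(u, 1, 0) - u; dn[0, :] = 0
        ds = np.roll(u, -1, 0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u; dw[:, 0] = 0
        # conductance is near 1 for small (noise) differences,
        # near 0 across strong edges, so edges are preserved
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```

    With gamma = 0.2 and four neighbours the update is stable, and the filtered image is then suitable for the morphological operators and the Chan-Vese initialization described above.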

  9. Combining STEREO SECCHI COR2 and HI1 images for automatic CME front edge tracking

    Directory of Open Access Journals (Sweden)

    Kirnosov Vladimir

    2016-01-01

    Full Text Available COR2 coronagraph images are the most commonly used data for coronal mass ejection (CME) analysis among the various types of data provided by the STEREO (Solar Terrestrial Relations Observatory) SECCHI (Sun-Earth Connection Coronal and Heliospheric Investigation) suite of instruments. The field of view (FOV) of COR2 images covers 2–15 solar radii (Rs), which allows tracking the front edge of a CME in its initial stage to forecast the lead-time of a CME and its chances of reaching the Earth. However, estimating the lead-time of a CME using COR2 images alone gives a larger lead-time, which may be associated with greater uncertainty. To reduce this uncertainty, CME front edge tracking should continue beyond the FOV of COR2 images, so heliospheric imager (HI1) data covering a 15–90 Rs FOV must be included. In this paper, we propose a novel automatic method that takes both COR2 and HI1 images into account and combines the results to track the front edge of a CME continuously. The method consists of two modules: pre-processing and tracking. The pre-processing module produces a set of segmented images containing the signature of a CME for COR2 and HI1 separately; in addition, the HI1 images are resized and padded so that the center of the Sun is the central coordinate of the resized HI1 images. The resulting COR2 and HI1 image set is then fed into the tracking module to estimate the position angle (PA) and track the front edge of the CME. The detected front edge is then used to produce a height-time profile from which the speed of the CME is estimated. The method was validated using 15 CME events observed in the period from January 1, 2008 to August 31, 2009. The results demonstrate that the proposed method is effective for CME front edge tracking in both COR2 and HI1 images. Using this method, the CME front edge can now be tracked automatically and continuously over a much larger range, i.e., from 2 to 90 Rs, for the first time. These
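
    Turning the detected front-edge positions into a speed estimate is a linear fit of the height-time profile; the sketch below assumes a roughly constant speed over the tracked interval, which is the simplest model:

```python
import numpy as np

def cme_speed(times_s, heights_rs, rs_km=6.957e5):
    """Plane-of-sky CME speed in km/s from a height-time profile.

    times_s: elapsed times in seconds; heights_rs: front-edge heights in
    solar radii (Rs). A least-squares line gives the slope in Rs/s, which
    is converted to km/s using the solar radius (~6.957e5 km).
    """
    slope_rs_per_s = np.polyfit(times_s, heights_rs, 1)[0]
    return slope_rs_per_s * rs_km
```

    With the combined COR2 + HI1 tracking, the profile spans 2–90 Rs instead of 2–15 Rs, so the fit covers a much longer baseline.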

  10. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. First, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce the geometric distortion introduced when mapping 2D texture onto the 3D model. Second, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Third, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created textures without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  11. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  12. Toward automatic evaluation of defect detectability in infrared images of composites and honeycomb structures

    Science.gov (United States)

    Florez-Ospina, Juan F.; Benitez-Restrepo, H. D.

    2015-07-01

    Non-destructive testing (NDT) refers to inspection methods employed to assess a material specimen without impairing its future usefulness. An important class of these methods is infrared NDT (IRNDT), which employs the heat emitted by bodies/objects to rapidly and noninvasively inspect wide surfaces and to find specific defects such as delaminations, cracks, voids, and discontinuities in materials. Current advancements in sensor technology for IRNDT generate great amounts of image sequences, which require further processing to determine the integrity of objects. Processing techniques for IRNDT data implicitly aim at enhancing defect visibility. The IRNDT community commonly employs the signal-to-noise ratio (SNR) to measure defect visibility. Nonetheless, current applications of SNR are local, thereby overlooking spatial information, and depend on a priori knowledge of the defect's location. In this paper, we present a general framework to assess defect detectability based on SNR maps derived from processed IR images. The joint use of image segmentation procedures along with algorithms for filling regions of interest (ROI) estimates a reference background from which SNR maps are computed. Our main contributions are: (i) a method to compute SNR maps that takes spatial variation into account and is independent of a priori knowledge of the defect location in the sample, (ii) spatial background analysis in processed images, and (iii) semi-automatic calculation of segmentation algorithm parameters. We test our approach on carbon fiber and honeycomb samples with complex geometries and defects of different sizes and depths.
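The central quantity of the framework, a spatially resolved SNR map, can be sketched as follows. This is a simplified illustration assuming the reference background has already been estimated; the function name is hypothetical:

```python
import numpy as np

def snr_map(processed, background):
    """Pixel-wise SNR map: each pixel's contrast against a reference
    background, in units of the background standard deviation.  Unlike a
    single local SNR value, the map preserves spatial variation."""
    mu = float(background.mean())
    sigma = float(background.std())
    return (processed - mu) / sigma
```

Thresholding such a map (e.g. at a few standard deviations) then yields candidate defect regions without requiring the defect location in advance.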

  13. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

    Science.gov (United States)

    Schrauben, Julie E.

    2010-01-01

    LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

  14. Automatic Tracking and Motility Analysis of Human Sperm in Time-Lapse Images.

    Science.gov (United States)

    Urbano, Leonardo F; Masson, Puneet; VerMilyea, Matthew; Kam, Moshe

    2017-03-01

    We present a fully automated multi-sperm tracking algorithm. It has the demonstrated capability to detect and track simultaneously hundreds of sperm cells in recorded videos while accurately measuring motility parameters over time and with minimal operator intervention. Algorithms of this kind may help in associating dynamic swimming parameters of human sperm cells with fertility and fertilization rates. Specifically, we offer an image processing method, based on radar tracking algorithms, that automatically detects and tracks the swimming paths of human sperm cells in time-lapse microscopy image sequences of the kind analyzed by fertility clinics. Adapting the well-known joint probabilistic data association filter (JPDAF), we automatically tracked hundreds of human sperm simultaneously and measured their dynamic swimming parameters over time. Unlike existing CASA instruments, our algorithm can track sperm swimming in close proximity to each other and during apparent cell-to-cell collisions. Continuously collecting parameters for each tracked sperm without sample dilution (currently impossible using standard CASA systems) provides an opportunity to compare such data with standard fertility rates. The use of our algorithm thus has the potential to free the clinician from having to rely on elaborate motility measurements obtained manually by technicians, speed up semen processing, and provide medical practitioners and researchers with more useful data than are currently available.
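A full JPDAF is beyond the scope of a short example, but the underlying idea of associating per-frame detections with existing tracks can be illustrated with a much simpler greedy nearest-neighbour linker (a hypothetical stand-in, not the paper's algorithm):

```python
import numpy as np

def link_detections(tracks, detections, max_dist=10.0):
    """Greedy nearest-neighbour frame-to-frame linking: extend each track
    with the closest unclaimed detection, if it lies within `max_dist`
    pixels.  A JPDAF instead weighs all plausible associations jointly,
    which is what allows tracking through near-collisions."""
    detections = list(detections)
    for track in tracks:
        if not detections:
            break
        last = np.asarray(track[-1], dtype=float)
        dists = [np.linalg.norm(last - np.asarray(d, dtype=float)) for d in detections]
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            track.append(detections.pop(i))
    return tracks
```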

  15. Lateralized automatic auditory processing of phonetic versus musical information: a PET study.

    Science.gov (United States)

    Tervaniemi, M; Medvedev, S V; Alho, K; Pakhomov, S V; Roudas, M S; Van Zuijen, T L; Näätänen, R

    2000-06-01

    Previous positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies show that during attentive listening, processing of phonetic information is associated with higher activity in the left auditory cortex than in the right auditory cortex while the opposite is true for musical information. The present PET study determined whether automatically activated neural mechanisms for phonetic and musical information are lateralized. To this end, subjects engaged in a visual word classification task were presented with phonetic sound sequences consisting of frequent (P = 0.8) and infrequent (P = 0.2) phonemes and with musical sound sequences consisting of frequent (P = 0.8) and infrequent (P = 0.2) chords. The phonemes and chords were matched in spectral complexity as well as in the magnitude of frequency difference between the frequent and infrequent sounds (/e/ vs. /o/; A major vs. A minor). In addition, control sequences, consisting of either frequent (/e/; A major) or infrequent sounds (/o/; A minor) were employed in separate blocks. When sound sequences consisted of intermixed frequent and infrequent sounds, automatic phonetic processing was lateralized to the left hemisphere and musical to the right hemisphere. This lateralization, however, did not occur in control blocks with one type of sound (frequent or infrequent). The data thus indicate that automatic activation of lateralized neuronal circuits requires sound comparison based on short-term sound representations.

  16. Automatic airway wall segmentation and thickness measurement for long-range optical coherence tomography images.

    Science.gov (United States)

    Qi, Li; Huang, Shenghai; Heidari, Andrew E; Dai, Cuixia; Zhu, Jiang; Zhang, Xuping; Chen, Zhongping

    2015-12-28

    We present an automatic segmentation method for the delineation and quantitative thickness measurement of multiple layers in endoscopic airway optical coherence tomography (OCT) images. The boundaries of the mucosa and the sub-mucosa layers are accurately extracted using a graph-theory-based dynamic programming algorithm. The algorithm was tested with sheep airway OCT images. Quantitative thicknesses of the mucosal layers are obtained automatically for smoke inhalation injury experiments.
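The graph-theory-based dynamic programming idea behind the layer delineation can be sketched as a minimum-cost connected path traced across image columns. This toy version (the function name and the one-pixel smoothness constraint are assumptions, not details from the paper) illustrates the principle:

```python
import numpy as np

def trace_boundary(cost):
    """Dynamic-programming boundary trace: for each image column, find the
    row of a connected left-to-right path with minimum total cost, where
    the row may shift by at most one pixel between adjacent columns.
    Low cost would typically mark strong layer-boundary gradients."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            acc[r, c] += acc[lo:hi, c - 1].min()
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 2, -1, -1):
        r = path[-1]
        lo, hi = max(0, r - 1), min(rows, r + 2)
        path.append(lo + int(np.argmin(acc[lo:hi, c])))
    return path[::-1]
```

Running the trace twice, on gradient costs tuned to the mucosa and sub-mucosa boundaries respectively, would give the two contours whose row difference is the layer thickness.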

  17. Automatic Ferrite Content Measurement based on Image Analysis and Pattern Classification

    Directory of Open Access Journals (Sweden)

    Hafiz Muhammad Tanveer

    2015-05-01

    Full Text Available The existing manual point-counting technique for ferrite content measurement is a difficult, time-consuming method with limited accuracy, due to limited human perception and the error induced by points falling on the boundaries of the grid spacing. In this paper, we present a novel algorithm, based on image analysis and pattern classification, to evaluate the volume fraction of ferrite in microstructures containing ferrite and austenite. The prime focus of the proposed algorithm is to solve the problem of ferrite content measurement using an automatic binary classification approach. Classification of the image data into two distinct classes, using an optimum threshold-finding method, is the key idea behind the new algorithm. Automating the ferrite content measurement and speeding up the specimen testing procedure are the main features of the newly developed algorithm. The obtained results reflect an improved performance index achieved by reducing error sources, validated through comparison with the well-known method of Otsu.
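The optimum threshold-finding idea referred to above is exemplified by Otsu's classic method, which can be sketched together with the resulting area-fraction estimate (a minimal illustration; which phase appears darker depends on the etch, so that mapping is an assumption):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the grey level that maximises the
    between-class variance of the two resulting classes."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def phase_fraction(gray):
    """Area fraction of the darker phase after binary classification --
    a stand-in for the ferrite volume fraction."""
    t = otsu_threshold(gray)
    return float((gray <= t).mean())
```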

  18. Automatic detection and classification of damage zone(s) for incorporating in digital image correlation technique

    Science.gov (United States)

    Bhattacharjee, Sudipta; Deb, Debasis

    2016-07-01

    Digital image correlation (DIC) is a technique developed for monitoring the surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that is fractured and opened in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in a deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the resulting displacement fields represent the damage conditions reasonably well compared to the regular FEM-DIC technique that does not consider the damage zones.

  19. Automatic Recognition of Sunspots in HSOS Full-Disk Solar Images

    Science.gov (United States)

    Zhao, Cui; Lin, GangHua; Deng, YuanYong; Yang, Xiao

    2016-05-01

    A procedure is introduced to recognise sunspots automatically in solar full-disk photosphere images obtained from the Huairou Solar Observing Station, National Astronomical Observatories of China. The images are first pre-processed with a Gaussian smoothing algorithm. Sunspots are then recognised by the morphological bottom-hat operation and Otsu thresholding. False sunspot detections are eliminated by a criterion based on sunspot properties. In addition, in order to calculate the sunspot areas and the solar centre, the solar limb is extracted by a procedure using morphological closing and erosion operations with an adaptive threshold. The results of sunspot recognition reveal that the number of sunspots detected by our procedure agrees quite well with the manual method. The sunspot recognition rate is 95% and the error rate is 1.2%. The sunspot areas calculated by our method have a high correlation (95%) with the area data from the United States Air Force/National Oceanic and Atmospheric Administration (USAF/NOAA).
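The bottom-hat detection step can be sketched in a few lines. This is a naive pure-NumPy version with a square structuring element (a simplified illustration, not the HSOS pipeline):

```python
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def gray_erode(img, k=3):
    """Grayscale erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def bottom_hat(img, k=3):
    """Morphological bottom-hat: closing minus original.  Dark features
    smaller than the structuring element (e.g. sunspots on the bright
    photosphere) stand out as bright peaks, ready for thresholding."""
    closing = gray_erode(gray_dilate(img, k), k)
    return closing - img
```

Applying Otsu's threshold to the bottom-hat response then separates sunspot candidates from the quiet photosphere.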

  20. Automatic Recognition of Sunspots in HSOS Full-Disk Solar Images

    CERN Document Server

    Zhao, Cui; Deng, YuanYong; Yang, Xiao

    2016-01-01

    A procedure is introduced to recognise sunspots automatically in solar full-disk photosphere images obtained from the Huairou Solar Observing Station, National Astronomical Observatories of China. The images are first pre-processed with a Gaussian smoothing algorithm. Sunspots are then recognised by the morphological bottom-hat operation and Otsu thresholding. False sunspot detections are eliminated by a criterion based on sunspot properties. In addition, in order to calculate the sunspot areas and the solar centre, the solar limb is extracted by a procedure using morphological closing and erosion operations with an adaptive threshold. The results of sunspot recognition reveal that the number of sunspots detected by our procedure agrees quite well with the manual method. The sunspot recognition rate is 95% and the error rate is 1.2%. The sunspot areas calculated by our method have a high correlation (95%) with the area data from USAF/NOAA.

  1. A review of metaphase chromosome image selection techniques for automatic karyotype generation.

    Science.gov (United States)

    Arora, Tanvi; Dhir, Renu

    2016-08-01

    The karyotype is analyzed to detect genetic abnormalities. It is generated by arranging the chromosomes after extracting them from metaphase chromosome images. The chromosomes are non-rigid bodies that contain the genetic information of an individual. A metaphase chromosome image spread contains the chromosomes, but these chromosomes are not always distinct bodies: they can be individual chromosomes, touch one another, be bent, or even overlap, forming a cluster of chromosomes. The extraction of chromosomes from such touching and overlapping configurations is a very tedious process, and the segmentation of a randomly chosen metaphase chromosome image may not give correct and accurate results. Therefore, before taking up a metaphase chromosome image for analysis, it must be examined for the orientation of the chromosomes it contains. The various reported methods of metaphase chromosome image selection for automatic karyotype generation are compared in this paper. After analysis, it is concluded that each metaphase chromosome image selection method has its advantages and disadvantages.

  2. Automatic Registration of Low Altitude UAV Sequent Images and Laser Point Clouds

    Directory of Open Access Journals (Sweden)

    CHEN Chi

    2015-05-01

    Full Text Available A novel method is proposed for the automatic co-registration of unmanned aerial vehicle (UAV) image sequences and laser point clouds. Firstly, contours of building roofs are extracted from the image sequence and the laser point clouds using a marked point process and local salient region detection, respectively. The contours from the two data sets are matched via back-projection proximity. Secondly, the exterior orientations of the images are recovered using a linear solver based on the contour corner pairs, followed by a coplanarity optimization implied by the matched line segments of the contour pairs. Finally, the exterior orientation parameters of the images are further optimized by matching 3D points generated from the image sequence against the laser point clouds using an iterative closest point (ICP) algorithm with a relative-movement threshold constraint. Experiments are undertaken to check the validity and effectiveness of the proposed method. The results show that the proposed method robustly achieves high-precision co-registration of low-altitude UAV image sequences and laser point clouds. The accuracy of the co-produced DOMs meets 1:500 scale standards.

  3. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    Science.gov (United States)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images onto the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, together with high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph's metadata, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We make use of local features to register the iPhone image to the generated range image; the registration process is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the potential of the proposed algorithm framework for 3D urban map updating and enhancement purposes.

  4. Automatic registration of imaging mass spectrometry data to the Allen Brain Atlas transcriptome

    Science.gov (United States)

    Abdelmoula, Walid M.; Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Tolner, Else; van den Maagdenberg, Arn M. J. M.; Lelieveldt, B. P. F.; McDonnell, Liam; Dijkstra, Jouke

    2014-03-01

    Imaging Mass Spectrometry (IMS) is an emerging molecular imaging technology that provides spatially resolved information on biomolecular structures; each image pixel effectively represents a molecular mass spectrum. By combining histological images and IMS images, neuroanatomical structures can be distinguished based on their biomolecular features as opposed to morphological features. The combination of IMS data with spatially resolved gene expression maps of the mouse brain, as provided by the Allen Mouse Brain Atlas, would enable comparative studies of spatial metabolic and gene expression patterns in life-sciences research and biomarker discovery. As such, it would be highly desirable to spatially register IMS slices to the Allen Brain Atlas (ABA). In this paper, we propose a multi-step automatic registration pipeline to register ABA histology to IMS images. The key novelty of the method is the selection of the best reference section from the ABA, based on pre-processed histology sections. First, we extracted a hippocampus-specific geometrical feature from the given experimental histological section to initially localize it among the ABA sections. Then, feature-based linear registration is applied to the initially localized section and its two neighbors in the ABA to select the most similar reference section. A non-rigid registration yields a one-to-one mapping of the experimental IMS slice to the ABA. The pipeline was applied to 6 coronal sections from two mouse brains, showing high anatomical correspondence and demonstrating the feasibility of complementing biomolecule distributions from individual mice with the genome-wide ABA transcriptome.

  5. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    Science.gov (United States)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images exceeds a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker than it appears in the other images of the time series. The detection comprises three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during cloud detection are not used when computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it proved to better discriminate shadow pixels. The method was tested on times series of Landsat 8 and Pl
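The median-based shadow test described above can be sketched as follows (a minimal illustration; the threshold value is an assumption, not a value from the paper):

```python
import numpy as np

def detect_shadows(series, delta=30.0):
    """Temporal-median shadow test: a pixel in one image is flagged as
    shadow when it is darker than the per-pixel median of the whole time
    series (the synthetic ortho-image) by more than `delta`.  In the
    paper this comparison is done on the NIR channel."""
    stack = np.asarray(series, dtype=float)   # (n_images, h, w)
    median = np.median(stack, axis=0)         # synthetic ortho-image
    return stack < (median - delta)           # one boolean mask per image
```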

  6. Automatic cloud detection for high resolution satellite stereo images and its application in terrain extraction

    Science.gov (United States)

    Wu, Teng; Hu, Xiangyun; Zhang, Yong; Zhang, Lulin; Tao, Pengjie; Lu, Luping

    2016-11-01

    The automatic extraction of terrain from high-resolution satellite optical images is very difficult under cloudy conditions. Therefore, accurate cloud detection is necessary to fully use the cloud-free parts of images for terrain extraction. This paper addresses automated cloud detection by introducing an image-matching-based method under a stereo vision framework, along with the optimized use of non-cloudy areas in stereo matching and the generation of digital surface models (DSMs). Given that clouds are usually separated from the terrain surface, cloudy areas are extracted by integrating a dense-matching DSM, a worldwide digital elevation model (DEM) (i.e., the shuttle radar topography mission (SRTM)) and gray-level information from the images. This process consists of the following steps: an image-based DSM is first generated through a multiple-primitive multi-image matcher. Once it is aligned with the reference DEM based on common features, places with significant height differences between the DSM and the DEM suggest potential cloud cover. Detecting clouds at these places in the images then enables precise cloud delineation. In the final step, elevations of the reference DEM within the cloud cover are assigned to the corresponding regions of the DSM to generate a cloud-free DEM. The proposed approach is evaluated with panchromatic images of the Tianhui satellite and has been successfully used in its daily operation. The cloud detection accuracy for images without snow is as high as 95%. Experimental results demonstrate that the proposed method can significantly improve the usability of cloudy panchromatic satellite images for terrain extraction.
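The DSM-DEM height-difference test and the final cloud-free DEM generation can be sketched as below (a simplified illustration; the function names and the 500 m threshold are assumptions, not values from the paper):

```python
import numpy as np

def potential_cloud_mask(dsm, dem, height_thresh=500.0):
    """Flag cells where the matched DSM sits far above the reference DEM
    (e.g. SRTM): clouds float above the terrain, so large positive height
    differences mark potential cloud cover."""
    return (np.asarray(dsm, dtype=float) - np.asarray(dem, dtype=float)) > height_thresh

def fill_cloud_free_dem(dsm, dem, mask):
    """Replace cloud-covered DSM cells with reference-DEM elevations to
    produce a cloud-free DEM, as in the final step above."""
    out = np.asarray(dsm, dtype=float).copy()
    out[mask] = np.asarray(dem, dtype=float)[mask]
    return out
```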

  7. Automatic Image Registration Using Free and Open Source Software

    Science.gov (United States)

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Image registration is the most critical operation in remote sensing applications, enabling location-based referencing and analysis of earth features. It is the first step in any process involving identification, time-series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time-consuming and laborious manual methods of finding the features of the input image that match the reference. Moreover, because the process involves human interaction, it does not produce consistent results across multiple operations at different times. Automated procedures rely on accurately determining the matching locations or points in both images under comparison, and such procedures are robust and consistent over time. Different algorithms are available to achieve this, based on pattern recognition, feature-based detection, similarity techniques, etc. In the present study and implementation, correlation-based methods have been used, with an improvement: a newly developed technique for identifying and pruning false match points. Free and Open Source Software (FOSS) has been used to develop the methodology, to reach a wider audience without any dependency on COTS (commercial off-the-shelf) software. The standard deviation from the foci of the ellipse of correlated points is a statistical means of ensuring the best match of the points of interest, based on both intensity values and location correspondence. The methodology is developed and standardised with enhancements to meet the registration requirements of remote sensing imagery. Results have shown a performance improvement, nearly matching visual techniques, and have been implemented in operational remote sensing projects. The main advantage of the proposed methodology is its viability in a production-mode environment.
This paper also shows that the visualization capabilities of MapWinGIS, GDAL's image handling abilities and OSSIM's correlation facility can be efficiently
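The correlation matching at the heart of such automatic registration can be sketched with a brute-force normalised cross-correlation template search (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Exhaustive correlation matching: slide the template over the image
    and return the offset with the highest NCC score -- the basic step
    behind correlation-based tie-point detection."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Outlier pruning, such as the ellipse-based statistic mentioned above, would then discard tie points whose offsets disagree with the consensus.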

  8. Guidelines for Automatic Data Processing Physical Security and Risk Management. Federal Information Processing Standards Publication 31.

    Science.gov (United States)

    National Bureau of Standards (DOC), Washington, DC.

    These guidelines provide a handbook for use by federal organizations in structuring physical security and risk management programs for their automatic data processing facilities. This publication discusses security analysis, natural disasters, supporting utilities, system reliability, procedural measures and controls, off-site facilities,…

  9. Automatic segmentation and measurements of gestational sac using static B-mode ultrasound images

    Science.gov (United States)

    Ibrahim, Dheyaa Ahmed; Al-Assam, Hisham; Du, Hongbo; Farren, Jessica; Al-karawi, Dhurgham; Bourne, Tom; Jassim, Sabah

    2016-05-01

    Ultrasound imagery has been widely used for medical diagnosis. Ultrasound scanning is safe and non-invasive, and is hence used throughout pregnancy for monitoring growth. In the first trimester, an important measurement is that of the gestational sac (GS). The task of measuring the GS size from an ultrasound image is done manually by a gynecologist. This paper presents a new approach to automatically segment a GS from a static B-mode image by exploiting its geometric features for the early identification of miscarriage cases. To accurately locate the GS in the image, the proposed solution uses the wavelet transform to suppress speckle noise by eliminating the high-frequency sub-bands, producing an enhanced image. This is followed by a segmentation step that isolates the GS through several stages. First, the mean value is used as a threshold to binarise the image, followed by filtering out unwanted objects based on their circularity, size and mean greyscale value. The mean value of each object is then used to further select candidate objects. A region-growing technique is applied as post-processing to finally identify the GS. We evaluated the effectiveness of the proposed solution by first comparing the automatic size measurements of the segmented GS against manual measurements, and then integrating the proposed segmentation solution into a classification framework for identifying miscarriage cases and pregnancies of unknown viability (PUV). Both test results demonstrate that the proposed method is effective in segmenting the GS and classifying the outcomes with a high level of accuracy (sensitivity (miscarriage) of 100% and specificity (PUV) of 99.87%).
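The final region-growing step can be sketched as a simple intensity-based flood fill (a minimal illustration; the tolerance parameter and function name are assumptions):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Simple region growing from a seed pixel: absorb 4-connected
    neighbours whose intensity is within `tol` of the seed value.
    In the pipeline above, the seed would come from the candidate
    object selected by the circularity/size/mean filters."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(image[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```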

  10. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert

    2017-01-01

    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  11. Eye Redness Image Processing Techniques

    Science.gov (United States)

    Adnan, M. R. H. Mohd; Zain, Azlan Mohd; Haron, Habibollah; Alwee, Razana; Zulfaezal Che Azemin, Mohd; Osman Ibrahim, Ashraf

    2017-09-01

    The use of photographs for the assessment of ocular conditions has been suggested as a way to further standardize clinical procedures, but the selection of the photographs to be used as scale reference images has been subjective. Numerous methods have been proposed to assign eye redness scores by computational means, and image analysis techniques have been investigated over the last 20 years in an attempt to forgo subjective grading scales. Image segmentation is one of the most important and challenging problems in image processing. This paper briefly outlines image processing in general and the implementation of image segmentation for eye redness assessment.

  12. Semi-automatic removal of foreground stars from images of galaxies

    CERN Document Server

    Frei, Z

    1996-01-01

    A new procedure designed to remove foreground stars from galaxy profiles is presented. Although several programs exist for stellar and faint-object photometry, none of them treats star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot. The major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function (PSF) from well-separated stars that are situated off the galaxy; (2) automatic identification of those peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where the residuals are truly significant); and (3) cosmetic fixing of the remaining degradations in the image. The algorithm and software presented here are significantly better for the automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since...
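Step (2), scaling the empirical PSF to a detected peak and subtracting it, can be sketched as follows (a simplified illustration assuming the local background has already been subtracted, and with no residual patching; the function name is hypothetical):

```python
import numpy as np

def remove_star(image, psf, center):
    """Fit the amplitude of an empirical PSF to the background-subtracted
    image patch around `center` by least squares, then subtract the
    scaled PSF -- the core of the star-removal step."""
    ph, pw = psf.shape
    y0, x0 = center[0] - ph // 2, center[1] - pw // 2
    patch = image[y0:y0 + ph, x0:x0 + pw]
    amp = (patch * psf).sum() / (psf * psf).sum()   # least-squares amplitude
    out = image.copy()
    out[y0:y0 + ph, x0:x0 + pw] = patch - amp * psf
    return out
```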

  13. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  14. Real-time hyperspectral processing for automatic nonferrous material sorting

    Science.gov (United States)

    Picón, Artzai; Ghita, Ovidiu; Bereciartua, Aranzazu; Echazarra, Jone; Whelan, Paul F.; Iriondo, Pedro M.

    2012-01-01

    The application of hyperspectral sensors in the development of machine vision solutions has become increasingly popular, as the spectral characteristics of the imaged materials are better modeled in the hyperspectral domain than in standard trichromatic red, green, blue data. While there is no doubt that the availability of detailed spectral information is opportune, as it opens the possibility of constructing robust image descriptors, it also raises a substantial challenge when this high-dimensional data is used in the development of real-time machine vision systems. To alleviate the computational demand, decorrelation techniques are commonly applied prior to feature extraction. While this approach reduces the size of the spectral descriptor to some extent, data decorrelation alone has proved insufficient for attaining real-time classification. This fact is particularly apparent when pixel-wise image descriptors are not sufficiently robust to model the spectral characteristics of the imaged materials, a case in which the spatial information (or textural properties) also has to be included in the classification process. The integration of spectral and spatial information entails a substantial computational cost, and as a result the prospects of real-time operation for the developed machine vision system are compromised. To address this requirement, in this paper we have reengineered the integration of spectral and spatial information in the material classification process to allow the real-time sorting of the nonferrous fractions contained in waste electrical and electronic equipment scrap.

  15. A workflow for the automatic segmentation of organelles in electron microscopy image stacks.

    Science.gov (United States)

    Perez, Alex J; Seyedhosseini, Mojtaba; Deerinck, Thomas J; Bushong, Eric A; Panda, Satchidananda; Tasdizen, Tolga; Ellisman, Mark H

    2014-01-01

    Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime.

  16. Complete automatic target cuer/recognition system for tactical forward-looking infrared images

    Science.gov (United States)

    Ernisse, Brian E.; Rogers, Steven K.; DeSimio, Martin P.; Raines, Richard A.

    1997-09-01

A complete forward-looking IR (FLIR) automatic target cuer/recognizer (ATC/R) is presented. The data used for development and testing of this ATC/R are first-generation FLIR images collected using an F-15E. The database contains thousands of images with various mission profiles and target arrangements. The specific target of interest is a mobile missile launcher, the primary target. The goal is to locate all vehicles (secondary targets) within a scene and identify the primary targets. The system developed and tested includes an image segmenter, a region clustering algorithm, a feature extractor, and a classifier. Conventional image processing algorithms in conjunction with neural network techniques are used to form a complete ATC/R system. The conventional techniques include hit/miss filtering, difference-of-Gaussian filtering, and region clustering. A neural network (multilayer perceptron) is used for classification. These algorithms are developed, tested, and then combined into a functional ATC/R system. The overall primary target detection rate (cuer) is 84%, with a 69% primary target identification (recognizer) rate at ranges relevant to munitions release. Furthermore, the false alarm rate (a nontarget cued as a target) is only 2.3 per scene. The research is being completed with a 10-flight test profile using third-generation FLIR images.

  17. Automatic counting and classification of bacterial colonies using hyperspectral imaging

    Science.gov (United States)

    Detection and counting of bacterial colonies on agar plates is a routine microbiology practice to get a rough estimate of the number of viable cells in a sample. There have been a variety of different automatic colony counting systems and software algorithms mainly based on color or gray-scale pictu...

  18. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

Identifying each process and the constraint relations among processes from complex wiring harness drawings quickly and accurately is the basis for formulating process routes. Drawing on knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we establish a wiring harness graph model. We then present an algorithm for identifying technology processes automatically and, finally, describe the relationships between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.
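The constraint matrix can be pictured as a precedence matrix over processes, from which a feasible process route is any topological order. The toy process names below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical wiring-harness processes; C[i, j] == 1 means process i
# must be completed before process j can start (a precedence constraint).
processes = ["cut wire", "strip ends", "crimp terminal", "insert into connector"]
C = np.array([
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

def topological_order(constraint):
    """Return one feasible process route for an acyclic constraint matrix."""
    remaining = list(range(len(constraint)))
    order = []
    while remaining:
        # pick a process with no unfinished predecessor
        ready = [j for j in remaining
                 if not any(constraint[i][j] for i in remaining)]
        order.append(ready[0])
        remaining.remove(ready[0])
    return order

route = [processes[i] for i in topological_order(C)]
print(route)
```

A scheduler would pick among the `ready` candidates according to resource availability; taking the first one simply yields one valid route.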

  19. Automatic retrieval of bone fracture knowledge using natural language processing.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew S; Maley, Joan; Biswal, Sandip

    2013-08-01

    Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is opportunity for implementing NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system which extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implement our NLP in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system which retrieves bone fracture knowledge in real time.
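A rule-based approximation of the concept-extraction and retrieval steps might look as follows; the lexicons, knowledge base, and report sentence are illustrative stand-ins, not the authors' system (a real system would draw anatomy and fracture vocabularies from an ontology such as RadLex):

```python
import re

# Tiny illustrative lexicons (hypothetical, not the authors' vocabulary).
ANATOMY = ["scaphoid", "radius", "ulna", "femur", "tibia"]
FRACTURE_TERMS = ["fracture", "fractured"]

KNOWLEDGE = {  # hypothetical knowledge base keyed by anatomy concept
    "scaphoid": "Scaphoid fractures risk avascular necrosis of the proximal pole.",
}

def extract_concepts(report):
    """Find anatomy concepts and whether a fracture is mentioned."""
    text = report.lower()
    anatomy = [a for a in ANATOMY if re.search(r"\b" + a + r"\b", text)]
    fracture = any(re.search(r"\b" + t + r"\b", text) for t in FRACTURE_TERMS)
    return anatomy, fracture

def retrieve(report):
    """Return knowledge entries relevant to the dictated sentence."""
    anatomy, fracture = extract_concepts(report)
    if not fracture:
        return []
    return [KNOWLEDGE[a] for a in anatomy if a in KNOWLEDGE]

facts = retrieve("Nondisplaced fracture of the scaphoid waist is noted.")
print(facts)
```

Run against each sentence as the radiologist dictates, this is the shape of the real-time feedback loop the abstract describes: concepts in, relevant criteria out.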

  20. Improved automatic tuning of PID controller for stable processes.

    Science.gov (United States)

    Kumar Padhy, Prabin; Majhi, Somanath

    2009-10-01

This paper presents an improved automatic tuning method for stable processes using a modified relay in the presence of static load disturbances and measurement noise. The modified relay consists of a standard relay in series with a PI controller of unity proportional gain. The integral time constant of the PI controller of the modified relay is chosen so as to ensure a minimum loop phase margin of 30°. A limit cycle is then obtained using the modified relay. Hereafter, the PID controller is designed using the limit cycle output data. The derivative time constant is obtained by maintaining the above mentioned loop phase margin. Minimizing the distance of the Nyquist curve of the loop transfer function from the imaginary axis of the complex plane gives the proportional gain. The integral time constant of the PID controller is set equal to the integral time constant of the PI controller of the modified relay. The effectiveness of the proposed technique is verified by simulation results.
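Relay-based automatic tuners of this family build on classical relay-feedback identification, which estimates the process's ultimate point from the limit cycle via the describing-function approximation. A generic sketch of that underlying step (Åström–Hägglund relay test followed by Ziegler–Nichols rules, NOT the authors' modified-relay formulas, which additionally involve the PI element and phase-margin condition):

```python
import math

def ultimate_point(relay_amplitude, limit_cycle_amplitude, limit_cycle_period):
    """Estimate the ultimate gain Ku and ultimate period Pu from a relay test.
    Describing-function result: Ku = 4*h / (pi * a)."""
    ku = 4.0 * relay_amplitude / (math.pi * limit_cycle_amplitude)
    return ku, limit_cycle_period

def ziegler_nichols_pid(ku, pu):
    """Classical Ziegler-Nichols PID rules from the ultimate point."""
    return {"Kp": 0.6 * ku, "Ti": 0.5 * pu, "Td": 0.125 * pu}

# Hypothetical relay experiment: relay height 1.0, measured limit cycle
# amplitude 0.5 and period 4.0 s.
ku, pu = ultimate_point(relay_amplitude=1.0,
                        limit_cycle_amplitude=0.5,
                        limit_cycle_period=4.0)
print(ziegler_nichols_pid(ku, pu))
```

The paper's contribution is to replace the bare relay with a relay-plus-PI element and to derive the PID parameters from a phase-margin condition instead of the Ziegler–Nichols table; the data flow (limit cycle in, PID gains out) is the same.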

  1. Automatic processing of CERN video, audio and photo archives

    Energy Technology Data Exchange (ETDEWEB)

Kwiatek, M [CERN, Geneva (Switzerland)], E-mail: Michal.Kwiatek@cern.ch

    2008-07-15

The digitization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-term coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on demand by the CERN Server Self Service Centre, so that new servers can easily be configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers, with streaming implemented using Windows Media Services.

  2. Fully automatic and reference-marker-free image stitching method for full-spine and full-leg imaging with computed radiography

    Science.gov (United States)

    Wang, Xiaohui; Foos, David H.; Doran, James; Rogers, Michael K.

    2004-05-01

Full-leg and full-spine imaging with standard computed radiography (CR) systems requires several cassettes/storage phosphor screens to be placed in a staggered arrangement and exposed simultaneously to achieve an increased imaging area. A method has been developed that can automatically and accurately stitch the acquired sub-images without relying on any external reference markers. It can detect and correct the order, orientation, and overlap arrangement of the sub-images for stitching. The automatic determination of the order, orientation, and overlap arrangement of the sub-images consists of (1) constructing a hypothesis list that includes all cassette/screen arrangements, (2) refining hypotheses based on a set of rules derived from imaging physics, (3) correlating each consecutive sub-image pair in each hypothesis and establishing an overall figure-of-merit, and (4) selecting the hypothesis with the maximum figure-of-merit. The stitching process requires the CR reader to overscan each CR screen so that the screen edges are completely visible in the acquired sub-images. The rotational displacement and vertical displacement between two consecutive sub-images are calculated by matching the orientation and location of the screen edge in the front image and its corresponding shadow in the back image. The horizontal displacement is estimated by maximizing the correlation function between the two image sections in the overlap region. Accordingly, the two images are stitched together. This process is repeated for the newly stitched composite image and the next consecutive sub-image until a full-image composite is created. The method has been evaluated in both phantom experiments and clinical studies. The standard deviation of image misregistration is below one image pixel.
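The horizontal-displacement estimate can be sketched as a search over candidate shifts that maximizes a normalized correlation score. Synthetic 1-D rows stand in here for the two overlap-region image sections; the paper's 2-D formulation is analogous:

```python
import numpy as np

def best_shift(row_a, row_b, max_shift):
    """Return the horizontal shift of row_b (relative to row_a) that
    maximizes the normalized correlation of the overlapping samples."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = row_a[s:], row_b[:len(row_b) - s]
        else:
            a, b = row_a[:s], row_b[-s:]
        a = a - a.mean()
        b = b - b.mean()
        score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best, best_score = s, score
    return best

rng = np.random.default_rng(1)
signal = rng.normal(size=300)
shift = 7
row_a, row_b = signal, np.roll(signal, -shift)   # row_b leads row_a by 7 samples
print(best_shift(row_a, row_b, max_shift=20))    # recovers the shift (7)
```

In practice the search window would be bounded by the physically possible cassette overlap, exactly as the hypothesis rules in the paper bound the arrangement search.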

  3. VIPS: an image processing system for large images

    Science.gov (United States)

    Cupitt, John; Martinez, Kirk

    1996-02-01

This paper describes VIPS (VASARI Image Processing System), an image processing system developed by the authors in the course of the EU-funded projects VASARI (1989-1992) and MARC (1992-1995). VIPS implements a fully demand-driven dataflow image IO (input-output) system. Evaluation of library functions is delayed for as long as possible. When evaluation does occur, all delayed operations evaluate together in a pipeline, requiring no space for storing intermediate images and no unnecessary disc IO. If more than one CPU is available, then VIPS operations will automatically evaluate in parallel, giving an approximately linear speed-up. The evaluation system can be controlled by the application programmer. We have implemented a user-interface for the VIPS library which uses expose events in an X window rather than disc output to drive evaluation. This makes it possible, for example, for the user to rotate an 800 MByte image by 12 degrees and immediately scroll around the result.
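The demand-driven evaluation idea can be illustrated with Python generators: no intermediate image is materialized, and nothing is computed until a sink pulls data through the pipeline. This is a conceptual sketch only, not the VIPS API:

```python
# Conceptual sketch of demand-driven pipelines (not the VIPS API):
# each stage is a generator, so rows flow through the whole pipeline
# one at a time and no intermediate image is ever stored.

def read_rows(image):
    for row in image:
        yield row

def invert(rows):
    for row in rows:
        yield [255 - px for px in row]

def threshold(rows, t):
    for row in rows:
        yield [255 if px > t else 0 for px in row]

image = [[0, 100, 200], [50, 150, 250]]
pipeline = threshold(invert(read_rows(image)), t=128)

# Nothing has been computed yet; pulling the first row triggers
# evaluation of exactly one row through every stage.
print(next(pipeline))   # [255, 255, 0]
```

This is why a viewer driven by expose events, as described above, only ever pays for the tiles the user actually looks at.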

  4. Challenges in automatic sorting of construction and demolition waste by hyperspectral imaging

    Science.gov (United States)

    Hollstein, Frank; Cacho, Íñigo; Arnaiz, Sixto; Wohllebe, Markus

    2016-05-01

EU-28 countries currently generate 460 Mt/year of construction and demolition waste (C&DW), and the generation rate is expected to reach around 570 Mt/year between 2025 and 2030. There is great potential for recycling C&DW materials, since they are massively produced and contain valuable resources. But newly generated C&DW is more complex than older waste, and there is a need to shift from traditional recycling approaches to novel recycling solutions. One basic step towards this objective is an improvement in (automatic) sorting technology. Hyperspectral imaging is a promising candidate to support the process. However, industrial adoption of hyperspectral imaging in the C&DW recycling sector is currently limited due to high investment costs, the still insufficient robustness of optical sensor hardware in harsh ambient conditions and, because of the need for sensor fusion, the lack of well-engineered software methods to perform the (on-line) sorting tasks. Frame rates of over 300 Hz are needed for a successful sorting result. Currently the biggest challenges in C&DW detection concern the need to overlap VIS, NIR and SWIR hyperspectral images in time and space, in particular for the selective recognition of contaminated particles. In the present study, a new approach for hyperspectral imagers is presented that exploits SWIR hyperspectral information in real time (at 300 Hz). The contribution describes both laboratory results on the optical detection of the most important C&DW material composites and a development path towards industrial implementation in automatic sorting and separation lines. The main focus is placed on closing the two recycling circuits "grey to grey" and "red to red" because of their outstanding potential for sustainability in the conservation of construction resources.

  5. Automatic solar panel recognition and defect detection using infrared imaging

    Science.gov (United States)

    Gao, Xiang; Munson, Eric; Abousleman, Glen P.; Si, Jennie

    2015-05-01

Failure-free operation of solar panels is of fundamental importance for modern commercial solar power plants. To achieve higher power generation efficiency and longer panel life, a simple and reliable panel evaluation method is required. By using thermal infrared imaging, anomalies can be detected without having to incorporate expensive electrical detection circuitry. In this paper, we propose a solar panel defect detection system, which automates the inspection process and mitigates the need for manual panel inspection in a large solar farm. Infrared video sequences of each array of solar panels are first collected by an infrared camera mounted on a moving cart, which is driven from array to array in a solar farm. The image processing algorithm segments the solar panels from the background in real time, with only the height of the array (specified as the number of rows of panels in the array) being given as prior information to aid in the segmentation process. In order to "count" the number of panels within any given array, frame-to-frame panel association is established using optical flow. Local anomalies in a single panel such as hotspots and cracks are immediately detected and labeled as soon as the panel is recognized in the field of view. After the data from an entire array is collected, hot panels are detected using DBSCAN clustering. On real-world test data containing over 12,000 solar panels, over 98% of all panels are recognized and correctly counted, with 92% of all types of defects being identified by the system.
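The hot-panel step can be illustrated with a minimal DBSCAN over per-panel temperatures: the dense cluster captures normally operating panels, and noise points flag hot ones. The temperatures below are invented, and a production system would use a library implementation (e.g. scikit-learn's DBSCAN) rather than this toy re-implementation:

```python
import numpy as np

def dbscan_1d(values, eps, min_pts):
    """Minimal DBSCAN on scalar values; returns a label per point
    (-1 = noise/outlier, otherwise a cluster id)."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = np.flatnonzero(np.abs(values - values[i]) <= eps)
        if len(neighbors) < min_pts:
            continue                       # i stays noise (for now)
        labels[i] = cluster
        queue = list(neighbors)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border or newly reached point
            if visited[j]:
                continue
            visited[j] = True
            j_neighbors = np.flatnonzero(np.abs(values - values[j]) <= eps)
            if len(j_neighbors) >= min_pts:
                queue.extend(j_neighbors)
        cluster += 1
    return labels

# Hypothetical mean panel temperatures (deg C); two are abnormally hot.
temps = [31.2, 30.8, 31.5, 30.9, 31.1, 30.7, 45.3, 31.0, 47.1, 31.3]
labels = dbscan_1d(temps, eps=1.0, min_pts=3)
hot = [t for t, lab in zip(temps, labels) if lab == -1]
print(hot)   # the outlier (hot) panels
```

Density-based clustering suits this task because the number of hot panels is unknown in advance and no fixed temperature threshold has to be chosen per array.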

  6. Image Processing Research

    Science.gov (United States)

    1975-09-30

linear. (c) The prediction is to be based on a selected small number of past estimates. This will impose a desired limited memory requirement for the... observational errors can lead to oscillatory estimates. Since c is generally quite smooth, it is reasonable to impose some smoothing constraints on... In figure 4, continuous Gaussian noise was added to an image; median filtering resulted in a slight visual improvement. For image enhancement applications

  7. Childhood trauma exposure disrupts the automatic regulation of emotional processing

    National Research Council Canada - National Science Library

    Marusak, Hilary A; Martin, Kayla R; Etkin, Amit; Thomason, Moriah E

    ... precipitate illness are evident during formative, developmental years. This study examined whether automatic regulation of emotional conflict is perturbed in a high-risk urban sample of trauma-exposed children and adolescents...

  8. MO-F-CAMPUS-J-02: Automatic Recognition of Patient Treatment Site in Portal Images Using Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chang, X; Yang, D [Washington University in St Louis, St Louis, MO (United States)

    2015-06-15

Purpose: To investigate a method to automatically recognize the treatment site in X-ray portal images. It could be useful for detecting potential treatment errors and for providing guidance to subsequent tasks, e.g. automatically verifying the patient's daily setup. Methods: The portal images were exported from MOSAIQ as DICOM files, and were 1) processed with a threshold-based intensity transformation algorithm to enhance contrast, and 2) then down-sampled (from 1024×768 to 128×96) using a bi-cubic interpolation algorithm. An appearance-based vector space model (VSM) was used to rearrange the images into vectors. A principal component analysis (PCA) method was used to reduce the vector dimensions. A multi-class support vector machine (SVM), with radial basis function kernel, was used to build the treatment site recognition models. These models were then used to recognize the treatment sites in the portal images. Portal images of 120 patients were included in the study. The images were selected to cover six treatment sites: brain, head and neck, breast, lung, abdomen and pelvis. Each site had images from twenty patients. Cross-validation experiments were performed to evaluate the performance. Results: The MATLAB Image Processing Toolbox and scikit-learn (a machine learning library in Python) were used to implement the proposed method. The average accuracies using the AP and RT images separately were 95% and 94%, respectively. The average accuracy using AP and RT images together was 98%. Computation time was ∼0.16 seconds per patient with an AP or RT image, and ∼0.33 seconds per patient with both AP and RT images. Conclusion: The proposed method of treatment site recognition is efficient and accurate. It is not sensitive to differences in image intensity, size, or patient position in the portal images. It could be useful for patient safety assurance. The work was partially supported by a research grant from Varian Medical Systems.
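The vectorize-reduce-classify pipeline can be sketched on synthetic data. For self-containment, a nearest-centroid classifier stands in for the paper's RBF-kernel SVM, and the image statistics (two "sites" differing in mean intensity) are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for down-sampled 128x96 portal images: two "sites"
# whose images differ in mean intensity. Each image is flattened into a
# vector (the appearance-based VSM step).
def make_images(n, offset):
    return rng.normal(loc=offset, scale=1.0, size=(n, 128 * 96)).astype(np.float32)

X = np.vstack([make_images(20, 0.0), make_images(20, 0.3)])
y = np.array([0] * 20 + [1] * 20)

# PCA via SVD of the centered data matrix (the dimension-reduction step).
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                    # 10-dimensional features per image

# Nearest-centroid classifier as a simple stand-in for the paper's SVM.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])

def predict(image):
    z = (image - mean) @ Vt[:10].T
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

test_image = make_images(1, 0.3)[0]   # drawn from site 1's distribution
print(predict(test_image))
```

The PCA step is what makes the classifier cheap at recognition time: each 12,288-pixel image is compared in a 10-dimensional space, which is consistent with the sub-second timings reported above.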

  9. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    Science.gov (United States)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-01

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTVP) and involved lymph nodes (GTVLN) to simulate the localization process in image-guided radiation therapy. Techniques included “standard” (direct registration of weekly images to a planning CT), “seeded” (manual prealignment of targets to guide standard registration), “transitive-based” (alignment of pretreatment and planning CTs through one or more intermediate images), and “rereferenced” (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTVP and GTVLN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTVP centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10−3), respectively, but the smallest GTVP LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10−6). Standard registration significantly reduced GTVLN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10−3) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE as low as 3

  10. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    Science.gov (United States)

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated. Using local statistics, the tumor objects were identified among different objects. In this level set method, the calculation of the parameters is a challenging task. The calculations of different parameters for different types of images were automatic. The basic thresholding value was updated and adjusted automatically for different MR images. This thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on the magnetic resonance images of the brain for tumor segmentation and its performance was evaluated visually and quantitatively. Numerical experiments on some brain tumor images highlighted the efficiency and robustness of this method.

  11. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted by a scale-invariant feature transform (SIFT) algorithm only from the overlapped region of both images. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
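The robust matching step can be illustrated with a minimal RANSAC that estimates a pure translation between putative feature matches contaminated by outliers. The points are synthetic; a real mosaic pipeline would match SIFT descriptors and estimate a full homography with OpenCV:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "SIFT matches": a true inter-image translation plus 20 outliers.
true_shift = np.array([120.0, -45.0])
inliers = rng.uniform(0, 2000, size=(80, 2))
pts_ref = np.vstack([inliers, rng.uniform(0, 2000, size=(20, 2))])
pts_mos = np.vstack([inliers + true_shift + rng.normal(scale=0.5, size=(80, 2)),
                     rng.uniform(0, 2000, size=(20, 2))])

def ransac_translation(src, dst, iters=200, tol=3.0):
    """Estimate a 2-D translation robust to mismatches (RANSAC)."""
    best_mask = None
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                    # 1-point translation model
        mask = np.linalg.norm(dst - (src + shift), axis=1) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    # refit the model on the consensus set
    return (dst[best_mask] - src[best_mask]).mean(axis=0)

est = ransac_translation(pts_ref, pts_mos)
print(est)   # close to (120, -45)
```

Restricting feature extraction to the geo-computed overlap rectangle, as the abstract describes, shrinks both the descriptor count and the outlier ratio this RANSAC stage has to survive.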

  12. Automatic cell object extraction of red tide algae in microscopic images

    Science.gov (United States)

    Yu, Kun; Ji, Guangrong; Zheng, Haiyong

    2017-03-01

Extracting the cell objects of red tide algae is the most important step in the construction of an automatic microscopic image recognition system for harmful algal blooms. This paper describes a set of composite methods for the automatic segmentation of cells of red tide algae from microscopic images. Depending on the existence of setae, we classify the common marine red tide algae into non-setae algae species and Chaetoceros, and design segmentation strategies for these two categories according to their morphological characteristics. In view of the varied forms and fuzzy edges of non-setae algae, we propose a new multi-scale detection algorithm for algal cell regions based on border-correlation, and further combine this with morphological operations and an improved GrabCut algorithm to segment single-cell and multi-cell objects. In this process, similarity detection is introduced to eliminate the pseudo cellular regions. For Chaetoceros, owing to the weak grayscale information of their setae and the low contrast between the setae and background, we propose a cell extraction method based on a gray surface orientation angle model. This method constructs a gray surface vector model, and executes the gray mapping of the orientation angles. The obtained gray values are then reconstructed and linearly stretched. Finally, appropriate morphological processing is conducted to preserve the orientation information and tiny features of the setae. Experimental results demonstrate that the proposed methods can effectively remove noise and accurately extract both categories of algae cell objects possessing a complete shape, regular contour, and clear edge. Compared with other advanced segmentation techniques, our methods are more robust when considering images with different appearances and achieve more satisfactory segmentation effects.

  13. The application of pattern recognition in the automatic classification of microscopic rock images

    Science.gov (United States)

    Młynarczuk, Mariusz; Górszczyk, Andrzej; Ślipek, Bartłomiej

    2013-10-01

The classification of rocks is an inherent part of modern geology. The manual identification of rock samples is a time-consuming process and, due to the subjective nature of human judgement, burdened with risk. In the study discussed in the present paper, the authors investigated the possibility of automating this process. Nine different rock samples were used. Their digital images were obtained from thin sections with a polarizing microscope. These photographs were subsequently classified automatically by means of four pattern recognition methods: the nearest neighbor algorithm, the K-nearest neighbor algorithm, the nearest mode algorithm, and the method of optimal spherical neighborhoods. The effectiveness of these methods was tested in four different color spaces: RGB, CIELab, YIQ, and HSV. The results of the study show that automatic recognition of the discussed rock types is possible. The study also revealed that, if the CIELab color space and the nearest neighbor classification method are used, the rock samples in question are classified correctly, with a recognition level of 99.8%.
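The classification setup can be sketched with a (k-)nearest-neighbour rule over per-image colour features; with k=1 this reduces to the nearest-neighbour rule the paper found best. The three-band features and class centres below are invented stand-ins for real thin-section statistics:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-image colour features (e.g. mean CIELab values of a
# thin-section photograph) for three hypothetical "rock types".
def samples(center, n=30):
    return rng.normal(loc=center, scale=2.0, size=(n, 3))

train = np.vstack([samples([50, 10, 20]),    # type 0
                   samples([60, -5, 5]),     # type 1
                   samples([40, 0, 30])])    # type 2
labels = np.repeat([0, 1, 2], 30)

def knn_predict(x, k=5):
    """Majority vote among the k nearest training samples (k=1 gives
    the plain nearest-neighbour rule)."""
    d = np.linalg.norm(train - x, axis=1)
    votes = labels[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

query = rng.normal(loc=[60, -5, 5], scale=2.0)   # drawn from type 1
print(knn_predict(query))
```

Because the rule only compares distances, swapping colour spaces (RGB vs. CIELab, etc.) amounts to recomputing the feature vectors, which is exactly the comparison the study performs.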

  14. Optical and digital image processing

    CERN Document Server

    Cristobal, Gabriel; Thienpont, Hugo

    2011-01-01

    In recent years, Moore's law has fostered the steady growth of the field of digital image processing, though the computational complexity remains a problem for most of the digital image processing applications. In parallel, the research domain of optical image processing has matured, potentially bypassing the problems digital approaches were suffering and bringing new applications. The advancement of technology calls for applications and knowledge at the intersection of both areas but there is a clear knowledge gap between the digital signal processing and the optical processing communities. T

  15. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review existing applications of image processing and pattern recognition in industrial engineering. First, we define the role of vision in an industrial setting. Then we survey image processing techniques for feature extraction, object recognition, and industrial robotic guidance. Moreover, examples of implementations of such techniques in industry are presented, including automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  16. Tidal analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data

    Science.gov (United States)

    2017-01-01

ERDC/CHL TR-17-2, Coastal Inlets Research Program: Tidal Analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data, January 2017, Brandan M. Scully, Coastal and...

  17. Trends In Microcomputer Image Processing

    Science.gov (United States)

    Strum, William E.

    1988-05-01

    We have seen, in the last four years, the microcomputer become the platform of choice for many image processing applications. By 1991, Frost and Sullivan forecasts that 75% of all image processing will be carried out on microcomputers. Many factors have contributed to this trend and will be discussed in the following paper.

  18. Review of automatic detection of pig behaviours by using image analysis

    Science.gov (United States)

    Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao

    2017-06-01

Automatic detection of the lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation workload of staff. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes the progress of pig behaviour detection based on image analysis, and advances in image segmentation of the pig body, segmentation of adhering (touching) pigs, and extraction of pig behaviour characteristic parameters. Challenges in achieving automatic detection of pig behaviours are summarized.

  19. Experimental research on showing automatic disappearance pen handwriting based on spectral imaging technology

    Science.gov (United States)

    Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing

    2016-10-01

Purpose: to find an efficient, non-destructive examination method for revealing words that disappear after being written with an automatic-disappearance pen. Method: an imaging spectrometer is used to reveal the latent disappeared words on the paper surface, exploiting the different reflection and absorption properties of various substances in different spectral bands. Results: words that disappeared, whether written with different disappearance pens on the same paper or with the same pen on different papers, can be revealed clearly by the spectral imaging examination method. Conclusion: spectral imaging technology can reveal words that have disappeared after being written with an automatic-disappearance pen.

  20. SWNT Imaging Using Multispectral Image Processing

    Science.gov (United States)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, with OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.

  1. Memory as a function of attention, level of processing, and automatization.

    Science.gov (United States)

    Fisk, A D; Schneider, W

    1984-04-01

    The relationships between long-term memory (LTM) modification, attentional allocation, and type of processing are examined. Automatic/controlled processing theory (Schneider & Shiffrin, 1977) predicts that the nature and amount of controlled processing determine LTM storage and that stimuli can be automatically processed with no lasting LTM effect. Subjects performed the following tasks: (a) intentional learning, (b) semantic categorization, (c) graphic categorization, (d) a distracting digit-search while intentionally learning words, and (e) a distracting digit-search while ignoring words. Frequency judgments were more accurate in the semantic and intentional conditions than in the graphic condition. Frequency judgments in the digit-search conditions were near chance. Experiment 2 extensively trained subjects to develop automatic categorization. Automatic categorization produced no frequency learning and little recognition. These results also disconfirm the Hasher and Zacks (1979) "automatic encoding" proposal regarding the nature of processing.

  2. Non-destructive automatic leaf area measurements by combining stereo and time-of-flight images

    NARCIS (Netherlands)

    Song, Y.; Glasbey, C.A.; Polder, G.; Heijden, van der G.W.A.M.

    2014-01-01

    Leaf area measurements are commonly obtained by destructive and laborious practices. This study shows how stereo and time-of-flight (ToF) images can be combined for non-destructive automatic leaf area measurements. The authors focus on some challenging plant images captured in a greenhouse environment.

  3. An entropy-based approach to automatic image segmentation of satellite images

    CERN Document Server

    Barbieri, A L; Rodrigues, F A; Bruno, O M; Costa, L da F

    2009-01-01

    An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way to automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate changing and general surveillance. Regions representing aquatic, rural and urban areas are identified and the accuracy of the proposed segmentation methodology is evaluated. The comparison with gray level images revealed that the color information is fundamental to obtain an accurate segmentation.
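
    A minimal sketch of the entropy computation behind such an approach (our generic illustration; the authors' segmentation procedure is more elaborate): compute the Shannon entropy of the grey-level histogram inside each block and label high-entropy blocks as textured regions.

```python
import numpy as np

def block_entropy(block, bins=8):
    """Shannon entropy (bits) of the grey-level histogram of one block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def segment(image, block=8, threshold=1.0):
    """Boolean label per block: True where entropy exceeds the threshold."""
    h, w = image.shape
    labels = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            tile = image[i*block:(i+1)*block, j*block:(j+1)*block]
            labels[i, j] = block_entropy(tile) > threshold
    return labels

rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[:, 8:] = rng.integers(0, 256, size=(16, 8))  # noisy, high-entropy half
labels = segment(img, block=8)
```

The uniform left half has zero-entropy blocks, while the noisy right half is flagged, which is the basic signal such a segmenter exploits.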

  4. Automatic multi-resolution image registration based on genetic algorithm and Hausdorff distance

    Institute of Scientific and Technical Information of China (English)

    Famao Ye; Lin Su; Shukai Li

    2006-01-01

    Image registration is a crucial step in all image analysis tasks in which the final information is gained from the combination of various data sources, and it is difficult to perform automatically because of image complexity. An approach to automatic image registration based on a genetic algorithm and the Hausdorff distance is presented. We use a multi-resolution edge tracker to find fine-quality edges, and use the Hausdorff distance between the input image and the reference image as the similarity measure. We use wavelet decomposition and a genetic algorithm, which combines local search methods with global ones to balance exploration and exploitation, to speed up the search for the best transformation parameters. Experimental results show that the proposed approach is a promising method for image registration.
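
    The Hausdorff-distance similarity measure can be sketched as follows (an illustrative numpy implementation of the standard definition, not the authors' code):

```python
import numpy as np

def directed_hausdorff(A, B):
    """For every point in A, distance to its nearest neighbour in B;
    return the worst (largest) of these."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two edge-point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0]])
```

In a registration loop, a candidate transformation is scored by the Hausdorff distance between the transformed input edges and the reference edges; the genetic algorithm searches for the parameters minimising that score.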

  5. UV image processing to detect diffuse clouds

    Science.gov (United States)

    Armengot, M.; Gómez de Castro, A. I.; López-Santiago, J.; Sánchez-Doreste, N.

    2015-05-01

    Diffuse clouds along the Galaxy are of interest because they are related to star formation, and their physical properties are not well understood. The signal received from most of these structures in UV images is minimal compared to that from point sources, and the noise in these images makes the analysis hard because the signal-to-noise ratio is much lower in these areas. However, digital processing of the images shows that it is possible to enhance and isolate these clouds. Typically, this kind of treatment is done ad hoc for specific research areas, and the astrophysicist's work depends on the computer tools and their ability to enhance a particular area based on prior knowledge. The goal of our work is to automate this step and so ease the study of these structures in UV images. In particular, we have used images from the GALEX survey with the aim of learning to detect such clouds automatically, enabling unsupervised detection and graphic enhancement to catalogue them. Our experiments show that there is enough evidence in the UV images to allow systematic computation, and open the possibility of generalizing the algorithm to find these structures in regions of the sky where they have not been recorded yet.

  6. A methodology for the semi-automatic digital image analysis of fragmental impactites

    Science.gov (United States)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the freeware software ImageJ developed by the National Institute of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated and modal abundances can be deduced, where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine the variations of the physical characteristics of some examples of fragmental impactites.
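
    One of the listed descriptors, the box-counting dimension, can be sketched as follows (a generic illustration, not the ImageJ workflow used in the paper): count occupied boxes at several scales and fit a line to log(count) versus log(size).

```python
import numpy as np

def box_count(mask, size):
    """Number of size x size boxes that contain at least one True pixel."""
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i+size, j:j+size].any():
                count += 1
    return count

mask = np.zeros((64, 64), dtype=bool)
mask[32, :] = True                      # a straight line: dimension ~ 1
sizes = [1, 2, 4, 8, 16]
counts = [box_count(mask, s) for s in sizes]
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
dimension = -slope
```

For a fragment outline, the same count-vs-scale slope quantifies shape complexity.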

  7. Comparison of algorithms for automatic border detection of melanoma in dermoscopy images

    Science.gov (United States)

    Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert

    2016-09-01

    Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for test images was 0.10 using the new algorithm and 0.99 using the SRM method. The lower average XOR error implies that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
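
    The XOR error used to score a detection result can be sketched as the symmetric difference between the automatic and manual masks, normalised by the manual mask area (one common definition; the paper's exact normalisation may differ):

```python
import numpy as np

def xor_error(auto_mask, manual_mask):
    """Fraction of the manual lesion area on which the two masks disagree."""
    disagreement = np.logical_xor(auto_mask, manual_mask).sum()
    return disagreement / manual_mask.sum()

manual = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True          # 36-pixel ground-truth lesion
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:6] = True            # detector misses the right-hand strip
err = xor_error(auto, manual)    # 12 missed pixels out of 36
```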

  8. Automatic recognition of abnormal cells in cytological tests using multispectral imaging

    Science.gov (United States)

    Gertych, A.; Galliano, G.; Bose, S.; Farkas, D. L.

    2010-03-01

    Cervical cancer is the leading cause of gynecologic disease-related death worldwide, but is almost completely preventable with regular screening, for which cytological testing is a method of choice. Although such testing has radically lowered the death rate from cervical cancer, it is plagued by low sensitivity and inter-observer variability. Moreover, its effectiveness is still restricted because the recognition of shape and morphology of nuclei is compromised by overlapping and clumped cells. Multispectral imaging can aid enhanced morphological characterization of cytological specimens. Features including spectral intensity and texture, reflecting relevant morphological differences between normal and abnormal cells, can be derived from cytopathology images and utilized in a detection/classification scheme. Our automated processing of multispectral image cubes yields nuclear objects which are subjected to classification facilitated by a library of spectral signatures obtained from normal and abnormal cells, as marked by experts. Clumps are processed separately with reduced set of signatures. Implementation of this method yields high rate of successful detection and classification of nuclei into predefined malignant and premalignant types and correlates well with those obtained by an expert. Our multispectral approach may have an impact on the diagnostic workflow of cytological tests. Abnormal cells can be automatically highlighted and quantified, thus objectivity and performance of the reading can be improved in a way which is currently unavailable in clinical setting.

  9. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    Directory of Open Access Journals (Sweden)

    Yaoshan Xu

    Full Text Available This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  10. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    Science.gov (United States)

    Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  11. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    Science.gov (United States)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set, and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
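
    The coarse RBF morphing step can be sketched as follows (a simplified Gaussian-RBF illustration under assumed matched feature points, not the authors' implementation): interpolate the displacements of a few matched feature points with an RBF, then apply the resulting warp to every template vertex.

```python
import numpy as np

def rbf_morph(src_pts, dst_pts, vertices, eps=1.0):
    """Warp `vertices` by a Gaussian RBF that maps src_pts onto dst_pts."""
    # pairwise kernel between source feature points
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=2)
    K = np.exp(-(eps * d) ** 2)
    weights = np.linalg.solve(K, dst_pts - src_pts)  # one weight row per point
    # kernel between every vertex and every source feature point
    dv = np.linalg.norm(vertices[:, None, :] - src_pts[None, :, :], axis=2)
    return vertices + np.exp(-(eps * dv) ** 2) @ weights

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.1, 0.0], [0.1, 0.0]])  # uniform shift
moved = rbf_morph(src, dst, src)
```

Because the RBF interpolates exactly at its nodes, the feature points land on their targets while the rest of the mesh deforms smoothly between them.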

  12. Automatic Detection and Evaluation of Solar Cell Micro-Cracks in Electroluminescence Images Using Matched Filters

    Energy Technology Data Exchange (ETDEWEB)

    Spataru, Sergiu; Hacke, Peter; Sera, Dezso

    2016-11-21

    A method for detecting micro-cracks in solar cells using two-dimensional matched filters was developed, derived from the electroluminescence intensity profile of typical micro-cracks. We describe the image processing steps to obtain a binary map with the locations of the micro-cracks. Finally, we show how to automatically estimate the total length of each micro-crack from these maps, and propose a method to identify severe types of micro-cracks, such as parallel, dendritic, and cracks with multiple orientations. With an optimized threshold parameter, the technique detects over 90% of cracks larger than 3 cm in length. The method shows great potential for quantifying micro-crack damage after manufacturing or module transportation, and for the determination of a module quality criterion for cell cracking in photovoltaic modules.
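
    The matched-filtering idea can be sketched as correlating the image with a small zero-mean template shaped like a dark, line-like crack profile and thresholding the response (an illustrative toy version, not the filter derived in the paper):

```python
import numpy as np

# Zero-mean template for a dark vertical 1-pixel line on a bright background
template = np.full((3, 3), 1.0)
template[:, 1] = -2.0   # column sums: +3, -6, +3 -> total 0 (zero mean)

def correlate(image, kernel):
    """Valid-mode 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

img = np.ones((7, 7))
img[:, 3] = 0.0                              # a dark vertical "crack"
response = correlate(img, template)          # peaks where the profile matches
crack_map = response > response.max() * 0.5  # binary crack map
```

The real method bank would rotate such templates to catch cracks at multiple orientations before measuring their lengths.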

  13. An adaptive spatial clustering method for automatic brain MR image segmentation

    Institute of Scientific and Technical Information of China (English)

    Jingdan Zhang; Daoqing Dai

    2009-01-01

    In this paper, an adaptive spatial clustering method is presented for automatic brain MR image segmentation, based on a competitive learning algorithm, the self-organizing map (SOM). We use a pattern recognition approach in terms of feature generation and classifier design. Firstly, a multi-dimensional feature vector is constructed using local spatial information. Then, an adaptive spatial growing hierarchical SOM (ASGHSOM) is proposed as the classifier; it extends the SOM by fusing multi-scale segmentation with the competitive learning clustering algorithm to overcome the problem of overlapping grey-scale intensities in boundary regions. Furthermore, an adaptive spatial distance is integrated with ASGHSOM, in which local spatial information is considered in the clustering process to reduce the noise effect and the classification ambiguity. Our proposed method is validated by extensive experiments using both simulated and real MR data with varying noise levels, and is compared with state-of-the-art algorithms.

  14. Quantitative Study on Nonmetallic Inclusion Particles in Steels by Automatic Image Analysis With Extreme Values Method

    Institute of Scientific and Technical Information of China (English)

    Cássio Barbosa; José Brant de Campos; J(ǒ)neo Lopes do Nascimento; Iêda Maria Vieira Caminha

    2009-01-01

    Nonmetallic inclusion particles formed during the steelmaking process are harmful to the properties of steels; the harm depends mainly on the size, volume fraction, shape, and distribution of these particles. Automatic image analysis is one of the most important tools for the quantitative determination of these parameters. The classical Student approach and the Extreme Values Method (EVM) were used to determine inclusion size and shape and to evaluate the distance between inclusion particles. The results indicated significant differences in the characteristics of the inclusion particles in the analyzed products. The two methods produced somewhat different results, indicating that EVM could be used as a faster and more reliable statistical methodology.

  15. Multi-Objective Genetic Programming with Redundancy-Regulations for Automatic Construction of Image Feature Extractors

    Science.gov (United States)

    Watchareeruetai, Ukrit; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Kudo, Hiroaki; Ohnishi, Noboru

    We propose a new multi-objective genetic programming (MOGP) for automatic construction of image feature extraction programs (FEPs). The proposed method was originated from a well known multi-objective evolutionary algorithm (MOEA), i.e., NSGA-II. The key differences are that redundancy-regulation mechanisms are applied in three main processes of the MOGP, i.e., population truncation, sampling, and offspring generation, to improve population diversity as well as convergence rate. Experimental results indicate that the proposed MOGP-based FEP construction system outperforms the two conventional MOEAs (i.e., NSGA-II and SPEA2) for a test problem. Moreover, we compared the programs constructed by the proposed MOGP with four human-designed object recognition programs. The results show that the constructed programs are better than two human-designed methods and are comparable with the other two human-designed methods for the test problem.

  16. Automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, Marcel; Spreeuwers, Luuk; Quist, Marcel

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium.

  17. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    Science.gov (United States)

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments and alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute), consisting of low-quality DEMs of various types.

  18. Segmentation of multi-isotope imaging mass spectrometry data for semi-automatic detection of regions of interest.

    Directory of Open Access Journals (Sweden)

    Philipp Gormanns

    Full Text Available Multi-isotope imaging mass spectrometry (MIMS) associates secondary ion mass spectrometry (SIMS) with detection of several atomic masses, the use of stable isotopes as labels, and affiliated quantitative image-analysis software. By associating image and measure, MIMS allows one to obtain quantitative information about biological processes in sub-cellular domains. MIMS can be applied to a wide range of biomedical problems, in particular metabolism and cell fate [1], [2], [3]. In order to obtain morphologically pertinent data from MIMS images, we have to define regions of interest (ROIs). ROIs are drawn by hand, a tedious and time-consuming process. We have developed and successfully applied a support vector machine (SVM) for segmentation of MIMS images that allows fast, semi-automatic boundary detection of regions of interest. Using the SVM, high-quality ROIs (as compared to an expert's manual delineation) were obtained for 2 types of images derived from unrelated data sets. This automation simplifies, accelerates and improves the post-processing analysis of MIMS images. This approach has been integrated into "Open MIMS," an ImageJ-plugin for comprehensive analysis of MIMS images that is available online at http://www.nrims.hms.harvard.edu/NRIMS_ImageJ.php.

  19. Segmentation of Multi-Isotope Imaging Mass Spectrometry Data for Semi-Automatic Detection of Regions of Interest

    Science.gov (United States)

    Poczatek, J. Collin; Turck, Christoph W.; Lechene, Claude

    2012-01-01

    Multi-isotope imaging mass spectrometry (MIMS) associates secondary ion mass spectrometry (SIMS) with detection of several atomic masses, the use of stable isotopes as labels, and affiliated quantitative image-analysis software. By associating image and measure, MIMS allows one to obtain quantitative information about biological processes in sub-cellular domains. MIMS can be applied to a wide range of biomedical problems, in particular metabolism and cell fate [1], [2], [3]. In order to obtain morphologically pertinent data from MIMS images, we have to define regions of interest (ROIs). ROIs are drawn by hand, a tedious and time-consuming process. We have developed and successfully applied a support vector machine (SVM) for segmentation of MIMS images that allows fast, semi-automatic boundary detection of regions of interest. Using the SVM, high-quality ROIs (as compared to an expert's manual delineation) were obtained for 2 types of images derived from unrelated data sets. This automation simplifies, accelerates and improves the post-processing analysis of MIMS images. This approach has been integrated into “Open MIMS,” an ImageJ-plugin for comprehensive analysis of MIMS images that is available online at http://www.nrims.hms.harvard.edu/NRIMS_ImageJ.php. PMID:22347386

  20. Are Automatic Imitation and Spatial Compatibility Mediated by Different Processes?

    Science.gov (United States)

    Cooper, Richard P.; Catmur, Caroline; Heyes, Cecilia

    2013-01-01

    Automatic imitation or "imitative compatibility" is thought to be mediated by the mirror neuron system and to be a laboratory model of the motor mimicry that occurs spontaneously in naturalistic social interaction. Imitative compatibility and spatial compatibility effects are known to depend on different stimulus dimensions--body…

  1. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING. Signals and Biomedical Signal Processing: Introduction and Overview; What Is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  2. Digital geometry in image processing

    CERN Document Server

    Mukhopadhyay, Jayanta

    2013-01-01

    Exploring theories and applications developed during the last 30 years, Digital Geometry in Image Processing presents a mathematical treatment of the properties of digital metric spaces and their relevance in analyzing shapes in two and three dimensions. Unlike similar books, this one connects the two areas of image processing and digital geometry, highlighting important results of digital geometry that are currently used in image analysis and processing. The book discusses different digital geometries in multi-dimensional integral coordinate spaces. It also describes interesting properties of

  3. Image processing for optical mapping.

    Science.gov (United States)

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  4. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  5. Comparative analysis of different implementations of a parallel algorithm for automatic target detection and classification of hyperspectral images

    Science.gov (United States)

    Paz, Abel; Plaza, Antonio; Plaza, Javier

    2009-08-01

    Automatic target detection in hyperspectral images is a task that has attracted a lot of attention recently. In the last few years, several algorithms have been developed for this purpose, including the well-known RX algorithm for anomaly detection, or the automatic target detection and classification algorithm (ATDCA), which uses an orthogonal subspace projection (OSP) approach to extract a set of spectrally distinct targets automatically from the input hyperspectral data. Depending on the complexity and dimensionality of the analyzed image scene, the target/anomaly detection process may be computationally very expensive, a fact that limits the possibility of utilizing this process in time-critical applications. In this paper, we develop computationally efficient parallel versions of both the RX and ATDCA algorithms for near real-time exploitation. In the case of ATDCA, we use several distance metrics in addition to the OSP approach. The parallel versions are quantitatively compared in terms of target detection accuracy, using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center in New York, five days after the terrorist attack of September 11th, 2001, and also in terms of parallel performance, using a massive Beowulf cluster available at NASA's Goddard Space Flight Center in Maryland.
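
    The OSP step at the heart of ATDCA can be sketched as follows (a simplified serial illustration on toy data, not the parallel implementations compared in the paper): project every pixel spectrum onto the subspace orthogonal to the targets found so far, and take the pixel with the largest residual as the next target.

```python
import numpy as np

def osp_projector(U):
    """I - U (U^T U)^-1 U^T for a matrix U whose columns are target spectra."""
    return np.eye(U.shape[0]) - U @ np.linalg.inv(U.T @ U) @ U.T

# toy data: 3-band pixel spectra for two pure materials and a mixture
pixels = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0],
])
# first target: the brightest pixel
first = pixels[np.argmax(np.linalg.norm(pixels, axis=1))]
P = osp_projector(first[:, None])
# residual energy of each pixel after projecting out the first target
residuals = np.linalg.norm(pixels @ P.T, axis=1)
next_target = pixels[np.argmax(residuals)]
```

Iterating this projection-and-argmax loop yields the set of spectrally distinct targets; the per-pixel projections are what the parallel versions distribute across cluster nodes.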

  6. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  8. Acoustic image-processing software

    Science.gov (United States)

    Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of a program in acoustic image analysis funded by the Office of Naval Research. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. (See image.) In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on PostScript laser printers and lower-quality images on non-PostScript printers with HP LaserJet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
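
    Histogram manipulation of the kind PORTAL offers can be sketched as plain histogram equalization (a generic illustration, not PORTAL's code): remap grey levels through the normalised cumulative histogram so the output uses the full intensity range.

```python
import numpy as np

def equalize(image, levels=256):
    """Histogram-equalize an 8-bit image."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / image.size          # normalised cumulative histogram
    return np.floor(cdf[image] * (levels - 1)).astype(np.uint8)

# low-contrast image: grey levels confined to 0..63
img = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 2))
out = equalize(img)                           # stretched toward 0..255
```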

  9. Automatic segmentation of HeLa cell images

    CERN Document Server

    Urban, Jan

    2011-01-01

    In this work, methods for segmenting cells from their background and from each other in digital images were tested, combined, and improved. A large set of images containing young, adult, and mixed cells was used to assess the quality of the described algorithms. Proper segmentation is one of the main tasks of image analysis, and the order of the processing steps differs from work to work, depending on the input images. The processing chain developed here to answer the biological question at hand includes filtering, detail enhancement, segmentation, and sphericity computation. The ordering of the algorithms and the way they were selected are also described. Open questions and ideas for further work are mentioned in the conclusion.
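
    The abstract does not give the exact thresholding and sphericity formulas; as a hedged sketch, cell-background separation by Otsu thresholding and a 2D circularity measure (4πA/P², the planar analogue of sphericity) could be written in NumPy as follows (all names are illustrative):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the intensity threshold that maximizes the between-class
    variance of the two resulting pixel populations (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability mass
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def circularity(area, perimeter):
    """2D analogue of sphericity: 1.0 for a perfect disk, lower otherwise."""
    return 4.0 * np.pi * area / perimeter ** 2

# Toy bimodal image: dark background (40) with one bright square "cell" (200)
img = np.full((100, 100), 40, dtype=np.uint8)
img[30:60, 30:60] = 200
t = otsu_threshold(img)
mask = img > t
```

    On real micrographs the threshold step would follow the filtering and detail-enhancement stages described above.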

  10. Automatic dynamic mask extraction for PIV images containing an unsteady interface, bubbles, and a moving structure

    Science.gov (United States)

    Dussol, David; Druault, Philippe; Mallat, Bachar; Delacroix, Sylvain; Germain, Grégory

    2016-07-01

    When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed on oceanographic research vessels. In such a configuration, the presence of an unsteady free surface, a solid-liquid interface, and bubbles in the PIV frame generates numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. The unsteady free surface is determined by a four-step process: i) edge detection on the gradient magnitude of the PIV frame, ii) extraction of the particles by filtering out high-intensity large areas related to the bubbles and/or hull reflections, iii) extraction of the rough region containing these particles and their reflections, and iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated by Fourier analysis and by visual inspection of selected PIV images containing numerous spurious high-intensity areas. This paper demonstrates how this data analysis process leads to a reflection-free PIV image database and an automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
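
    The final step, fitting the detected free-surface points with a fifth-order polynomial, can be sketched with NumPy's least-squares polynomial fit (the point set below is synthetic, standing in for the points left after the reflection-removal steps):

```python
import numpy as np

# Synthetic free-surface samples: interface height at each image column
x = np.linspace(0.0, 1.0, 50)
true_coeffs = np.array([2.0, -5.0, 3.0, 0.5, -1.0, 0.2])  # degree 5
y = np.polyval(true_coeffs, x)

# Fifth-order least-squares fit, then evaluation on a dense grid
fit = np.polyfit(x, y, deg=5)
surface = np.polyval(fit, np.linspace(0.0, 1.0, 500))
```

    The dense evaluation gives a smooth interface curve that can be rasterized into the mask for each PIV frame.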

  11. Automatic real and apparent age estimation in still images

    OpenAIRE

    Pardo Garcia, Pablo

    2015-01-01

    We performed a study on age estimation from still images, creating a new face image database annotated with both real age and apparent age labels. Two age estimation methods based on state-of-the-art techniques are proposed, and their performance is analysed on the proposed database.

  12. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    An estimate of the thickness of subcutaneous adipose tissue at differing positions around the body was required in a study examining body composition. To eliminate human error associated with the manual placement of markers for measurements and to facilitate the collection of data from a large...... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...

  13. Color Image Processing and Object Tracking System

    Science.gov (United States)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  14. Automatic Segmentation and Inpainting of Specular Highlights for Endoscopic Imaging

    Directory of Open Access Journals (Sweden)

    Arnold Mirko

    2010-01-01

    Full Text Available Minimally invasive medical procedures have become increasingly common in today's healthcare practice. Images taken during such procedures largely show tissues of human organs, such as the mucosa of the gastrointestinal tract. These surfaces usually have a glossy appearance showing specular highlights. For many visual analysis algorithms, these distinct and bright visual features can become a significant source of error. In this article, we propose two methods to address this problem: (a) a segmentation method based on nonlinear filtering and colour image thresholding and (b) an efficient inpainting method. The inpainting algorithm eliminates the negative effect of specular highlights on other image analysis algorithms and also gives a visually pleasing result. The methods compare favourably to the existing approaches reported for endoscopic imaging. Furthermore, in contrast to the existing approaches, the proposed segmentation method is applicable to the widely used sequential RGB image acquisition systems.
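
    As a rough stand-in for the two methods, not the authors' actual algorithms, specular pixels can be flagged by intensity thresholding and then filled with a crude diffusion-style inpainting, sketched here in NumPy:

```python
import numpy as np

def highlight_mask(gray, thresh=240.0):
    """Flag near-saturated pixels as specular highlights."""
    return gray >= thresh

def inpaint_mean(img, mask, iters=50):
    """Crude diffusion-style inpainting: repeatedly replace masked pixels
    with the mean of their 4-neighbours until the values settle."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # rough initial fill
    for _ in range(iters):
        pad = np.pad(out, 1, mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                 pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        out[mask] = neigh[mask]
    return out

gray = np.full((32, 32), 120.0)
gray[10:14, 10:14] = 255.0                 # a 4x4 specular blob
m = highlight_mask(gray)
restored = inpaint_mean(gray, m)
```

    The paper's colour thresholding and inpainting are more sophisticated; the point here is only the two-stage detect-then-fill structure.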

  15. Semi-automatic elastic registration on thyroid gland ultrasonic image

    Science.gov (United States)

    Xu, Xia; Zhong, Yue; Luo, Yan; Li, Deyu; Lin, Jiangli; Wang, Tianfu

    2007-12-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. However, the shape of the thyroid gland is irregular and its volume is difficult to calculate. For precise estimation of thyroid volume by ultrasound imaging, this paper presents a novel semiautomatic minutiae matching method for thyroid gland ultrasonic images by means of a thin-plate spline model. Registration consists of four basic steps: feature detection, feature matching, mapping function design, and image transformation and resampling. Owing to the connectivity of the thyroid gland boundary, we choose an active contour model as the feature detector and radial lines from central points for feature matching. The proposed approach has been applied to thyroid gland ultrasound image registration. Registration results on thyroid gland ultrasound images of 18 healthy adults show that this method consumes less time and effort and is more objective than algorithms in which landmarks are selected manually.

  16. Automatic Detection of Changes on Mars Surface from High-Resolution Orbital Images

    Science.gov (United States)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2017-04-01

    Over the last 40 years Mars has been extensively mapped by several NASA and ESA orbital missions, generating a large image dataset comprised of approximately 500,000 high-resolution images (of science can be employed for training and verification it is unsuitable for planetwide systematic change detection. In this work, we introduce a novel approach in planetary image change detection, which involves a batch-mode automatic change detection pipeline that identifies regions that have changed. This is tested in anger, on tens of thousands of high-resolution images over the MC11 quadrangle [5], acquired by CTX, HRSC, THEMIS-VIS and MOC-NA instruments [1]. We will present results which indicate a substantial level of activity in this region of Mars, including instances of dynamic natural phenomena that haven't been cataloged in the planetary science literature before. We will demonstrate the potential and usefulness of such an automatic approach in planetary science change detection. Acknowledgments: The research leading to these results has received funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] P. Sidiropoulos and J. - P. Muller (2015) On the status of orbital high-resolution repeat imaging of Mars for the observation of dynamic surface processes. Planetary and Space Science, 117: 207-222. [2] O. Aharonson, et al. (2003) Slope streak formation and dust deposition rates on Mars. Journal of Geophysical Research: Planets, 108(E12):5138 [3] A. McEwen, et al. (2011) Seasonal flows on warm martian slopes. Science, 333 (6043): 740-743. [4] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on mars from new impact craters. Science, 325(5948):1674-1676. [5] K. Gwinner, et al (2016) The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its

  17. Conditioning reaction time: evidence for a process of conditioned automatization.

    Science.gov (United States)

    Montare, A

    1992-12-01

    The classical conditioning of the standard, simple reaction time (RT) in 140 college men and women is described. Consequent to an anticipatory instructed conditioning procedure, two experimental and two control groups acquired voluntary, controlled US(light)-URTR (unconditioned reaction-time response) associations which then served as the foundation for subsequent classical conditioning when a novel CS (auditory click) was simultaneously paired with the US. Conditioned reaction-time responses (CRTRs) occurred significantly more often during test trials in the two experimental groups than in the two control groups. Statistical and introspective findings support the notion that observed CRTRs may be products of cognitively unconscious conditioned automatization whereby the conditioning of relatively slow, voluntary, and controlled US-URTR associations leads to the acquisition of relatively fast, involuntary, and automatic CS-CRTR associations.

  18. Effective System for Automatic Bundle Block Adjustment and Ortho Image Generation from Multi Sensor Satellite Imagery

    Science.gov (United States)

    Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Block adjustment is a technique for large-area mapping using images obtained from different remote sensing satellites. The challenge in this process is to handle, at the system level, a huge number of satellite images from different sources with different resolutions and accuracies. This paper explains a system with various tools and techniques to effectively handle the end-to-end chain in large-area mapping and production with a good level of automation, along with provisions for intuitive analysis of final results in 3D and 2D environments. In addition, the interface for using open-source ortho and DEM references, viz. ETM, SRTM, etc., and for displaying ESRI shapes for the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks like Georeferencing, Geo-capturing and Geo-Modelling tools included in the block adjustment solution are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized in many stages to fully exploit the hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from various stages of the system are presented in the paper.

  19. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Y [National Cheng Kung University, Tainan, Taiwan (China); Huang, H [Chang Gung University, Taoyuan, Taiwan (China); Su, T [Chang Gung Memorial Hospital, Taoyuan, Taiwan (China)

    2015-06-15

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic in imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of these methods. Methods: Myocardial perfusion images from Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity, and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%.
    Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination
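
    The ROC area under the curve used above to compare methods can be computed directly from scores and labels via the rank-sum identity (the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one); a minimal NumPy sketch with synthetic data:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive case outscores a randomly chosen negative case."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic heterogeneity indices: stenosis cases tend to score higher
labels = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
auc = roc_auc(scores, labels)
```

    With 8 of the 9 positive/negative pairs correctly ordered, this toy example gives an AUC of 8/9.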

  20. An Automatic Development Process for Integrated Modular Avionics Software

    OpenAIRE

    2013-01-01

    With the ever-growing avionics functions, the modern avionics architecture is evolving from traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard to support partitioning concept introduced in IMA to achieve security isolation between avionics functions with different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic deve...

  1. Automatic and effortful memory processes in depressed persons.

    Science.gov (United States)

    Rohling, M L; Scogin, F

    1993-03-01

    Clinical lore has held that depression results in memory dysfunction, particularly in older adults. Some believe that memory loss due to depression is indistinguishable from an organic dementia and label such dysfunction pseudodementia. The previous literature has inconclusively supported the relation between depression and memory deficits. This research assessed three groups of subjects: (a) 30 depressed patients, (b) 20 psychiatric controls, and (c) 30 normal controls. The dependent memory tasks were designed to vary along the automatic-effortful memory encoding continuum defined by Hasher and Zacks (1979). Two tasks were designed to be effortful (free recall and paired associates) and two were designed to be automatic (memory for frequency and location). Contrary to predictions, depression was not related to memory deficits. However, post-hoc analyses indicated that psychiatric hospitalization and psychotropic medication had a greater negative impact on memory than did depression. As predicted, age was associated with effortful encoding deficits but with only minimal deficits on the automatic tasks. There was no evidence of an interaction between depression and age that would be consistent with the descriptive label of pseudodementia.

  2. A workflow for the automatic segmentation of organelles in electron microscopy image stacks

    Directory of Open Access Journals (Sweden)

    Alex Joseph Perez

    2014-11-01

    Full Text Available Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson’s and Alzheimer’s diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime.

  3. Toward automatic phenotyping of retinal images from genetically determined mono- and dizygotic twins using amplitude modulation-frequency modulation methods

    Science.gov (United States)

    Soliz, P.; Davis, B.; Murray, V.; Pattichis, M.; Barriga, S.; Russell, S.

    2010-03-01

    This paper presents an image processing technique for automatically categorizing age-related macular degeneration (AMD) phenotypes from retinal images. Ultimately, an automated approach will be much more precise and consistent in the phenotyping of retinal diseases such as AMD. We have applied the automated phenotyping to retinal images from a cohort of mono- and dizygotic twins. The application of this technology will allow one to perform more quantitative studies that will lead to a better understanding of the genetic and environmental factors associated with diseases such as AMD. A method for classifying retinal images based on features derived from the application of amplitude-modulation frequency-modulation (AM-FM) methods is presented. Retinal images from identical and fraternal twins who presented with AMD were processed to determine whether AM-FM could be used to differentiate between the two types of twins. Results of the automatic classifier agreed with the findings of other researchers in explaining the variation of the disease between the related twins. AM-FM features classified 72% of the twins correctly. Visual grading found that genetics could explain between 46% and 71% of the variance.

  4. A PCA Based Automatic Image Categorization Approach Using Dominant Color Features

    Institute of Scientific and Technical Information of China (English)

    WU Chunming; QIAN Hui; WANG Donghui

    2005-01-01

    Automatic image categorization is a universal problem in the area of Content-based image retrieval (CBIR). The goal of automatic image categorization is to find a mapping between images and a set of predefined image categories. The difficulty of this problem lies in how to describe image content and incorporate low-level features into semantic categories. As a solution, we propose a Principal component analysis (PCA) based approach. This approach assumes that images in the same semantic category have a similar spatial distribution of color components, and treats the images in a category as linear combinations of a fixed set of dominant color blocks with specific textural information. A three-step algorithm is designed: (1) extracting the Dominant colors (DC) of images, which describe the major color information in an image; (2) establishing a feature space based on DC blocks and their textural information; (3) using PCA to reduce the dimensionality of the feature space and using the basis vectors to categorize images. An experimental database containing nine categories (cars, flowers, houses, portraits, fish, bark, sunshine, leaves and frescoes) was constructed to test the algorithm. The results show that this approach is effective and offers a reasonable compromise between accuracy and speed in practice.
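
    Step (3), reducing the dominant-color feature space with PCA, can be sketched with an SVD in NumPy (the feature dimensions below are invented for illustration, not taken from the paper):

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-vector features onto the top-k principal components.
    Returns the projected data and the basis used to categorize new images."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by singular value
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]
    return Xc @ basis.T, basis

# Toy stand-in for dominant-color block features: 100 images x 48 dimensions
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 48)) * np.linspace(3.0, 0.1, 48)
Z, basis = pca_reduce(X, k=8)
```

    New images would be centered with the same mean and projected onto `basis` before nearest-category matching.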

  5. Fast and Automatic Ultrasound Simulation from CT Images

    OpenAIRE

    Weijian Cong; Jian Yang; Yue Liu; Yongtian Wang

    2013-01-01

    Ultrasound is currently widely used in clinical diagnosis because of its fast and safe imaging principles. However, the anatomical structures present in an ultrasound image are not as clear as in CT or MRI, so physicians usually need advanced clinical knowledge and experience to distinguish diseased tissues. Fast simulation of ultrasound provides a cost-effective way to train physicians and to correlate ultrasound with the underlying anatomic structures. In this paper, a novel method is proposed for fast simulation of...

  6. Automatic Tracing and Segmentation of Rat Mammary Fat Pads in MRI Image Sequences Based on Cartoon-Texture Model

    Institute of Scientific and Technical Information of China (English)

    TU Shengxian; ZHANG Su; CHEN Yazhu; Freedman Matthew T; WANG Bin; XUAN Jason; WANG Yue

    2009-01-01

    The growth patterns of mammary fat pads and the glandular tissues inside the fat pads may be related to risk factors for breast cancer. Quantitative measurements of this relationship become available after segmentation of the mammary pads and glandular tissues. Rat fat pads may lose continuity along image sequences or adjoin areas of similar intensity such as the epidermis and subcutaneous regions. A new approach for automatic tracing and segmentation of fat pads in magnetic resonance imaging (MRI) image sequences is presented, which does not require that the number of pads be constant or that the spatial locations of pads be adjacent among image slices. First, each image is decomposed into a cartoon image and a texture image based on the cartoon-texture model; these are used as the smooth image for segmentation and the feature image for targeting pad seeds, respectively. Then, two-phase direct energy segmentation based on the Chan-Vese active contour model is applied to partition the cartoon image into a set of regions, from which the pad boundary is traced iteratively from the pad seed. A tracing algorithm based on scanning order is proposed to accurately trace the pad boundary, which effectively removes the epidermis attached to the pad without any post-processing and also solves the problem of over-segmentation of some small holes inside the pad. The experimental results demonstrate the utility of this approach in the accurate delineation of varying numbers of mammary pads from several sets of MRI images.

  7. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    Science.gov (United States)

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on the time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part of the diagnosis and treatment of sleep disorders. A time-frequency representation (TFR) based on the smoothed pseudo Wigner-Ville distribution (SPWVD) of the EEG signal is used to obtain the TFI. The TFI is segmented according to the frequency bands of the EEG rhythms. Features derived from the histograms of the segmented TFI are used as the input feature set to multiclass least squares support vector machines (MC-LS-SVM) with radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernels for automatic classification of sleep stages from EEG signals. Experimental results are presented to show the effectiveness of the proposed method.
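
    As a hedged sketch of the feature-extraction step, one can build a spectrogram-style time-frequency image with a short-time Fourier transform (a simple stand-in for the SPWVD used in the paper), segment it by the classic EEG rhythm bands, and take an intensity histogram per band; all parameter values below are illustrative:

```python
import numpy as np

def tfi_band_histograms(sig, fs, win=128, hop=64, n_bins=8):
    """Short-time FFT magnitude image (a stand-in for the SPWVD TFI),
    segmented by EEG rhythm bands; one intensity histogram per band."""
    frames = np.lib.stride_tricks.sliding_window_view(sig, win)[::hop]
    tfi = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)).T  # freq x time
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]   # delta, theta, alpha, beta
    feats = []
    for lo, hi in bands:
        seg = tfi[(freqs >= lo) & (freqs < hi)]
        hist, _ = np.histogram(seg, bins=n_bins, density=True)
        feats.extend(hist)
    return np.array(feats)

fs = 128
t = np.arange(fs * 4) / fs
sig = np.sin(2 * np.pi * 10 * t)            # a 10 Hz alpha-like tone
f = tfi_band_histograms(sig, fs)            # 4 bands x 8 bins = 32 features
```

    Such a fixed-length feature vector is what would be fed to the multiclass classifier.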

  8. Automatic Tongue Tracking in X-Ray Images

    Institute of Scientific and Technical Information of China (English)

    LUO Changwei; LI Rui; YU Lingyun; YU Jun; WANG Zengfu

    2015-01-01

    X-ray imaging is an effective technique to obtain the continuous motions of the vocal tract during speech, and the Active appearance model (AAM) is a useful tool to analyze X-ray images. However, for the task of tongue tracking in X-ray images, the accuracy of AAM fitting is insufficient. AAM aims to minimize the residual error between the model appearance and the input image. It often fails to accurately converge to the true landmarks. To improve the tracking accuracy, we propose a fitting method that combines the Constrained local model (CLM) with AAM. In our method, we first combine the objective functions of AAM and CLM into a single objective function. Then, we project out the texture variation and derive a gradient based method to optimize the objective function. Our method effectively incorporates not only the shape prior and global texture, but also the local texture around each landmark. Experiments demonstrate that the proposed method significantly reduces the fitting error. We also show that realistic 3D tongue animation can be created by using the tongue tracking results of the X-ray images.

  9. Fuzzy image processing in sun sensor

    Science.gov (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that the fuzzy approach yields better accuracy than conventional image processing.

  10. Automatic Fusion of Hyperspectral Images and Laser Scans Using Feature Points

    Directory of Open Access Journals (Sweden)

    Xiao Zhang

    2015-01-01

    Full Text Available Automatic fusion of different kinds of image datasets is intractable when their imaging principles are diverse. This paper presents a novel method for the automatic fusion of two different image types: 2D hyperspectral images acquired with a hyperspectral camera and 3D laser scans obtained with a laser scanner, without any other sensor. Only a few corresponding feature points are used, which are automatically extracted from a scene viewed by the two sensors. The feature point extraction method relies on the SURF algorithm and a camera model, which converts a 3D laser scan into a 2D laser image whose pixel intensities are defined by attributes in the laser scan. Moreover, the Collinearity Equation and the Direct Linear Transformation are used to create the initial correspondence between the two images. An adjustment step then computes corrections to eliminate errors. The experimental results show that the method is successfully validated with images collected by a hyperspectral camera and a laser scanner.
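
    The Direct Linear Transformation mentioned above estimates a mapping between two image planes from a few corresponding points; a minimal homography DLT in NumPy (the point values are synthetic, and a planar homography is only a simplified stand-in for the paper's full camera model):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (each Nx2, N >= 4)
    by the Direct Linear Transformation: two equations per point pair,
    solved as the null space of the stacked system via SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)               # null-space vector, reshaped
    return H / H[2, 2]

# Four synthetic correspondences between a 2D laser image and a second plane
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -2.0], [0.001, 0.0, 1.0]])
pts = np.c_[src, np.ones(4)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H = dlt_homography(src, dst)
```

    With exact correspondences the null space is one-dimensional, so the true transformation is recovered up to scale.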

  11. Automatic neutron PSD transmission from a process computer to a timeshare system

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, J.B.; Sides, W.H. Jr.

    1977-04-01

    A method for automatically telephoning, connecting, and transmitting neutron power-spectral density data from a CDC-1700 process control computer to a PDP-10 time-share system is described. Detailed program listings and block diagrams are included.

  12. Multiscale Strategies in Automatic Image-Domain Waveform Tomography

    Institute of Scientific and Technical Information of China (English)

    Yujin Liu; Zhenchun Li

    2015-01-01

    Multiscale strategies are very important in the successful application of waveform-based velocity inversion. The strategy of sequentially proceeding from long to short scales of the velocity model has been well developed in full waveform inversion (FWI) to solve the local minimum problem. In contrast, it is not well understood in image-domain waveform tomography (IWT), which back-projects incoherent waveform components of the common image gather into velocity updates. IWT is less prone to the local minimum problem but tends to build a long-scale model with low resolution. In order to build both long- and short-scale models by IWT, we discuss several multiscale strategies restricted to the image domain. The strategies include model reparameterization, objective function switching and gradient rescaling. Numerical tests on the Marmousi model and real data demonstrate that our proposed multiscale IWT is effective in building velocity models with a wide wavenumber spectrum.

  13. Automatic annotation of radiological observations in liver CT images.

    Science.gov (United States)

    Gimenez, Francisco; Xu, Jiajing; Liu, Yi; Liu, Tiffany; Beaulieu, Christopher; Rubin, Daniel; Napel, Sandy

    2012-01-01

    We aim to predict radiological observations using computationally derived imaging features extracted from computed tomography (CT) images. We created a dataset of 79 CT images containing liver lesions identified and annotated by a radiologist using a controlled vocabulary of 76 semantic terms. Computationally derived features were extracted describing intensity, texture, shape, and edge sharpness. Traditional logistic regression was compared to L1-regularized logistic regression (LASSO) for predicting the radiological observations from computational features. The approach was evaluated by leave-one-out cross-validation. Informative radiological observations such as lesion enhancement, hypervascular attenuation, and homogeneous retention were predicted well by computational features. By exploiting relationships between computational and semantic features, this approach could lead to more accurate and efficient radiology reporting.
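
    The LASSO comparison in this abstract relies on L1-regularized logistic regression; a minimal proximal-gradient (ISTA) version can be sketched in NumPy on synthetic data (the regularization strength, learning rate, and data are illustrative, not the paper's):

```python
import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-regularized logistic regression by proximal gradient (ISTA):
    a gradient step on the logistic loss, then soft-thresholding."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y) / n)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # L1 prox
    return w

# Synthetic stand-in for observation prediction: 2 of 10 features matter
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 10))
y = (X[:, 0] - 2.0 * X[:, 1] + 0.3 * rng.normal(size=400) > 0).astype(float)
w = lasso_logistic(X, y)
```

    The L1 penalty drives the coefficients of the uninformative features toward zero, which is why LASSO is attractive when many candidate image features are available.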

  14. Automatic detection of anatomical landmarks in uterine cervix images.

    Science.gov (United States)

    Greenspan, Hayit; Gordon, Shiri; Zimmerman, Gali; Lotenberg, Shelly; Jeronimo, Jose; Antani, Sameer; Long, Rodney

    2009-03-01

    The work focuses on a unique medical repository of digital cervicographic images ("Cervigrams") collected by the National Cancer Institute (NCI) in longitudinal multiyear studies. NCI, together with the National Library of Medicine (NLM), is developing a unique web-accessible database of the digitized cervix images to study the evolution of lesions related to cervical cancer. Tools are needed for automated analysis of the cervigram content to support cancer research. We present a multistage scheme for segmenting and labeling regions of anatomical interest within the cervigrams. In particular, we focus on the extraction of the cervix region and fine detection of the cervix boundary; specular reflection is eliminated as an important preprocessing step; in addition, the entrance to the endocervical canal (the "os"), is detected. Segmentation results are evaluated on three image sets of cervigrams that were manually labeled by NCI experts.

  15. Image processing using reconfigurable FPGAs

    Science.gov (United States)

    Ferguson, Lee

    1996-10-01

    The use of reconfigurable field-programmable gate arrays (FPGAs) for imaging applications shows considerable promise to fill the gap that often occurs when digital signal processor chips fail to meet performance specifications. Single-chip DSPs do not have the overall performance to meet the needs of many imaging applications, particularly in real-time designs. Using multiple DSPs to boost performance often presents major design challenges in maintaining data alignment and process synchronization. These challenges can impose serious cost, power consumption and board space penalties. Image processing requires manipulating massive amounts of data at high speed. Although DSP chips can process data at high speeds, their architectures can inhibit overall system performance in real-time imaging. The rate of operations can be increased when they are performed in dedicated hardware, such as special-purpose imaging devices and FPGAs, which provide the horsepower necessary to implement real-time image processing products successfully and cost-effectively. For many fixed applications, non-SRAM-based (antifuse or flash-based) FPGAs provide the raw speed to accomplish standard high-speed functions. However, in applications where algorithms are continuously changing and compute operations must be modified, only SRAM-based FPGAs give enough flexibility. The addition of reconfigurable FPGAs as a flexible hardware facility enables DSP chips to perform optimally. The benefits primarily stem from optimizing the hardware for the algorithms or from using reconfigurable hardware to enhance the product architecture. And with SRAM-based FPGAs that are capable of partial dynamic reconfiguration, such as the Cache-Logic FPGAs from Atmel, continuous modification of data and logic is not only possible, it is practical as well. First we review the particular demands of image processing. Then we present various applications and discuss strategies for exploiting the capabilities of

  16. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    Science.gov (United States)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
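
    The idea behind "projection masking" — excluding unreliable image content from the 2D similarity computation — can be illustrated with a masked normalized cross-correlation. This is a hedged toy sketch, not the paper's 3D-2D registration pipeline; the images, the simulated tool, and the mask are all synthetic.

```python
import numpy as np

def masked_ncc(fixed, moving, mask):
    """Normalized cross-correlation restricted to pixels where mask is True,
    so that unreliable regions (e.g. surgical tools) are excluded from the
    2D similarity computation."""
    a = fixed[mask].astype(float)
    b = moving[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

# Toy example: a bright "tool" corrupts one corner of the moving image.
rng = np.random.default_rng(1)
fixed = rng.normal(size=(64, 64))
moving = fixed + 0.05 * rng.normal(size=(64, 64))
moving[:16, :16] = 10.0                     # simulated occluding tool
mask = np.ones((64, 64), dtype=bool)
mask[:16, :16] = False                      # mask the tool out

sim_masked = masked_ncc(fixed, moving, mask)
sim_full = masked_ncc(fixed, moving, np.ones((64, 64), dtype=bool))
print(sim_masked, sim_full)                 # masking restores the similarity
```

    The unmasked score is dragged down by the tool region even though the anatomy matches well; restricting the metric to reliable content recovers a near-perfect score.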

  17. Automatic landmark generation for deformable image registration evaluation for 4D CT images of lung

    Science.gov (United States)

    Vickress, J.; Battista, J.; Barnett, R.; Morgan, J.; Yartsev, S.

    2016-10-01

    Deformable image registration (DIR) has become a common tool in medical imaging across both diagnostic and treatment specialties, but the methods used offer varying levels of accuracy. Evaluation of DIR is commonly performed using manually selected landmarks, which is subjective, tedious and time-consuming. We propose a semi-automated method that saves time and provides accuracy comparable to manual selection. Three landmarking methods, including manual (with two independent observers), scale invariant feature transform (SIFT), and SIFT with manual editing (SIFT-M), were tested on 10 thoracic 4DCT image studies corresponding to the 0% and 50% phases of respiration. Results of each method were evaluated against a gold standard (GS) landmark set, comparing both mean and proximal landmark displacements. The proximal method compares the local deformation magnitude between a test landmark pair and the closest GS pair. Statistical analysis was done using an intraclass correlation (ICC) between test and GS displacement values. The creation time per landmark pair was 22, 34, 2.3, and 4.3 s for observers 1 and 2, SIFT, and SIFT-M methods, respectively. Across 20 lungs from the 10 CT studies, the ICC values between the GS and observer 1, observer 2, SIFT, and SIFT-M methods were 0.85, 0.85, 0.84, and 0.82 for mean lung deformation, and 0.97, 0.98, 0.91, and 0.96 for proximal landmark deformation, respectively. SIFT and SIFT-M methods have an accuracy that is comparable to manual methods when tested against a GS landmark set while saving 90% of the time. The number and distribution of landmarks significantly affected the analysis, as manifested by the different results for the mean deformation and proximal landmark deformation methods. Automatic landmark methods offer a promising alternative to manual landmarking if the quantity, quality and distribution of landmarks can be optimized for the intended application.

  18. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Science.gov (United States)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While 3D building models are clearly important components of many applications, economical and automatic techniques for their generation, which take advantage of the available multi-sensor data and combine processing strategies, are still lacking. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data-driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  19. Automatic Detection of Sub-Kilometer Craters in High Resolution Images of Mars

    Science.gov (United States)

    Urbach, E. R.; Stepinski, T. F.

    2008-03-01

    A method for automatic detection of impact craters in high resolution images of Mars is presented. This new method enables detection of sub-kilometer craters that are too small to be cataloged by previous methods and too numerous for manual detection.

  20. Normalising orthographic and dialectal variants for the automatic processing of Swiss German

    OpenAIRE

    Samardzic, Tanja; Scherrer, Yves; Glaser, Elvira

    2015-01-01

    Swiss dialects of German are, unlike most dialects of well standardised languages, widely used in everyday communication. Despite this fact, they lack tools and resources for natural language processing. The main reason for this is the fact that the dialects are mostly spoken and that written resources are small and highly inconsistent. This paper addresses the great variability in writing that poses a problem for automatic processing. We propose an automatic approach to normalising the varia...

  1. Fully automatic prostate segmentation from transrectal ultrasound images based on radial bas-relief initialization and slice-based propagation.

    Science.gov (United States)

    Yu, Yanyan; Chen, Yimin; Chiu, Bernard

    2016-07-01

    Prostate segmentation from transrectal ultrasound (TRUS) images plays an important role in the diagnosis and treatment planning of prostate cancer. In this paper, a fully automatic slice-based segmentation method was developed to segment TRUS prostate images. The initial prostate contour was determined using a novel method based on the radial bas-relief (RBR) method and a false edge removal algorithm proposed herein. 2D slice-based propagation was used, in which the contour on each image slice was deformed using a level-set evolution model driven by edge-based and region-based energy fields generated by the dyadic wavelet transform. The optimized contour on an image slice was propagated to the adjacent slice and subsequently deformed using the level-set model. The propagation continued until all image slices were segmented. To determine the initial slice where the propagation began, the initial prostate contour was deformed individually on each transverse image. A method was developed to self-assess the accuracy of the deformed contour based on the average image intensity inside and outside of the contour. The transverse image on which the highest accuracy was attained was chosen to be the initial slice for the propagation process. Evaluation was performed for 336 transverse images from 15 prostates, including images acquired at the mid-gland, base and apex regions of the prostates. The average mean absolute difference (MAD) between algorithm and manual segmentations was 0.79 ± 0.26 mm, which is comparable to results produced by previously published semi-automatic segmentation methods. Statistical evaluation shows that accurate segmentation was obtained not only at the mid-gland, but also at the base and apex regions.

  2. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  3. Diffeomorphic image registration with automatic time-step adjustment

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Klein, S.; Sommer, Stefan Horst;

    2015-01-01

    In this paper, we propose an automated Euler's time-step adjustment scheme for diffeomorphic image registration using stationary velocity fields (SVFs). The proposed variational problem aims at bounding the inverse consistency error by adaptively adjusting the number of Euler's step required...


  5. Automatic line detection in document images using recursive morphological transforms

    Science.gov (United States)

    Kong, Bin; Chen, Su S.; Haralick, Robert M.; Phillips, Ihsin T.

    1995-03-01

    In this paper, we describe a system that detects lines of various types, e.g., solid lines and dotted lines, on document images. The main techniques are based on recursive morphological transforms, namely the recursive opening and closing transforms. The advantages of these transforms are that they can perform binary opening and closing with any sized structuring element simultaneously in constant time per pixel, and that they offer a solution to morphological image analysis problems where the sizes of the structuring elements have to be determined after examination of the image itself. The system is evaluated on about 1,200 fully ground-truthed IRS tax form images of different qualities. The line detection output is compared with a set of hand-drawn ground-truth lines. Statistics such as the number and rate of correct detections, missed detections, and false alarms are calculated. The performance of 32 algorithms for solid line detection is compared to find the best one. The optimal algorithm tuning parameter settings could be estimated on the fly using a regression tree.
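
    The core morphological idea — opening with a long structuring element keeps only ink runs at least as long as the element — can be sketched with SciPy's (non-recursive) binary morphology; the toy image and the element size are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# Toy binary document image: True = ink.
img = np.zeros((40, 80), dtype=bool)
img[20, 5:75] = True        # a long solid horizontal line
img[5:12, 10] = True        # a short vertical stroke (text-like)
img[8, 30:34] = True        # a small blob

# Opening with a 1x31 horizontal structuring element keeps only runs of
# ink at least 31 pixels wide, i.e. the solid line.
h_line = np.ones((1, 31), dtype=bool)
lines = ndimage.binary_opening(img, structure=h_line)

ys, xs = np.nonzero(lines)
print(sorted(set(ys)))      # only the line's row survives
```

    The recursive transforms in the paper go further: they compute this result for all structuring-element sizes at once, so the best size can be chosen after inspecting the image.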

  6. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that are based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
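
    The kind of min-sum difference equation that implements a discrete distance transform can be illustrated with the classic two-pass city-block recursion; a minimal sketch of the general idea, not the paper's formulation:

```python
import numpy as np

def city_block_dt(mask):
    """Two-pass min-sum recursion: city-block distance of each foreground
    pixel (True) to the nearest background pixel (False)."""
    rows, cols = mask.shape
    d = np.where(mask, rows + cols, 0)      # rows+cols acts as infinity
    # Forward pass: propagate distances from the top-left.
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # Backward pass: propagate from the bottom-right.
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < cols - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

# Single background pixel at (2, 3): distances are |i-2| + |j-3|.
mask = np.ones((5, 7), dtype=bool)
mask[2, 3] = False
d = city_block_dt(mask)
print(d)
```

    Adding diagonal neighbors with weighted increments to the same recursion yields the chessboard and chamfer approximations discussed in the literature.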

  7. Automatic Cell Detection in Bright-Field Microscope Images Using SIFT, Random Forests, and Hierarchical Clustering.

    Science.gov (United States)

    Mualla, Firas; Scholl, Simon; Sommerfeldt, Bjorn; Maier, Andreas; Hornegger, Joachim

    2013-12-01

    We present a novel machine learning-based system for unstained cell detection in bright-field microscope images. The system is fully automatic since it requires no manual parameter tuning. It is also highly invariant with respect to illumination conditions and to the size and orientation of cells. Images from two adherent cell lines and one suspension cell line were used in the evaluation, for a total of more than 3500 cells. Besides real images, simulated images were also used in the evaluation. The detection error was between approximately zero and 15.5%, which is significantly superior to baseline approaches.

  8. DETECTING DIGITAL IMAGE FORGERIES USING RE-SAMPLING BY AUTOMATIC REGION OF INTEREST (ROI)

    Directory of Open Access Journals (Sweden)

    P. Subathra

    2012-05-01

    Nowadays, digital images can be easily altered using high-performance computers, sophisticated photo-editing and computer graphics software, etc. This affects the authenticity of images in law, politics, the media, and business. In this paper, we propose a resampling technique using an automatic selection of Region of Interest (ROI) method for determining the authenticity of a digitally altered image. The proposed technique provides better results under scaling, rotation, skewing transformations, and any of their arbitrary combinations in an image. It surmounts the protracted complexity of manual ROI selection.

  9. Automatic Detection of Building Points from LIDAR and Dense Image Matching Point Clouds

    Science.gov (United States)

    Maltezos, E.; Ioannidis, C.

    2015-08-01

    This study aims to detect building points automatically: (a) from a LIDAR point cloud, using simple filtering techniques that enhance the geometric properties of each point, and (b) from a point cloud extracted by applying dense image matching to high-resolution colour-infrared (CIR) digital aerial imagery using the semi-global matching (SGM) stereo method. In the first step, the vegetation is removed. For the LIDAR point cloud, two different methods are implemented and evaluated, using first the normals and then the roughness values: (1) the proposed scan-line smooth filtering with a thresholding process, and (2) bilateral filtering with a thresholding process. For the CIR point cloud, a variation of the normalized differential vegetation index (NDVI) is computed for the same purpose. Afterwards, the bare earth is extracted using a morphological operator and removed from the rest of the scene so as to retain the building points. The results of the buildings extracted by each approach in an urban area in northern Greece are evaluated using an existing orthoimage as reference; also, the results are compared with the corresponding classified buildings extracted by two commercial software packages. Finally, in order to verify the utility and functionality of the extracted building points that achieved the best accuracy, 3D models in terms of Level of Detail 1 (LoD 1) and a 3D building change detection process are indicatively produced for a sub-region of the overall scene.
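
    The NDVI-style vegetation test on a CIR image can be sketched as follows; the channel ordering (NIR, red, green) and the 0.3 threshold are assumptions for illustration, not values from the paper:

```python
import numpy as np

def ndvi(cir):
    """NDVI from a colour-infrared image, assuming channel 0 is
    near-infrared and channel 1 is red (a common CIR convention)."""
    nir = cir[..., 0].astype(float)
    red = cir[..., 1].astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Toy 2x2 CIR image: left column vegetation (high NIR), right column roofs.
cir = np.array([[[200, 60, 40], [90, 80, 70]],
                [[210, 50, 45], [85, 95, 80]]], dtype=np.uint8)

veg_mask = ndvi(cir) > 0.3      # threshold is illustrative
print(veg_mask)
```

    Vegetation reflects strongly in the near-infrared, so thresholding the index isolates vegetation points for removal before building extraction.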

  10. Feature-point-extracting-based automatically mosaic for composite microscopic images

    Institute of Scientific and Technical Information of China (English)

    YIN YanSheng; ZHAO XiuYang; TIAN XiaoFeng; LI Jia

    2007-01-01

    Image mosaicking is a crucial step in the three-dimensional reconstruction of composite materials to align the serial images. A novel method is adopted to mosaic two SiC/Al microscopic images with an amplification coefficient of 1000. The two images are denoised with a Gaussian model, and feature points are then extracted using the Harris corner detector. The feature points are filtered through a Canny edge detector. A 40 × 40 feature template is chosen by sowing a seed in an overlapped area of the reference image, and the homologous region in the floating image is acquired automatically by way of correlation analysis. The feature points in the matched templates are used as feature point sets. Using the transformation parameters acquired by the SVD-ICP method, the two images are transformed into universal coordinates and merged into the final mosaic image.
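
    The correlation-analysis step — locating the homologous region of a feature template in the floating image — can be sketched with an exhaustive normalized cross-correlation search. The template here is 10 × 10 rather than the paper's 40 × 40, purely to keep the toy example fast, and the "floating image" is simulated by searching the reference itself:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def find_template(image, template):
    """Exhaustive correlation search; returns the top-left (row, col)
    of the best-matching window."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            score = ncc(image[i:i + th, j:j + tw], template)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Simulated overlap: the feature template is cut from the reference image
# and relocated in it by correlation analysis.
rng = np.random.default_rng(2)
ref = rng.random((50, 50))
template = ref[20:30, 15:25]
print(find_template(ref, template))     # → (20, 15)
```

    In practice the search is restricted to the expected overlap region, and the matched template corners then serve as the homologous point pairs for the transformation estimate.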

  11. An automatic segmentation method for multi-tomatoes image under complicated natural background

    Science.gov (United States)

    Yin, Jianjun; Mao, Hanping; Hu, Yongguang; Wang, Xinzhong; Chen, Shuren

    2006-12-01

    Distinguishing mature fruits from complicated backgrounds and determining their three-dimensional location is fundamental to intelligent fruit-picking. Various methods for fruit identification can be found in the literature. However, surprisingly little attention has been paid to image segmentation of multiple fruits whose growth states are separated, connected, overlapped or partially covered by branches and leaves of the plant under natural illumination conditions. In this paper we present an automatic segmentation method that comprises three main steps. Firstly, the Red and Green components are extracted from the RGB color image, and subtracting the Green component from the Red component gives the R-G chromatic aberration gray-level image. Gray-level values of objects and background differ markedly in the R-G image. Using this feature, Otsu's threshold method is applied for adaptive R-G image segmentation. Then, marker-controlled watershed segmentation based on morphological grayscale reconstruction is applied to the Red component image to find the boundaries of connected or overlapped tomatoes. Finally, the results of the above two steps are intersected to produce the binary image of the final segmentation. Tests show that the automatic segmentation method performs satisfactorily on multi-tomato images of various growth states under natural illumination conditions. Meanwhile, it is very robust for multi-tomato images of different maturity.
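
    The adaptive thresholding step — Otsu's method applied to the R-G chromatic aberration image — can be sketched as follows; the synthetic image values standing in for foliage and tomato pixels are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic R-G chromatic aberration image: red tomato pixels stand out
# against the green foliage background.
rng = np.random.default_rng(3)
rg = rng.integers(10, 40, size=(40, 40)).astype(np.uint8)   # foliage
rg[10:25, 10:25] = rng.integers(180, 220, size=(15, 15))    # tomato
t = otsu_threshold(rg)
tomato_mask = rg > t
print(t)
```

    Because red tomatoes have a large R-G difference and green foliage does not, the histogram is strongly bimodal and Otsu's threshold lands cleanly between the two modes.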

  12. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning.

    Science.gov (United States)

    Rundo, Leonardo; Stefano, Alessandro; Militello, Carmelo; Russo, Giorgio; Sabini, Maria Gabriella; D'Arrigo, Corrado; Marletta, Francesco; Ippolito, Massimo; Mauri, Giancarlo; Vitabile, Salvatore; Gilardi, Maria Carla

    2017-06-01

    Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), using [(11)C]-Methionine-PET (MET-PET) images, and the segmented Gross Target Volume (GTV), on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. The GTV often does not entirely match the BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable, and it could be used to provide complementary information useful for treatment planning. In this way, the BTV can be used to modify the GTV, enhancing Clinical Target Volume (CTV) delineation. A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife(®) treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment the BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTVMRI. A total of 19 brain metastatic tumors that underwent stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity between the PET and MRI segmentation approaches. Statistical analysis was also included to measure correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment

  13. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    In production processes the use of image processing systems is widespread. Hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article characterises the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process.

  14. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanning images distorted by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers. It is a rather challenging task to locate the accurate positions of the points and obtain accurate homonymous point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps. To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymous point sets, the two images are registered using the TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments evaluate the distribution of the point set and the correct matching rate on synthetic and real data, respectively. The last experiment is designed on non-rigidly deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
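
    The final TPS warping step can be sketched with SciPy's radial basis function interpolator, whose thin-plate-spline kernel interpolates matched point sets exactly when smoothing is zero. The landmark coordinates below are made up for illustration; they stand in for the homonymous point sets produced by the matching stage:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched homonymous point sets (assumed already produced by the
# feature-matching stage; coordinates are made up for illustration).
src = np.array([[0, 0], [0, 100], [100, 0], [100, 100], [50, 50]], dtype=float)
dst = src + np.array([[2, 1], [1, -2], [-1, 2], [2, 2], [4, 3]], dtype=float)

# Thin plate spline mapping src -> dst; smoothing=0 makes the warp
# pass exactly through every landmark pair.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=0.0)

# Any pixel coordinate of the floating image can now be warped.
print(tps(np.array([[25.0, 25.0], [75.0, 75.0]])))
```

    The spline minimizes a bending-energy functional, so the deformation stays smooth between landmarks, which is why TPS is the standard model for landmark-driven non-rigid registration.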

  15. Automatic Marker-free Longitudinal Infrared Image Registration by Shape Context Based Matching and Competitive Winner-guided Optimal Corresponding

    Science.gov (United States)

    Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng

    2017-02-01

    Long-term comparison of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, in which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to a body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared images during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel method of extracting a greater amount of useful data from infrared images.

  16. Image processing of galaxy photographs

    Science.gov (United States)

    Arp, H.; Lorre, J.

    1976-01-01

    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  17. Automatic tissue segmentation of breast biopsies imaged by QPI

    Science.gov (United States)

    Majeed, Hassaan; Nguyen, Tan; Kandel, Mikhail; Marcias, Virgilia; Do, Minh; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel

    2016-03-01

    The current tissue evaluation method for breast cancer would greatly benefit from higher throughput and less inter-observer variation. Since quantitative phase imaging (QPI) measures physical parameters of tissue, it can be used to find quantitative markers, eliminating observer subjectivity. Furthermore, since the pixel values in QPI remain the same regardless of the instrument used, classifiers can be built to segment various tissue components without need for color calibration. In this work we use a texton-based approach to segment QPI images of breast tissue into various tissue components (epithelium, stroma, or lumen). A tissue microarray comprising 900 unstained cores from 400 different patients was imaged using Spatial Light Interference Microscopy. The training data were generated by manually segmenting the images for 36 cores and labelling each pixel (epithelium, stroma, or lumen). For each pixel in the data, a response vector was generated by the Leung-Malik (LM) filter bank, and these responses were clustered using the k-means algorithm to find the centers (called textons). A random forest classifier was then trained to find the relationship between a pixel's label and the histogram of textons in that pixel's neighborhood. Segmentation was carried out on the validation set by calculating the texton histogram in a pixel's neighborhood and generating a label based on the model learnt during training. Segmentation of the tissue into various components is an important step toward efficiently computing parameters that are markers of disease. Automated segmentation, followed by diagnosis, can improve the accuracy and speed of analysis, leading to better health outcomes.
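
    The texton pipeline (filter-bank responses, k-means textons, texton histograms, random forest) can be sketched with scikit-learn. The filter responses, neighborhoods, and labels below are synthetic stand-ins, not SLIM data; only the structure of the pipeline follows the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Stand-in for filter-bank output: rows are pixels, columns are the
# responses of the 48 Leung-Malik filters.
responses = rng.normal(size=(3000, 48))
n_textons = 10

# Step 1: cluster the responses; the cluster centers are the textons.
km = KMeans(n_clusters=n_textons, n_init=10, random_state=0).fit(responses)

def texton_histogram(patch_responses):
    """Histogram of texton assignments over one pixel neighborhood."""
    ids = km.predict(patch_responses)
    return np.bincount(ids, minlength=n_textons) / len(ids)

# Step 2: synthetic neighborhoods for three tissue classes
# (0 = epithelium, 1 = stroma, 2 = lumen), 30 neighborhoods each.
patches = [rng.normal(loc=c, size=(50, 48))
           for c in (-1.0, 0.0, 1.0) for _ in range(30)]
labels = np.repeat([0, 1, 2], 30)
H = np.array([texton_histogram(p) for p in patches])

# Step 3: random forest maps texton histograms to tissue labels.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(H, labels)
print(rf.score(H, labels))
```

    At segmentation time, each pixel's neighborhood is converted to a texton histogram in the same way and passed to the trained forest for a tissue label.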

  18. A multiresolution prostate representation for automatic segmentation in magnetic resonance images.

    Science.gov (United States)

    Alvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2017-04-01

Accurate prostate delineation is necessary in radiotherapy processes for concentrating the dose onto the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed over magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, as human intervention is a time-consuming task with high intra- and interobserver variability rates, (semi-)automatic organ delineation tools have emerged to cope with these challenges, reducing the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. The proposed method starts by selecting the set of prostates most similar to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, allowing it to capture the main relevant features of the organ boundary. Afterward, the new prostate is automatically segmented by combining the nonrigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed based on morphological operations. The proposed approach was evaluated against expert manual segmentation under a leave-one-out scheme using two public datasets, obtaining averaged Dice coefficients of 82% ± 0.07 and 83% ± 0.06 and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. The proposed multiresolution representation provides a feature space that follows a local salient-point criterion and a global rule on the spatial configuration among these points to find the most similar prostates. This strategy suggests an easy adaptation in the clinical

  19. Rapid prototyping in the development of image processing systems

    Science.gov (United States)

    von der Fecht, Arno; Kelm, Claus Thomas

    2004-08-01

This contribution presents a rapid prototyping approach for the real-time demonstration of image processing algorithms. As an example, EADS/LFK has developed a basic IR target tracking system implementing this approach. Traditionally, in research and industry, image processing algorithms are simulated time-independently on a host computer. This method is good for demonstrating the algorithms' capabilities. Rarely is a time-dependent simulation, or even a real-time demonstration on a target platform, performed to prove the real-time capabilities. In 1D signal processing applications, time-dependent simulation and real-time demonstration have already been used for quite a while. For time-dependent simulation, Simulink from The MathWorks has established itself as an industry standard. Combined with The MathWorks' Real-Time Workshop, the simulation model can be transferred to a real-time target processor. The executable is generated automatically by the Real-Time Workshop directly out of the simulation model. In 2D signal processing applications like image processing, The MathWorks' Matlab is commonly used for time-independent simulation. To achieve time-dependent simulation and real-time demonstration capabilities, the algorithms can be transferred to Simulink, which in fact runs on top of Matlab. Additionally, to increase performance, Simulink models or parts of them can be transferred to Xilinx FPGAs using Xilinx' System Generator. With a single model and the automatic workflow, both time-dependent simulation and real-time demonstration are covered, leading to an easy and flexible rapid prototyping approach. EADS/LFK is going to use this approach for a wider spectrum of IR image processing applications like automatic target recognition, image-based navigation, or imaging laser radar target recognition.

  20. Image processing system for digital chest X-ray images

    Energy Technology Data Exchange (ETDEWEB)

    Cocklin, M.; Gourlay, A.; Jackson, P.; Kaye, G.; Miessler, M. (I.B.M. U.K. Scientific Centre, Winchester (UK)); Kerr, I.; Lams, P. (Radiology Department, Brompton Hospital, London (UK))

    1984-01-01

    This paper investigates the requirements for image processing of digital chest X-ray images. These images are conventionally recorded on film and are characterised by large size, wide dynamic range and high resolution. X-ray detection systems are now becoming available for capturing these images directly in photoelectronic-digital form. The hardware and software facilities required for handling these images are described. These facilities include high resolution digital image displays, programmable video look up tables, image stores for image capture and processing and a full range of software tools for image manipulation. Examples are given of the applications of digital image processing techniques to this class of image.

  1. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks

    Science.gov (United States)

    Cruz-Roa, Angel; Basavanhally, Ajay; González, Fabio; Gilmore, Hannah; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant

    2014-03-01

This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This approach is similar to how the human brain works, using different interpretation levels or layers of the most representative and useful features, resulting in a hierarchical learned representation. These methods have been shown to outpace traditional approaches on some of the most challenging problems in areas such as speech recognition and object detection. Invasive breast cancer detection is a time-consuming and challenging task, primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of tumor aggressiveness grade and prediction of patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which would also ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained over a large number of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated on a WSI dataset from 162 patients diagnosed with IDC. 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via delineation of the region of cancer by an expert pathologist on the digitized slides. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI. Our method yielded the best quantitative
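A patch-based CNN classifier of the kind described can be sketched as a forward pass. This is a minimal illustration with made-up layer sizes (two conv+ReLU+pool stages and a softmax over {IDC, non-IDC}), not the architecture used in the paper, and a real system would of course train the weights by backpropagation.

```python
import numpy as np

def conv_layer(x, W, b):
    """Valid cross-correlation + ReLU: x is (H, W, Cin), W is (k, k, Cin, Cout)."""
    k = W.shape[0]
    H, Wd, _ = x.shape
    out = np.zeros((H - k + 1, Wd - k + 1, W.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, W, axes=([0, 1, 2], [0, 1, 2])) + b
    return np.maximum(out, 0)

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling (trailing rows/cols are dropped)."""
    H, W, C = x.shape
    H2, W2 = H // s, W // s
    return x[:H2 * s, :W2 * s].reshape(H2, s, W2, s, C).max(axis=(1, 3))

def predict_patch(patch, params):
    """Two conv+pool stages, then a softmax over the two classes."""
    x = max_pool(conv_layer(patch, *params['c1']))
    x = max_pool(conv_layer(x, *params['c2']))
    z = x.reshape(-1) @ params['fc'][0] + params['fc'][1]
    e = np.exp(z - z.max())
    return e / e.sum()
```

Sliding this predictor over tissue patches of a WSI yields the per-region IDC probability map used for visual analysis.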

  2. Image analysis in automatic system of pollen recognition

    Directory of Open Access Journals (Sweden)

    Piotr Rapiejko

    2012-12-01

Full Text Available In allergology practice and research, it would be convenient to receive pollen identification and monitoring results in a much shorter time than is possible with human identification. Image-based analysis is one of the approaches to an automated identification scheme for pollen grains, and pattern recognition on such images is widely used as a powerful tool. The goal of such an attempt is to provide accurate, fast recognition, classification and counting of pollen grains by a computer system for monitoring. The isolated pollen grains are objects extracted, under proper conditions, from microscopic images captured by a CCD camera and a PC for further analysis. The algorithms are based on knowledge from feature-vector analysis of estimated parameters calculated from grain characteristics, including morphological features, surface features and other applicable estimated characteristics. Segmentation algorithms specially tailored to pollen object characteristics provide exact descriptions of the pollen characteristics (border and internal features) already used by human experts. The specific characteristics and their measures are statistically estimated for each object. Low-level statistics of the estimated local and global measures of the features establish the feature space. Special care should be taken in choosing these features and in constructing the feature space, to optimize the number of subspaces for higher recognition rates in low-level classification for type differentiation of pollen grains. The results of estimated parameters of the feature vector in a low-dimensional space for some typical pollen types are presented, as well as some effective and fast recognition results of experiments performed on different pollens.
The findings show the evidence of using properly chosen estimators of central and invariant moments (M21, NM2, NM3, NM8, NM9) of tailored characteristics for good enough classification measures (efficiency > 95%), even for low-dimensional classifiers.
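Central and normalized (scale-invariant) moments of the kind referenced above can be computed directly from a binary grain mask. The sketch below shows the standard textbook definitions, not the paper's exact estimator set (M21, NM2, etc.).

```python
import numpy as np

def central_moment(img, p, q):
    """mu_pq: moment of order (p, q) about the object's centroid."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xb = (xs * img).sum() / m00   # centroid x
    yb = (ys * img).sum() / m00   # centroid y
    return (((xs - xb) ** p) * ((ys - yb) ** q) * img).sum()

def normalized_moment(img, p, q):
    """eta_pq = mu_pq / mu_00^(1 + (p+q)/2): translation- and scale-invariant."""
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2.0)
```

Because the moments are taken about the centroid, the same grain shifted elsewhere in the image yields identical feature values, which is what makes them usable in the feature space described above.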

  3. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Science.gov (United States)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs and remote controls connected to a ground control system over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, the existing method of using an RF modem has limitations in long-distance communication. The smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV with a newly developed communication module, which carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system is an image capturing device for drones in areas that need image capture, together with software for loading a smart camera and managing it. This system is composed of automatic shooting using the sensors of the smart camera, and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used include Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  4. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Directory of Open Access Journals (Sweden)

    J. W. Park

    2016-06-01

Full Text Available Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs and remote controls connected to a ground control system over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, the existing method of using an RF modem has limitations in long-distance communication. The smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV with a newly developed communication module, which carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system is an image capturing device for drones in areas that need image capture, together with software for loading a smart camera and managing it. This system is composed of automatic shooting using the sensors of the smart camera, and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used include Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  5. CMOS imagers from phototransduction to image processing

    CERN Document Server

    Etienne-Cummings, Ralph

    2004-01-01

The idea of writing a book on CMOS imaging has been brewing for several years. It was placed on a fast track after we agreed to organize a tutorial on CMOS sensors for the 2004 IEEE International Symposium on Circuits and Systems (ISCAS 2004). This tutorial defined the structure of the book, but as first-time authors/editors, we had a lot to learn about the logistics of putting together information from multiple sources. Needless to say, it was a long road between the tutorial and the book, and it took more than a few months to complete. We hope that you will find our journey worthwhile and the collated information useful. The laboratories of the authors are located at many universities distributed around the world. Their unifying theme, however, is the advancement of knowledge for the development of systems for CMOS imaging and image processing. We hope that this book will highlight the ideas that have been pioneered by the authors, while providing a roadmap for new practitioners in this field to exploit exc...

  6. Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images

    Science.gov (United States)

    Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana

    2015-03-01

    Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.
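Supervised voxel classification combining spatial and intensity information, as the evaluated method does, can be illustrated with a toy pipeline. The feature choice and the k-NN classifier below are stand-ins for the paper's staged classifier, not its actual design.

```python
import numpy as np

def voxel_features(vol):
    """Per-voxel feature vector: intensity plus normalized spatial coordinates."""
    zz, yy, xx = np.mgrid[:vol.shape[0], :vol.shape[1], :vol.shape[2]]
    return np.stack([vol.ravel(),
                     zz.ravel() / vol.shape[0],
                     yy.ravel() / vol.shape[1],
                     xx.ravel() / vol.shape[2]], axis=1)

def knn_classify(train_X, train_y, X, k=3):
    """Label each feature row by majority vote of its k nearest training voxels."""
    d = ((X[:, None, :] - train_X[None]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in train_y[idx]])
```

In a staged scheme, the labels produced by one pass (e.g., from intensity alone) can be appended to the feature vector for the next pass, which is the spirit of the subsequent-stage classification described above.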

  7. Fingerprint recognition using image processing

    Science.gov (United States)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. Fingerprint recognition is used in forensic science, where it helps to find criminals, and in the authentication of a particular person, since the fingerprint is unique to each individual and differs from person to person. The present paper describes fingerprint recognition methods using various edge detection techniques, and also how to detect a fingerprint correctly using camera images. The described method does not require a special device; a simple camera can be used, so the technique can also be applied with a simple camera phone. Various factors affect the process: poor illumination, noise disturbance, viewpoint dependence, climate factors, and imaging conditions. These factors have to be considered, so various image enhancement techniques must be applied to increase the quality of the image and remove noise. The present paper describes the technique of applying contour tracking to the fingerprint image, then edge detection on the contour, and finally matching the edges inside the contour.
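A basic edge map of the kind such methods start from can be produced with a Sobel gradient filter. This is a generic sketch (fixed 3×3 kernels and a single magnitude threshold), not the specific edge detector evaluated in the paper.

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Binary edge map: normalized Sobel gradient magnitude above a threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img, 1, mode='edge')  # replicate borders to avoid false frame edges
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            s = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * s
            gy += ky[i, j] * s
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12) > thresh
```

For low-quality camera captures, the enhancement steps mentioned above (denoising, contrast normalization) would run before this filter so that ridge edges dominate the gradient magnitude.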

  8. The perfect photo book: hints for the image selection process

    Science.gov (United States)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2007-01-01

An ever increasing amount of digital images is being captured. This increase has several reasons. People are afraid of not "capturing the moment", and pressing the shutter is not directly linked to costs, as was the case with silver halide photography. This behaviour seems convenient but can result in a dilemma for the consumer. This paper presents tools designed to help the consumer overcome the time-consuming image selection process, turning the chore of selecting images for prints, or placing them automatically into a photo book, into a fun experience.

  9. Automatic pterygium detection on cornea images to enhance computer-aided cortical cataract grading system.

    Science.gov (United States)

    Gao, Xinting; Wong, Damon Wing Kee; Aryaputera, Aloysius Wishnu; Sun, Ying; Cheng, Ching-Yu; Cheung, Carol; Wong, Tien Yin

    2012-01-01

In this paper, we present a new method to detect pterygiums using cornea images. Due to the similarity in appearance and spatial location between pterygiums and cortical cataracts, pterygiums are often falsely detected as cortical cataracts on retroillumination images by a computer-aided grading system. The proposed method can be used to filter out pterygiums, which improves the accuracy of a cortical cataract grading system. This work has three major contributions. First, we propose a new pupil segmentation method for visible wavelength images. Second, an automatic detection method for pterygiums is proposed. Third, we develop an enhanced computer-aided cortical cataract grading system that excludes pterygiums. The proposed method is tested using clinical data, and the experimental results demonstrate that it can improve the existing automatic cortical cataract grading system.

  10. Investigation of Ballistic Evidence through an Automatic Image Analysis and Identification System.

    Science.gov (United States)

    Kara, Ilker

    2016-05-01

    Automated firearms identification (AFI) systems contribute to shedding light on criminal events by comparison between different pieces of evidence on cartridge cases and bullets and by matching similar ones that were fired from the same firearm. Ballistic evidence can be rapidly analyzed and classified by means of an automatic image analysis and identification system. In addition, it can be used to narrow the range of possible matching evidence. In this study conducted on the cartridges ejected from the examined pistol, three imaging areas, namely the firing pin impression, capsule traces, and the intersection of these traces, were compared automatically using the image analysis and identification system through the correlation ranking method to determine the numeric values that indicate the significance of the similarities. These numerical features that signify the similarities and differences between pistol makes and models can be used in groupings to make a distinction between makes and models of pistols.
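Correlation ranking of this kind can be illustrated by scoring a questioned image against each database image with a normalized correlation coefficient and sorting by score. This sketch assumes pre-aligned, equally sized grayscale images and is not the algorithm of any commercial AFI system.

```python
import numpy as np

def ncc(a, b):
    """Pearson correlation between two equally sized grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def rank_matches(query, database):
    """Rank database images by correlation with the query, best match first."""
    scores = [(name, ncc(query, img)) for name, img in database.items()]
    return sorted(scores, key=lambda t: -t[1])
```

In practice the three imaging areas mentioned above (firing pin impression, capsule traces, their intersection) would each be cropped, aligned, and ranked separately, and the per-area scores combined.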

  11. Automatic Open Space Area Extraction and Change Detection from High Resolution Urban Satellite Images

    CERN Document Server

    Kodge, B G

    2011-01-01

In this paper, we study an efficient and reliable automatic extraction algorithm for finding open space areas in high resolution urban satellite imagery, and for detecting changes in the extracted open space area over the periods 2003, 2006 and 2008. This automatic extraction and change detection algorithm applies filters, segmentation and grouping to the satellite images. The resultant images may be used to calculate the total available open space area and the built-up area. They may also be used to compare present and past open space areas using historical urban satellite images of the same projection, which is an important geospatial data management application.

  12. Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    Science.gov (United States)

    Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2017-02-01

    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting from automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations creating an orientation space. Gaussian filtering along the orientation axis in this space is used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as pixel weight in normalized convolution to regularize the semblance filter response after which a new orientation estimate can be obtained. Finally, after several iterations an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows a good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows a good agreement with the automatically obtained streamlines obtained by fiber tracking.
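Local orientation estimation of the kind the semblance filter performs can also be illustrated with the simpler structure tensor, which reads the dominant orientation off the image gradients. This is a stand-in for the paper's orientation-space approach, shown here for a single whole-image window rather than a per-pixel field.

```python
import numpy as np

def dominant_orientation(img):
    """Dominant linear-structure orientation, in radians in [-pi/2, pi/2)."""
    gy, gx = np.gradient(img.astype(float))        # derivatives along rows, cols
    jxx = (gx * gx).mean()
    jxy = (gx * gy).mean()
    jyy = (gy * gy).mean()
    # Direction of maximal gradient variation from the averaged structure tensor.
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    # Linear structures (stripes, fiber bundles) run perpendicular to the gradient.
    orient = theta + np.pi / 2
    return ((orient + np.pi / 2) % np.pi) - np.pi / 2
```

A per-pixel orientation field, like the one regularized by normalized convolution in the paper, would apply the same computation inside a sliding Gaussian window instead of over the whole image.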

  13. A fully automatic image-to-world registration method for image-guided procedure with intraoperative imaging updates

    Science.gov (United States)

    Li, Senhu; Sarment, David

    2016-03-01

Image-guided procedures with intraoperative imaging updates have made a big impact on minimally invasive surgery. A compact and mobile CT imaging device combined with a current commercially available image-guided navigation system is a legitimate and cost-efficient solution for a typical operating room setup. However, the process of manual fiducial-based registration between image and physical spaces (image-to-world) is troublesome for surgeons during the procedure; it causes frequent interruptions and is the main source of registration errors. In this study, we developed a novel method to eliminate the manual registration process. Instead of using a probe to manually localize the fiducials during surgery, a tracking plate with known fiducial positions relative to the reference coordinates is designed and fabricated through 3D printing. The workflow and feasibility of this method have been studied through a phantom experiment.

  14. Automatic segmentation method of striatum regions in quantitative susceptibility mapping images

    Science.gov (United States)

    Murakawa, Saki; Uchiyama, Yoshikazu; Hirai, Toshinori

    2015-03-01

Abnormal accumulation of brain iron has been detected in various neurodegenerative diseases. Quantitative susceptibility mapping (QSM) is a novel contrast mechanism in magnetic resonance (MR) imaging and enables quantitative analysis of local tissue susceptibility. Therefore, automatic segmentation tools for brain regions on QSM images would be helpful for radiologists' quantitative analysis of various neurodegenerative diseases. The purpose of this study was to develop an automatic segmentation and classification method for striatum regions on QSM images. Our image database consisted of 22 QSM images obtained from healthy volunteers. These images were acquired on a 3.0 T MR scanner. The voxel size was 0.9×0.9×2 mm. The matrix size of each slice image was 256×256 pixels. In our computerized method, a template matching technique was first used for the detection of slice images containing striatum regions. An image registration technique was subsequently employed for the classification of striatum regions, taking anatomical knowledge into consideration. After the image registration, the voxels in the target image corresponding to striatum regions in the reference image were classified into three striatum regions, i.e., head of the caudate nucleus, putamen, and globus pallidus. The experimental results indicated that 100% (21/21) of the slice images containing striatum regions were detected accurately. Subjective evaluation of the classification results indicated that 20 of 21 (95.2%) showed good or adequate quality. Our computerized method should be useful for the quantitative analysis of Parkinson's disease in QSM images.
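Template matching of the kind used for slice detection can be sketched with a sliding-window normalized cross-correlation. This is a generic illustration (exhaustive search with a single template), not the paper's exact detector.

```python
import numpy as np

def match_template(img, tpl):
    """Return ((row, col), score) of the best normalized cross-correlation match."""
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    tn = np.sqrt((t * t).sum()) + 1e-12
    best, pos = -2.0, (0, 0)
    for i in range(img.shape[0] - th + 1):
        for j in range(img.shape[1] - tw + 1):
            w = img[i:i + th, j:j + tw]
            wc = w - w.mean()
            wn = np.sqrt((wc * wc).sum()) + 1e-12
            s = float((wc * t).sum() / (wn * tn))
            if s > best:
                best, pos = s, (i, j)
    return pos, best
```

For slice detection, each slice would be scored against a striatum template and the slice with the maximum correlation retained; the score is in [-1, 1] and invariant to local brightness and contrast.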

  15. Strategies to Automatically Derive a Process Model from a Configurable Process Model Based on Event Data

    Directory of Open Access Journals (Sweden)

    Mauricio Arriagada-Benítez

    2017-10-01

Full Text Available Configurable process models are frequently used to represent business workflows and other discrete event systems among different branches of large organizations: they unify commonalities shared by all branches and, at the same time, describe their differences. The configuration of such models is usually done manually, which is challenging. On the one hand, when the number of configurable nodes in the configurable process model grows, the size of the search space increases exponentially. On the other hand, the person performing the configuration may lack the holistic perspective to make the right choice for all configurable nodes at the same time, since choices influence each other. Nowadays, information systems that support the execution of business processes create event data reflecting how processes are performed. In this article, we propose three strategies (based on exhaustive search, genetic algorithms and a greedy heuristic) that use event data to automatically derive, from a configurable process model, a process model that better represents the characteristics of the process in a specific branch. These strategies have been implemented in our proposed framework and tested on both business-like event logs, as recorded in a higher-education enterprise resource planning system, and a real case scenario involving a set of Dutch municipalities.
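The greedy strategy among the three can be sketched as fixing one configurable node at a time and keeping the locally best option under a fitness score computed against the event log. The node names, option sets, and scoring function below are hypothetical stand-ins, not the paper's framework.

```python
def greedy_configure(nodes, options, score):
    """Fix configurable nodes one at a time, greedily maximizing the partial
    configuration's fitness score (e.g., replay fitness against an event log)."""
    config = {}
    for node in nodes:
        config[node] = max(options[node],
                           key=lambda opt: score({**config, node: opt}))
    return config
```

This avoids the exponential search space at the cost of optimality when node choices interact, which is exactly the trade-off the exhaustive-search and genetic-algorithm strategies address differently.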

  16. Automatic detection of spiculation of pulmonary nodules in computed tomography images

    DEFF Research Database (Denmark)

    Ciompi, F; Jacobs, C; Scholten, E.T.

    2015-01-01

We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose Computed Tomography (CT) images. Spiculation is considered one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, a lung cancer screening scenario would benefit from the presence of a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules mainly differ from non-spiculated ones in their morphology. In order to discriminate [...] to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D descriptors of morphology in the automatic assessment of spiculation. © 2015

  17. Computer image processing: Geologic applications

    Science.gov (United States)

    Abrams, M. J.

    1978-01-01

Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
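Dark object subtraction followed by band ratioing can be sketched in a few lines. This is a textbook illustration of the two techniques named above (the darkest scene value is assumed to approximate additive path radiance), not the study's exact processing chain.

```python
import numpy as np

def dark_object_subtract(band):
    """Remove estimated atmospheric path radiance: subtract the scene's darkest value,
    which is assumed to come from a zero-reflectance (dark) object."""
    return band.astype(float) - band.min()

def band_ratio(b1, b2, eps=1e-6):
    """Ratio image of two haze-corrected bands; suppresses illumination (slope,
    shadow) effects so that spectral/mineralogical differences stand out."""
    return dark_object_subtract(b1) / (dark_object_subtract(b2) + eps)
```

The ratio works because illumination multiplies both bands by the same factor, which cancels in the quotient, while the additive haze term would not cancel and is therefore removed first.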

  18. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  19. Fragmentation measurement using image processing

    Directory of Open Access Journals (Sweden)

    Farhang Sereshki

    2016-12-01

Full Text Available In this research, first of all, the existing problems in fragmentation measurement are reviewed for the sake of fast and reliable evaluation. Then, the available methods used for evaluating blast results are mentioned. The errors produced, especially in recognizing rock fragments in computer-aided methods, and the importance of determining fragment sizes in image analysis methods, are described. After reviewing the previous work done, an algorithm is proposed for the automated determination of rock particle boundaries in the Matlab software. This method can determine the particle boundaries automatically in minimal time. The results of the proposed method are compared with those of the Split Desktop and GoldSize software in both automated and manual modes. Comparing the curves extracted with the different methods reveals that the proposed approach is accurately applicable to measuring the size distribution of laboratory samples, while the manual determination of boundaries in the conventional software is very time-consuming, and the results of automated netting of fragments differ greatly from the real values due to errors in separating the objects.
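Once particle boundaries are determined, a size distribution can be read off a binary fragment mask by labeling connected components and collecting their areas. This sketch uses a simple 4-connected flood fill; it is a generic illustration, not the boundary algorithm of the proposed method or of the packages compared above.

```python
import numpy as np
from collections import deque

def fragment_sizes(mask):
    """Label 4-connected foreground components; return (label image, sorted areas)."""
    lab = np.zeros(mask.shape, int)
    sizes, cur = [], 0
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and lab[si, sj] == 0:
                cur += 1                      # start a new fragment label
                q = deque([(si, sj)])
                lab[si, sj] = cur
                n = 0
                while q:                      # breadth-first flood fill
                    i, j = q.popleft()
                    n += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = i + di, j + dj
                        if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                                and mask[a, b] and lab[a, b] == 0):
                            lab[a, b] = cur
                            q.append((a, b))
                sizes.append(n)
    return lab, sorted(sizes)
```

The sorted areas (converted to equivalent diameters via the image scale) give the cumulative size-distribution curve that the compared packages output.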

  20. GIPSY : Groningen Image Processing System

    NARCIS (Netherlands)

    Allen, R. J.; Ekers, R. D.; Terlouw, J. P.; Vogelaar, M. G. R.

    2011-01-01

GIPSY is an acronym of Groningen Image Processing SYstem. It is a highly interactive software system for the reduction and display of astronomical data. It supports multi-tasking using a versatile user interface, it has an advanced data structure, a powerful script language and good display facilities.

  1. Concept Learning through Image Processing.

    Science.gov (United States)

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  2. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to both students and faculty. (Contains 2 tables and 11 figures.)

  3. On Processing Hexagonally Sampled Images

    Science.gov (United States)

    2011-07-01

[Garbled table of arithmetic definitions (addition, negation, subtraction, scalar multiplication) omitted.] ...coordinate system for addressing a hexagonal grid that provides support for efficient image processing. Efficient ASA methods were shown for gradient computation.

  4. Automatic gallbladder and gallstone regions segmentation in ultrasound image.

    Science.gov (United States)

    Lian, Jing; Ma, Yide; Ma, Yurun; Shi, Bin; Liu, Jizhao; Yang, Zhen; Guo, Yanan

    2017-04-01

As gallbladder diseases including gallstones and cholecystitis are mainly diagnosed by ultrasonographic examination, we propose a novel method to segment the gallbladder and gallstones in ultrasound images. The method is divided into five steps. Firstly, a modified Otsu algorithm is combined with anisotropic diffusion to reduce speckle noise and enhance image contrast; the Otsu algorithm distinctly separates the weak edge regions from the central region of the gallbladder. Secondly, a global morphology filtering algorithm is adopted to acquire the fine gallbladder region. Thirdly, a parameter-adaptive pulse-coupled neural network (PA-PCNN) is employed to obtain the high-intensity regions including gallstones. Fourthly, a modified region-growing algorithm is used to eliminate physicians' labeled regions and avoid over-segmentation of gallstones; it also has good self-adaptability within the growth cycle in light of the specified growing and terminating conditions. Fifthly, smooth contours of the detected gallbladder and gallstones are obtained by locally weighted regression smoothing (LOESS). We test the proposed method on clinical data from Gansu Provincial Hospital of China and obtain encouraging results. For the gallbladder and gallstones, the average similarity of contours (EVA), comprising Dice's similarity, overlap fraction and overlap value, is 86.01 and 79.81%, respectively; position error is 1.7675 and 0.5414 mm, respectively; runtime is 4.2211 and 0.6603 s, respectively. Our method thus achieves competitive performance compared with state-of-the-art methods and has the potential to assist physicians in diagnosing gallbladder disease rapidly and effectively.
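The first step above relies on Otsu thresholding. As a rough illustration only (the paper uses a modified Otsu combined with anisotropic diffusion, neither of which is reproduced here), a plain-numpy Otsu on a synthetic bimodal "ultrasound-like" image might look like:

```python
import numpy as np

def otsu_threshold(img):
    """Return the Otsu threshold of an 8-bit image by maximizing
    the between-class variance over all candidate thresholds."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability
    mu = np.cumsum(prob * np.arange(256))   # cumulative mean
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf              # guard division by zero
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b2))

# synthetic bimodal image: dark "lumen" pixels on brighter "tissue"
rng = np.random.default_rng(0)
img = np.clip(np.where(rng.random((64, 64)) < 0.3,
                       rng.normal(60, 10, (64, 64)),
                       rng.normal(180, 10, (64, 64))), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t
```

The threshold lands between the two intensity modes, splitting the image into the two classes the subsequent morphology step would refine.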

  5. Estimating babassu palm density using automatic palm tree detection with very high spatial resolution satellite images.

    Science.gov (United States)

    Dos Santos, Alessio Moreira; Mitja, Danielle; Delaître, Eric; Demagistri, Laurent; de Souza Miranda, Izildinha; Libourel, Thérèse; Petit, Michel

    2017-05-15

High spatial resolution images as well as image processing and object detection algorithms are recent technologies that aid the study of biodiversity and commercial plantations of forest species. This paper seeks to contribute knowledge regarding the use of these technologies by studying randomly dispersed native palm trees. Here, we analyze the automatic detection of large circular crown (LCC) palm trees using a high spatial resolution panchromatic GeoEye image (0.50 m) taken over a community of small agricultural farms in the Brazilian Amazon. We also propose auxiliary methods to estimate the density of the LCC palm tree Attalea speciosa (babassu) based on the detection results. We used the "Compt-palm" algorithm, based on the detection of palm tree shadows in open areas via mathematical morphology techniques, and validated the spatial information using field methods (i.e. structural census and georeferencing). The algorithm recognized individuals in life stages 5 and 6, and the extraction percentage, branching factor and quality percentage were used to evaluate its performance. A principal components analysis showed that the structure of the studied species differs from that of other species. Approximately 96% of the stage 6 babassu individuals were detected; these had significantly smaller stipes than the undetected ones. In turn, 60% of the stage 5 babassu individuals were detected, showing a significantly different total height and number of leaves from the undetected ones. Our calculations regarding resource availability indicate that 6870 ha contained 25,015 adult babassu palm trees, with an annual potential productivity of 27.4 t of almond oil. The detection of LCC palm trees and the implementation of auxiliary field methods to estimate babassu density are an important first step toward large-scale monitoring of this industry resource that is extremely important to the Brazilian economy and thousands of families.

  6. Methodology and Implications of Reconstruction and Automatic Processing of Natural Language of the Classroom.

    Science.gov (United States)

    Marlin, Marjorie; Barron, Nancy

    This paper discusses in some detail the procedural areas of reconstruction and automatic processing used by the Classroom Interaction Project of the University of Missouri's Center for Research in Social Behavior in the analysis of classroom language. First discussed is the process of reconstruction, here defined as the "process of adding to…

  7. Massively parallel processing of remotely sensed hyperspectral images

    Science.gov (United States)

    Plaza, Javier; Plaza, Antonio; Valencia, David; Paz, Abel

    2009-08-01

    In this paper, we develop several parallel techniques for hyperspectral image processing that have been specifically designed to be run on massively parallel systems. The techniques developed cover the three relevant areas of hyperspectral image processing: 1) spectral mixture analysis, a popular approach to characterize mixed pixels in hyperspectral data addressed in this work via efficient implementation of a morphological algorithm for automatic identification of pure spectral signatures or endmembers from the input data; 2) supervised classification of hyperspectral data using multi-layer perceptron neural networks with back-propagation learning; and 3) automatic target detection in the hyperspectral data using orthogonal subspace projection concepts. The scalability of the proposed parallel techniques is investigated using Barcelona Supercomputing Center's MareNostrum facility, one of the most powerful supercomputers in Europe.

  8. Automatic generation of optimal business processes from business rules

    NARCIS (Netherlands)

    Steen, B.; Ferreira Pires, Luis; Iacob, Maria Eugenia

    2010-01-01

    In recent years, business process models are increasingly being used as a means for business process improvement. Business rules can be seen as requirements for business processes, in that they describe the constraints that must hold for business processes that implement these business rules.

  9. A method of automatic image segmentation

    Institute of Scientific and Technical Information of China (English)

    王晓明; 熊九龙; 王志虎; 祝夏雨; 张玘

    2013-01-01

In view of the deficiency that traditional image segmentation algorithms require parameter settings, an automatic image segmentation algorithm is proposed in this paper. It applies two stages, coarse segmentation based on an improved visual attention mechanism and precise segmentation combining an active contour model with region growing, to segment images automatically. Experiments show that the proposed algorithm outperforms the adaptive threshold algorithm and the K-means clustering algorithm in image segmentation, and that its robustness is strong.
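For context, the K-means baseline this record compares against can be sketched in a few lines of numpy. This is a toy gray-level clustering, not the paper's algorithm; the quantile initialization is my own choice to keep the demo deterministic.

```python
import numpy as np

def kmeans_gray(img, k=2, iters=20):
    """Toy k-means on gray levels; returns a per-pixel label image."""
    x = img.reshape(-1, 1).astype(float)
    # deterministic init: spread centers over the intensity range
    centers = np.quantile(x, np.linspace(0.0, 1.0, k)).reshape(k, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(x - centers.T), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(img.shape)

img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200            # bright square on a dark background
labels = kmeans_gray(img, k=2)
```

On this toy image the foreground square and the background receive distinct cluster labels, which is exactly the segmentation such a baseline produces.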

  10. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    Science.gov (United States)

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-03-25

Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, which is combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Image mining and Automatic Feature extraction from Remotely Sensed Image (RSI using Cubical Distance Methods

    Directory of Open Access Journals (Sweden)

    S.Sasikala

    2013-04-01

Full Text Available Information processing and decision support systems using image mining techniques are advancing rapidly with the huge availability of remote sensing images (RSI). RSI describes inherent properties of objects by recording their natural reflectance in the electro-magnetic spectral (ems) region. Information on such objects can be gathered from their color properties or their spectral values in various ems ranges in the form of pixels. The present paper explains a method of such information extraction using the cubical distance method, and presents the subsequent results. This method is one of the simpler approaches: it groups pixels on the basis of equal distance from a specified point in the image, or from a selected pixel having definite attribute values (DN) in different spectral layers of the RSI. The color distance and the occurring pixel distance play a vital role in determining similar objects as clusters, aiding the extraction of features in the RSI domain.
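The "cubical distance" grouping described above — pixels whose DN values across spectral layers lie within a bounded distance of a selected pixel's values — corresponds to a Chebyshev (chessboard) distance in spectral space. A minimal sketch under that reading, with band values and threshold invented purely for the demo:

```python
import numpy as np

def cube_distance_cluster(image, ref_pixel, d):
    """Group pixels whose spectral vector lies within Chebyshev
    (cube) distance d of a reference pixel's spectrum.
    image: (rows, cols, bands) array of DN values."""
    ref = image[ref_pixel]                                # (bands,)
    cheb = np.abs(image.astype(int) - ref.astype(int)).max(axis=2)
    return cheb <= d                                      # membership mask

# toy 3-band "RSI": a feature with DN (120, 80, 40) on a (30, 30, 30) background
rsi = np.full((10, 10, 3), 30, dtype=np.uint8)
rsi[2:5, 2:5] = (120, 80, 40)
mask = cube_distance_cluster(rsi, (3, 3), d=10)
```

All nine pixels sharing the reference spectrum fall inside the cube, while the background is excluded — the per-cluster grouping the record describes.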

  12. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem - analyzing planetary nebulae images taken by the Hubble Space Telescope - and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  13. Automatic detection of the intima-media thickness in ultrasound images of the common carotid artery using neural networks.

    Science.gov (United States)

    Menchón-Lara, Rosa-María; Bastida-Jumilla, María-Consuelo; Morales-Sánchez, Juan; Sancho-Gómez, José-Luis

    2014-02-01

Atherosclerosis is the leading underlying pathologic process behind cardiovascular diseases, which represent the main cause of death and disability in the world. The atherosclerotic process is a complex degenerative condition mainly affecting the medium- and large-size arteries, which begins in childhood and may remain unnoticed for decades. The intima-media thickness (IMT) of the common carotid artery (CCA) has emerged as one of the most powerful tools for the evaluation of preclinical atherosclerosis. IMT is measured by means of B-mode ultrasound imaging, which is a non-invasive and relatively low-cost technique. This paper proposes an effective image segmentation method for measuring the IMT automatically. To this end, segmentation is posed as a pattern recognition problem, and a combination of artificial neural networks has been trained to solve this task. In particular, multi-layer perceptrons trained with the scaled conjugate gradient algorithm have been used. The suggested approach is tested on a set of 60 longitudinal ultrasound images of the CCA by comparing the automatic segmentation with four manual tracings. Moreover, the intra- and inter-observer errors have also been assessed. Despite the simplicity of our approach, several quantitative statistical evaluations have shown its accuracy and robustness.

  14. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety.

    Science.gov (United States)

    Morelli, Sylvia A; Lieberman, Matthew D

    2013-01-01

Although many studies have examined the neural basis of empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. Thirty-two participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, actively empathizing, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; and septal area, SA). Two key regions-the ventral AI and SA-were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching vs. empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others' emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy and social cognition (DMPFC, MPFC, TPJ, and amygdala). The results reveal how attention impacts empathic processes and provide insight into how empathy may unfold in everyday interactions.

  15. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety

    Directory of Open Access Journals (Sweden)

    Sylvia A. Morelli

    2013-05-01

Full Text Available Although many studies have examined the neural basis of experiencing empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. 32 participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, while instructed to empathize, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; septal area, SA. Two key regions – the ventral AI and SA – were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching versus empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others' emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy (DMPFC, MPFC, TPJ, amygdala) and social cognition. The current results reveal how attention impacts empathic processes and provide insight into how empathy may unfold in everyday interactions.

  16. Testing interactive effects of automatic and conflict control processes during response inhibition - A system neurophysiological study.

    Science.gov (United States)

    Chmielewski, Witold X; Beste, Christian

    2017-02-01

In everyday life, successful acting often requires inhibiting automatic responses that might not be appropriate in the current situation. These response inhibition processes have been shown to become aggravated with increasing automaticity of pre-potent response tendencies. Likewise, it has been shown that inhibitory processes are complicated by concurrent engagement in additional cognitive control processes (e.g. conflict monitoring). Therefore, opposing processes (i.e. automaticity and cognitive control) seem to strongly impact response inhibition. However, possible interactive effects of automaticity and cognitive control on the modulation of response inhibition processes have not yet been examined. In the current study we examine this question using a novel experimental paradigm combining a Go/NoGo with a Simon task, in a system-neurophysiological approach that combines EEG recordings with source localization analyses. The results show that response inhibition is less accurate in non-conflicting than in conflicting stimulus-response mappings. Thus it seems that conflicts, and the resulting engagement in conflict monitoring processes as reflected in the N2 amplitude, may foster response inhibition processes. This engagement in conflict monitoring processes leads to an increase in cognitive control, as reflected by increased activity in the anterior and posterior cingulate areas, while the automaticity of response tendencies is simultaneously decreased. Most importantly, this study suggests that the quality of conflict processing in anterior cingulate areas, and especially the resulting interaction of cognitive control and the automaticity of pre-potent response tendencies, are important factors to consider when it comes to the modulation of response inhibition processes.

  17. Development of an Automatic Program to Analyze Sunspot Groups on White Light Images using OpenCV

    Science.gov (United States)

    Park, J.; Moon, Y.; Choi, S.

    2011-12-01

Sunspots usually appear in groups, which can be classified by certain morphological criteria. In this study we examine the moments, statistical parameters computed by summing over every pixel of a contour, for quantifying the morphological characteristics of a sunspot group. The moments can provide additional characteristics for sunspot group classification schemes such as the McIntosh classification. We are developing a program for image processing, detection of contours and computation of the moments using white light full disk images from Big Bear Solar Observatory. We apply the program to count the sunspot number from 530 white light images in 2003. The sunspot numbers obtained by the program are compared with those from SIDC. The comparison shows that they have a good correlation (r=84%). We are extending this application to automatic sunspot classification (e.g., McIntosh classification) and flare forecasting.
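The moments referred to above are the standard raw image moments M_pq; from M00, M10 and M01 one obtains a region's area and centroid. A minimal numpy sketch on a toy binary "sunspot" mask (not the group's OpenCV code):

```python
import numpy as np

def raw_moment(mask, p, q):
    """Raw image moment M_pq = sum over region pixels of x^p * y^q."""
    ys, xs = np.nonzero(mask)
    return np.sum((xs ** p) * (ys ** q))

def centroid(mask):
    """Region centroid (x_bar, y_bar) = (M10/M00, M01/M00)."""
    m00 = raw_moment(mask, 0, 0)
    return raw_moment(mask, 1, 0) / m00, raw_moment(mask, 0, 1) / m00

# toy rectangular "sunspot" blob on a white-light frame
spot = np.zeros((20, 20), dtype=bool)
spot[5:10, 8:14] = True
cx, cy = centroid(spot)
```

Higher-order and central moments, built the same way, are what quantify a group's shape for classification.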

  18. Evaluation of two software tools dedicated to an automatic analysis of the CT scanner image spatial resolution.

    Science.gov (United States)

    Torfeh, Tarraf; Beaumont, Stéphane; Guédon, Jean Pierre; Denis, Eloïse

    2007-01-01

An evaluation of two software tools dedicated to automatic analysis of CT scanner image spatial resolution is presented in this paper. The methods evaluated consist of calculating the Modulation Transfer Function (MTF) of the CT scanners; the first uses an image of an impulse source, while the second, proposed by Droege and Morin, uses an image of cyclic bar patterns. Two Digital Test Objects (DTO) are created for this purpose. These DTOs are then blurred by convolution with a two-dimensional Gaussian Point Spread Function (PSF(Ref)) that has a well-known Full Width at Half Maximum (FWHM). The evaluation process then consists of comparing the Fourier transform of the reference PSF with the MTFs produced by the two methods.
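The relationship this evaluation exploits — a Gaussian PSF with known FWHM has an analytically known MTF — can be sketched in 1-D (an illustration, not either evaluated tool): the MTF is the magnitude of the Fourier transform of the PSF normalized to 1 at zero frequency, and for a Gaussian it equals exp(-2*pi^2*sigma^2*f^2) with sigma = FWHM / (2*sqrt(2*ln 2)).

```python
import numpy as np

def gaussian_psf_1d(n, fwhm, dx=1.0):
    """Sampled, normalized 1-D Gaussian PSF with the given FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = (np.arange(n) - n // 2) * dx
    psf = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return psf / psf.sum()

def mtf_from_psf(psf):
    """MTF = |FFT(PSF)|, normalized to 1 at zero frequency."""
    m = np.abs(np.fft.fft(psf))
    return m / m[0]

psf = gaussian_psf_1d(256, fwhm=4.0)
mtf = mtf_from_psf(psf)
freqs = np.fft.fftfreq(256, d=1.0)   # cycles per sample
```

Comparing `mtf` against the closed-form Gaussian MTF at each frequency is exactly the kind of reference check the paper's evaluation performs in 2-D.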

  19. Semi-automatic segmentation of vertebral bodies in volumetric MR images using a statistical shape+pose model

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2014-03-01

Segmentation of vertebral structures in magnetic resonance (MR) images is challenging because of poor contrast between bone surfaces and surrounding soft tissue. This paper describes a semi-automatic method for segmenting vertebral bodies in multi-slice MR images. In order to achieve a fast and reliable segmentation, the method takes advantage of the correlation between shape and pose of different vertebrae in the same patient by using a statistical multi-vertebrae anatomical shape+pose model. Given a set of MR images of the spine, we initially reduce the intensity inhomogeneity in the images by using an intensity-correction algorithm. Then a 3D anisotropic diffusion filter smooths the images. Afterwards, we extract edges from a relatively small region of the pre-processed image with a simple user interaction. Subsequently, an iterative Expectation Maximization technique is used to register the statistical multi-vertebrae anatomical model to the extracted edge points in order to achieve a fast and reliable segmentation for lumbar vertebral bodies. We evaluate our method in terms of speed and accuracy by applying it to volumetric MR images of the spine acquired from nine patients. Quantitative and visual results demonstrate that the method is promising for segmentation of vertebral bodies in volumetric MR images.

  20. Image processing with ImageJ

    NARCIS (Netherlands)

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). Image

  1. Automatic Conveyor System with In-Process Sorting Mechanism using PLC and HMI System

    Directory of Open Access Journals (Sweden)

    Y V Aruna

    2015-11-01

Full Text Available Programmable logic controllers are widely used in many manufacturing processes, such as machinery, packaging, material handling and automatic assembly. They are a special type of microprocessor-based controller used for any application that needs electrical control, including lighting control and HVAC control systems. An automatic conveyor system is a computerized method of controlling and managing the sorting mechanism while maintaining the efficiency of the industry and the quality of the products. The HMI for the automatic conveyor system is considered the primary way of controlling each operation; text displays are available as well as graphical touch screens, which are used in touch panels and for local monitoring of machines. This paper deals with the efficient use of a PLC in an automatic conveyor system and also with building accuracy into it.

  2. Research on automatic loading & unloading technology for vertical hot ring rolling process

    Directory of Open Access Journals (Sweden)

    Xiaokai Wang

    2015-01-01

Full Text Available Automatic loading & unloading technology is the key to an automatic ring production line. In this paper, the automatic vertical hot ring rolling (VHRR) process is taken as the target: a loading & unloading method for VHRR is proposed, and the mechanical structure of the loading & unloading system is designed. A virtual prototype model of the VHRR mill and the loading & unloading mechanism is established, the coordinated control method of the VHRR mill and the loading & unloading auxiliaries is studied, and the movement traces and dynamic characteristics of the critical components are obtained. Finally, a series of hot ring rolling tests are conducted on the VHRR mill, and the production rhythm and the geometric precision of the formed rings are analysed. The test results show that the loading & unloading technology can meet the requirements of high-quality, high-efficiency ring production. The research conclusions have practical significance for large-scale automatic ring production.

  3. A mobile medical QR-code authentication system and its automatic FICE image evaluation application

    Directory of Open Access Journals (Sweden)

    Yi-Ying Chang

    2015-04-01

Full Text Available This paper presents an adaptive imaging technique, run on a mobile service system, for endoscopic image enhancement using a color transform and Gray Level Co-occurrence Matrices (GLCM) on a single input endoscopy image. The method simply deals with combining the color channels, choosing the maximum scalar values of the red, green and blue channel images, respectively. The GLCM is subsequently applied to select the highest-contrast and highest-entropy images from the expanded image series. The enhanced endoscopy image is generated by fusing the color, contrast and entropy images. We also propose a service system with a medical image retrieval application using quick response (QR) code authentication, based on the Android operating system, which helps clinicians conveniently use a mobile phone to review patient images cost-efficiently. As mobile technologies are growing rapidly, the mobile service system is installed to connect to a Picture Archiving and Communication System (PACS) in the hospital and applied to automatic evaluation of colon screening images. The experimental results show the proposed system is efficient for observing gastrointestinal tract polyps. The performance is evaluated and compared with the Fujinon Intelligent Chromo Endoscopy (FICE) enhancement method.
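The GLCM-based contrast and entropy selection can be illustrated with a compact numpy sketch. This computes a single-offset GLCM on a toy 4-level image; the paper's exact offsets and quantization are not specified here, so the parameters below are illustrative.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability table."""
    dr, dc = offset
    rows, cols = img.shape
    a = img[0:rows - dr, 0:cols - dc]     # reference pixels
    b = img[dr:rows, dc:cols]             # neighbor pixels
    mat = np.zeros((levels, levels), dtype=float)
    np.add.at(mat, (a.ravel(), b.ravel()), 1.0)
    return mat / mat.sum()

def glcm_contrast(p):
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)

def glcm_entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
contrast = glcm_contrast(p)
entropy = glcm_entropy(p)
```

Ranking candidate images by these two scalars, as the record describes, then picks the images fused into the enhanced result.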

  4. Natural language processing and visualization in the molecular imaging domain.

    Science.gov (United States)

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol

    2007-06-01

Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI [.63-.76]), respectively. We adapt a Java viewer known as PGviewer for the simultaneous visualization of images with NLP-extracted information.

  5. An examination of the rapid automatized naming-reading relationship using functional magnetic resonance imaging.

    Science.gov (United States)

    Cummine, J; Chouinard, B; Szepesvari, E; Georgiou, G K

    2015-10-01

    Rapid automatized naming (RAN) has been established to be a strong predictor of reading. Yet, the neural correlates underlying the RAN-reading relationship remain unknown. Thus, the purpose of this study was to determine: (a) the extent to which RAN and reading activate similar brain regions (within subjects), (b) whether RAN and reading are directly related in the shared activity network outlined in (a), and (c) to what extent RAN neural activation predicts behavioral reading performance. Using functional magnetic resonance imaging (fMRI), university students (N=15; Mean age=20.6 years) were assessed on RAN (letters and digits) and single-word reading (words and non-words). The results revealed a common RAN-reading network that included regions associated with motor planning (cerebellum), semantic access (middle temporal gyrus), articulation (supplementary motor area, pre-motor), and grapheme-phoneme translation (supramarginal gyrus). We found differences between RAN and reading with respect to percent signal change (PSC) in phonological and orthographic regions, but not in articulatory regions. Significant correlations between the neural RAN and reading parameters were found primarily in motor/articulatory regions. Further, we found a unique relationship between in-scanner reading response time and RAN PSC in the left inferior frontal gyrus. Taken together, these findings support the notion that RAN and reading activate similar neural networks. However, the relationship between RAN and reading is primarily driven by commonalities in the motor-sequencing/articulatory processes.

  6. Applying deep learning technology to automatically identify metaphase chromosomes using scanning microscopic images: an initial investigation

    Science.gov (United States)

    Qiu, Yuchen; Lu, Xianglan; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Li, Shibo; Liu, Hong; Zheng, Bin

    2016-03-01

Automated high-throughput scanning microscopy is a fast-developing screening technology used in cytogenetic laboratories for the diagnosis of leukemia and other genetic diseases. However, one of the major challenges of using this new technology is how to efficiently detect the analyzable metaphase chromosomes during the scanning process. The purpose of this investigation is to develop a computer aided detection (CAD) scheme based on deep learning technology that can identify metaphase chromosomes with high accuracy. The CAD scheme is an eight-layer neural network. The first six layers form an automatic feature extraction module with an architecture of three convolution-max-pooling layer pairs; the 1st, 2nd and 3rd pairs contain 30, 20 and 20 feature maps, respectively. The seventh and eighth layers form a multi-layer perceptron (MLP) based classifier, which is used to identify the analyzable metaphase chromosomes. The performance of the new CAD scheme was assessed by the receiver operating characteristic (ROC) method. A total of 150 regions of interest (ROIs) were selected to test the performance of our new CAD scheme; each ROI contains either an interphase cell or metaphase chromosomes. The results indicate that the new scheme is able to achieve an area under the ROC curve (AUC) of 0.886+/-0.043. This investigation demonstrates that applying a deep learning technique may significantly improve the accuracy of metaphase chromosome detection using scanning microscopic imaging technology in the future.
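A single convolution-max-pooling pair of the kind the feature-extraction module stacks can be sketched in plain numpy. This is a toy one-kernel stage with ReLU, nothing like the trained 30/20/20-feature-map network; the edge-detecting kernel and input are invented for the demo.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """2-D 'valid' convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols are dropped."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # vertical intensity edge
edge_k = np.array([[-1.0, 1.0]])      # 1x2 horizontal-gradient kernel
fmap = np.maximum(conv2d_valid(img, edge_k), 0.0)   # ReLU activation
pooled = max_pool(fmap, 2)
```

The pooled map keeps a strong response only where the edge feature fired, which is the translation-tolerant summarization each conv-pool pair contributes before the MLP classifier.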

  7. Progress toward automatic classification of human brown adipose tissue using biomedical imaging

    Science.gov (United States)

    Gifford, Aliya; Towse, Theodore F.; Walker, Ronald C.; Avison, Malcom J.; Welch, E. B.

    2015-03-01

    Brown adipose tissue (BAT) is a small but significant tissue, which may play an important role in obesity and the pathogenesis of metabolic syndrome. Interest in studying BAT in adult humans is increasing, but in order to quantify BAT volume in a single measurement or to detect changes in BAT over the time course of a longitudinal experiment, BAT needs to first be reliably differentiated from surrounding tissue. Although the uptake of the radiotracer 18F-Fluorodeoxyglucose (18F-FDG) in adipose tissue on positron emission tomography (PET) scans following cold exposure is accepted as an indication of BAT, it is not a definitive indicator, and to date there exists no standardized method for segmenting BAT. Consequently, there is a strong need for robust automatic classification of BAT based on properties measured with biomedical imaging. In this study we begin the process of developing an automated segmentation method based on properties obtained from fat-water MRI and PET-CT scans acquired on ten healthy adult subjects.

  8. Role of Artificial Intelligence Techniques (Automatic Classifiers) in Molecular Imaging Modalities in Neurodegenerative Diseases.

    Science.gov (United States)

    Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara

    2017-01-01

    Artificial Intelligence (AI) is a very active Computer Science research field that aims to develop systems mimicking human intelligence, and it is helpful in many human activities, including Medicine. In this review we present some examples of the exploitation of AI techniques, in particular automatic classifiers such as the Artificial Neural Network (ANN), Support Vector Machine (SVM) and Classification Tree (ClT), and ensemble methods like Random Forest (RF), that are able to analyze findings obtained by positron emission tomography (PET) or single-photon emission computed tomography (SPECT) scans of patients with neurodegenerative diseases, in particular Alzheimer's Disease. We also focus our attention on techniques applied to preprocess data and reduce their dimensionality via feature selection or projection onto a more representative domain (Principal Component Analysis (PCA) and Partial Least Squares (PLS) are examples of such methods). This is a crucial step when dealing with medical data, since it is necessary to compress patient information and retain only what is most useful for discriminating subjects into normal and pathological classes. The main literature on the application of these techniques to classify patients with neurodegenerative disease from molecular imaging data is reported, showing that the increasing development of computer-aided diagnosis systems is very promising as a contribution to the diagnostic process.
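    The PCA-based dimensionality reduction step described above can be sketched in a few lines of numpy; the subject and feature counts here are made up for illustration, not taken from any of the reviewed studies.

    ```python
    import numpy as np

    def pca(X, n_components):
        """Project feature vectors onto the top principal components.
        X: (n_subjects, n_features) matrix of image-derived features."""
        Xc = X - X.mean(axis=0)                 # center each feature
        # principal directions via SVD of the centered data matrix
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T         # scores in the reduced space

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 100))              # e.g. 20 subjects, 100 voxel features
    Z = pca(X, 5)                               # compress to 5 components
    print(Z.shape)                              # (20, 5)
    ```

    The compressed scores `Z` would then be fed to a classifier (SVM, RF, etc.) in place of the raw voxel features.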

  9. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Directory of Open Access Journals (Sweden)

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration; this can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  10. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration; this can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  11. MaNIAC-UAV - a methodology for automatic pavement defects detection using images obtained by Unmanned Aerial Vehicles

    Science.gov (United States)

    Henrique Castelo Branco, Luiz; César Lima Segantine, Paulo

    2015-09-01

    Intelligent Transportation Systems (ITS) are a set of integrated technologies (remote sensing, image processing, communications systems and others) that aim to offer services and advanced traffic management for several transportation modes (road, air and rail). Collecting data on the characteristics and conditions of the road surface, and keeping that data up to date, is an important and difficult task that must be managed continuously in order to reduce accidents and vehicle maintenance costs. Nowadays many roads and highways are paved, but there is usually insufficient up-to-date data about their current condition and status. There are different types of pavement defects, and to keep roads in good condition they should be constantly monitored and maintained according to a pavement management strategy. This paper presents a methodology to obtain, automatically, information about the condition of highway asphalt pavement. Data collection was done through remote sensing using a UAV (Unmanned Aerial Vehicle), with image processing and pattern recognition techniques applied through a Geographic Information System.

  12. Automatic detection and morphological delineation of bacteriophages in electron microscopy images.

    Science.gov (United States)

    Gelzinis, A; Verikas, A; Vaiciukynas, E; Bacauskiene, M; Sulcius, S; Simoliunas, E; Staniulis, J; Paskauskas, R

    2015-09-01

    Automatic detection, recognition and geometric characterization of bacteriophages in electron microscopy images was the main objective of this work. A novel technique, combining phase congruency-based image enhancement, Hough transform-, Radon transform- and open active contours with free boundary conditions-based object detection was developed to detect and recognize the bacteriophages associated with infection and lysis of cyanobacteria Aphanizomenon flos-aquae. A random forest classifier designed to recognize phage capsids provided higher than 99% accuracy, while measurable phage tails were detected and associated with a correct capsid with 81.35% accuracy. Automatically derived morphometric measurements of phage capsids and tails exhibited lower variability than the ones obtained manually. The technique allows performing precise and accurate quantitative (e.g. abundance estimation) and qualitative (e.g. diversity and capsid size) measurements for studying the interactions between host population and different phages that infect the same host.
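    As a minimal sketch of one ingredient of such a pipeline, the Hough transform that detects line-like structures (e.g. phage tails): edge points vote in a discretized (rho, theta) space, and collinear points pile up in a single accumulator cell. The point set and bin sizes below are illustrative, not from the paper.

    ```python
    import numpy as np

    def hough_lines(points, n_theta=180, n_rho=100, rho_max=50):
        """Accumulate votes in (rho, theta) space for the lines
        rho = x*cos(theta) + y*sin(theta) passing through each point."""
        thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_rho, n_theta), dtype=int)
        for x, y in points:
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            ok = (idx >= 0) & (idx < n_rho)          # discard out-of-range rho
            acc[idx[ok], np.arange(n_theta)[ok]] += 1
        return acc, thetas

    # ten collinear points on the vertical line x = 10
    pts = [(10, y) for y in range(10)]
    acc, thetas = hough_lines(pts)
    # at theta = 0 every point has rho = 10, which maps to bin
    # round((10 + 50) / 100 * 99) = 59 -- all ten votes land there
    print(acc[59, 0])  # 10
    ```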

  13. Automatic mitral annulus tracking in volumetric ultrasound using non-rigid image registration.

    Science.gov (United States)

    De Veene, Henri; Bertrand, Philippe B; Popovic, Natasa; Vandervoort, Pieter M; Claus, Piet; De Beule, Matthieu; Heyde, Brecht

    2015-01-01

    Analysis of mitral annular dynamics plays an important role in the diagnosis and selection of optimal valve repair strategies, but remains cumbersome and time-consuming if performed manually. In this paper we propose non-rigid image registration to automatically track the annulus in 3D ultrasound images for both normal and pathological valves, and compare the performance against manual tracing. Relevant clinical properties such as annular area, circumference and excursion could be extracted reliably by the tracking algorithm. The root-mean-square error, calculated as the difference between the manually traced landmarks (18 in total) and the automatic tracking, was 1.96 ± 0.46 mm over 10 valves (5 healthy and 5 diseased) which is within the clinically acceptable error range.
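    The agreement metric quoted above is a root-mean-square distance between corresponding landmarks; a minimal numpy sketch (the landmark coordinates are made up, not the study's data):

    ```python
    import numpy as np

    def landmark_rmse(manual, auto):
        """RMSE between manually traced and automatically tracked annulus
        landmarks, both given as (n_landmarks, 3) arrays in mm."""
        d = np.linalg.norm(manual - auto, axis=1)   # per-landmark distance
        return np.sqrt((d ** 2).mean())

    manual = np.zeros((18, 3))                # 18 traced landmarks
    auto = np.full((18, 3), 1.0)              # each off by (1, 1, 1) mm
    print(landmark_rmse(manual, auto))        # sqrt(3), about 1.732 mm
    ```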

  14. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti

    2014-05-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operations in image processing: image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt-and-pepper (impulse) noise, additive white Gaussian noise, and blurring are the types of degradation that occur during transmission and capture. Several algorithms exist for denoising such images, and this paper compares them.
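    One classic denoiser for the salt-and-pepper (impulse) noise mentioned above is the median filter: each pixel is replaced by the median of its neighborhood, which discards isolated extreme values. A minimal numpy sketch (not from the paper, and using a naive double loop for clarity rather than speed):

    ```python
    import numpy as np

    def median_filter(img, k=3):
        """k x k median filter: effective against salt-and-pepper noise."""
        pad = k // 2
        padded = np.pad(img, pad, mode='edge')   # replicate border pixels
        out = np.empty_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = np.median(padded[i:i + k, j:j + k])
        return out

    img = np.full((5, 5), 100.0)
    img[2, 2] = 255.0                 # a single "salt" pixel
    print(median_filter(img)[2, 2])   # 100.0 -- the impulse is removed
    ```

    A mean filter would instead smear the impulse into its neighbors, which is why the median is the usual choice for this noise type.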

  15. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti Kumari

    2015-11-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operations in image processing: image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt-and-pepper (impulse) noise, additive white Gaussian noise, and blurring are the types of degradation that occur during transmission and capture. Several algorithms exist for denoising such images, and this paper compares them.

  16. Improving Seismic Image with Advanced Processing Techniques

    Directory of Open Access Journals (Sweden)

    Mericy Lastra Cunill

    2012-07-01

    Full Text Available Taking into account the need to improve the seismic image of the central area of Cuba, specifically the Venegas sector located in the Cuban Folded Belt, the seismic data acquired by Cuba Petróleo (CUPET) in 2007 were reprocessed according to the experience accumulated during the previous processing carried out that same year and the new geologic knowledge of the area, with the objective of improving the results. The previously applied processing was analyzed and the primary data were reprocessed with new approaches and procedures, among them: attenuation of the surface wave with a filter in the Radon domain in its linear variant, replacement of the primary elevation static corrections with refraction statics, velocity analysis with automatic high-density bispectral picking, an anisotropy study, random noise attenuation, and pre-stack time and depth migration. As a result of this reprocessing, a structure that had not been identified in the seismic sections of the previous processing was located at the top of a Continental Margin sediment to the north of the sector. This increases the potential for finding hydrocarbons in quantities of economic importance and thus diminishes the drilling risk in the Venegas sector.

  17. Automated vehicle counting using image processing and machine learning

    Science.gov (United States)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via a camera at the count site recording video of the traffic, with counting performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure would utilize a Raspberry Pi micro-computer to detect when a car is in a lane, and generate an accurate count of vehicle movements. The method utilized in this paper would use background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
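    The background-subtraction step can be sketched in a few lines of numpy: pixels that differ from a model of the empty road by more than a threshold are flagged as foreground, and a lane is declared occupied when enough foreground pixels appear. Everything below (frame sizes, thresholds, the synthetic "vehicle") is illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def foreground_mask(frame, background, thresh=30):
        """Flag pixels that differ from the background model by > thresh."""
        diff = np.abs(frame.astype(int) - background.astype(int))
        return diff > thresh

    def lane_occupied(mask, min_pixels=50):
        """A lane counts as occupied once enough foreground pixels appear."""
        return bool(mask.sum() >= min_pixels)

    background = np.zeros((40, 40), dtype=np.uint8)   # empty-road model
    frame = background.copy()
    frame[10:20, 10:20] = 200                         # a bright "vehicle" blob
    mask = foreground_mask(frame, background)
    print(lane_occupied(mask))  # True: 100 foreground pixels
    ```

    In a full counter, rising edges of the occupancy signal over time would be tallied, with a classifier filtering out non-vehicle detections.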

  18. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

    Science.gov (United States)

    Nam, Woo Hyun; Kang, Dong-Goo; Lee, Duhgoon; Lee, Jae Young; Ra, Jong Beom

    2012-01-01

    The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.
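    As a hedged illustration of the general idea, not the authors' modified Viterbi algorithm: a single dynamic-programming pass can pick the most likely candidate correspondence for every centerline point, trading a per-point match cost against a transition penalty for implausible jumps between candidates. All costs below are invented.

    ```python
    import numpy as np

    def viterbi(cost, trans):
        """Minimum-cost assignment of one candidate per step.
        cost[t, k]: cost of matching point t to candidate k.
        trans[k_prev, k]: penalty for moving between candidates.
        Returns the best candidate index for each step (one forward pass)."""
        T, K = cost.shape
        best = cost[0].copy()                  # best cost ending in each k
        back = np.zeros((T, K), dtype=int)     # backpointers
        for t in range(1, T):
            total = best[:, None] + trans + cost[t][None, :]
            back[t] = total.argmin(axis=0)
            best = total.min(axis=0)
        path = [int(best.argmin())]            # backtrack from the best end
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    cost = np.array([[0.1, 2.0], [2.0, 0.1], [0.1, 2.0]])
    trans = np.array([[0.0, 1.0], [1.0, 0.0]])
    # the transition penalty makes staying on candidate 0 cheaper than
    # briefly switching to candidate 1 at the middle step
    print(viterbi(cost, trans))  # [0, 0, 0]
    ```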

  19. Automatic Detection and Segmentation of Kidneys in 3D CT Images Using Random Forests

    OpenAIRE

    Cuingnet, Rémi; Prevost, Raphaël; Lesage, David; Cohen, Laurent D.; Mory, Benoît; Ardon, Roberto

    2012-01-01

    Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. For practical use in clinical routine, such an algorithm should be fast, automatic and robust to contrast-agent enhancement and fields of view. By combining and refining state-of-the-art techniques (random forests and template deformation), we demonstrate the possibility of building an algorithm that meets these requirements. Kidneys are localized with random forests following a co...

  20. Automatic system for quantification and visualization of lung aeration on chest computed tomography images: the Lung Image System Analysis - LISA

    Energy Technology Data Exchange (ETDEWEB)

    Felix, John Hebert da Silva; Cortez, Paulo Cesar, E-mail: jhsfelix@gmail.co [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Dept. de Engenharia de Teleinformatica; Holanda, Marcelo Alcantara [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Hospital Universitario Walter Cantidio. Dept. de Medicina Clinica

    2010-12-15

    High Resolution Computed Tomography (HRCT) is the exam of choice for the diagnostic evaluation of lung parenchyma diseases. There is increasing interest in computational systems able to automatically analyze the radiological densities of the lungs in CT images. The main objective of this study is to present a system for the automatic quantification and visualization of lung aeration in HRCT images with different degrees of aeration, called the Lung Image System Analysis (LISA). A secondary objective is to compare LISA with the Osiris system and with a specific lung segmentation algorithm (ALS) with respect to the accuracy of lung segmentation. The LISA system automatically extracts the following image attributes: lung perimeter, cross-sectional area, volume, the histograms of radiological densities, the mean lung density (MLD) in Hounsfield units (HU), the relative area of the lungs occupied by voxels with density values lower than -950 HU (RA950), and the 15th percentile of the lung density histogram (PERC15). Furthermore, LISA has a colored-mask algorithm that applies pseudo-colors to the lung parenchyma according to pre-defined radiological density ranges chosen by the system user. The lung segmentations of 102 images from 8 healthy volunteers and 141 images from 11 patients with Chronic Obstructive Pulmonary Disease (COPD) were compared for accuracy and concordance among the three methods. LISA was more effective at lung segmentation than the other two methods. LISA's color mask tool improves the spatial visualization of the degrees of lung aeration, and the various attributes that can be extracted from the image may help physicians and researchers to better assess lung aeration both quantitatively and qualitatively. LISA may have important clinical and research applications in the assessment of global and regional lung aeration and therefore deserves further development and validation studies. (author)
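    The three density attributes named above (MLD, RA950, PERC15) are simple statistics over the Hounsfield-unit values of the segmented lung voxels. A minimal numpy sketch, with an invented toy histogram rather than real CT data:

    ```python
    import numpy as np

    def lung_aeration_metrics(hu):
        """Density attributes computed from segmented lung voxels.
        hu: 1-D array of Hounsfield-unit values inside the lungs."""
        mld = hu.mean()                       # mean lung density (MLD)
        ra950 = (hu < -950).mean() * 100      # % of voxels below -950 HU (RA950)
        perc15 = np.percentile(hu, 15)        # 15th density percentile (PERC15)
        return mld, ra950, perc15

    # toy lung: mostly normally aerated voxels plus some emphysema-like ones
    hu = np.concatenate([np.full(90, -850.0), np.full(10, -960.0)])
    mld, ra950, perc15 = lung_aeration_metrics(hu)
    print(mld, int(round(ra950)), perc15)  # -861.0 10 -850.0
    ```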