WorldWideScience

Sample records for automated image analysis

  1. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  2. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in high-throughput/high-content (HT/HC) screening. An HT/HC screen often produces amounts of image data too extensive to be analyzed manually, so an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  3. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray studies enable us to obtain hundreds of thousands of gene expression values or genotypes at once, and microarrays are an indispensable technology for genome research. The first step is the analysis of scanned microarray images, the most important procedure for obtaining biologically reliable data. Currently, most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software is becoming important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing of a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than commercial tools.
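    The extended regular-sequence indexing itself is not reproduced in the abstract; as a loose illustration of automatic gridding, the sketch below locates spot rows and columns from intensity projection profiles of a synthetic image (the peak finder and spot layout are hypothetical, not the paper's method):

```python
import numpy as np

def profile_peaks(profile, min_sep=4):
    """Indices of local maxima in a 1-D projection profile, keeping a
    minimum separation between detected grid positions."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if profile[i] >= profile[i - 1] and profile[i] > profile[i + 1]:
            if not peaks or i - peaks[-1] >= min_sep:
                peaks.append(i)
    return peaks

# Synthetic microarray patch: a 3 x 4 grid of single-pixel spots.
img = np.zeros((30, 40))
for r in (5, 15, 25):
    for c in (5, 15, 25, 35):
        img[r, c] = 1.0

# Summing intensities along each axis turns spot rows/columns into peaks,
# which yields the grid without manual block/spot indexing.
row_centres = profile_peaks(img.sum(axis=1))
col_centres = profile_peaks(img.sum(axis=0))
print(row_centres, col_centres)  # → [5, 15, 25] [5, 15, 25, 35]
```

    The same two profiles would drive batch alignment across a set of images sharing one layout.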

  4. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Kanstrup, Anne-Marie Fiehn; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  5. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour, and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  6. Automated morphological analysis approach for classifying colorectal microscopic images

    Science.gov (United States)

    Marghani, Khaled A.; Dlay, Satnam S.; Sharif, Bayan S.; Sims, Andrew J.

    2003-10-01

    Automated medical image diagnosis using quantitative measurements is extremely helpful for cancer prognosis, allowing a high degree of accuracy and thus reliable decisions. In this paper, six morphological features based on texture analysis were studied in order to categorize normal and cancerous colon mucosa. They were derived after a series of pre-processing steps to generate a set of different shape measurements. Based on shape and size, six features known as Euler Number, Equivalent Diameter, Solidity, Extent, Elongation, and Shape Factor AR were extracted. Mathematical morphology is used first to remove background noise from segmented images and then to obtain different morphological measures describing the shape, size, and texture of colon glands. The proposed automated system was tested by classifying 102 microscopic samples of colorectal tissue, consisting of 44 normal colon mucosa and 58 cancerous. The results were first statistically evaluated, using the one-way ANOVA method, in order to examine the significance of each extracted feature. Significant features were then selected in order to classify the dataset into two categories. Finally, using two discrimination methods, a linear method and k-means clustering, important classification factors were estimated. In brief, this study demonstrates that abnormalities in low-power tissue morphology can be distinguished using quantitative image analysis. This investigation shows the potential of an automated vision system in histopathology. Furthermore, it has the advantage of being objective, and more importantly, of being a valuable diagnostic decision support tool.
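    Several of the named features have standard definitions on a binary object; a simplified NumPy stand-in for three of them (Equivalent Diameter, Extent, Elongation), with the function name and test shape being illustrative rather than the paper's implementation:

```python
import numpy as np

def shape_features(mask):
    """Size/shape measurements of a single binary object, in the spirit
    of the features named in the abstract (a simplified subset)."""
    area = mask.sum()
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1            # bounding-box height
    w = xs.max() - xs.min() + 1            # bounding-box width
    return {
        "area": int(area),
        "equivalent_diameter": float(np.sqrt(4.0 * area / np.pi)),
        "extent": float(area / (h * w)),   # fill ratio of the bounding box
        "elongation": float(max(h, w) / min(h, w)),
    }

# A 4 x 10 solid rectangle: extent is exactly 1.0, elongation 2.5.
mask = np.zeros((20, 20), dtype=bool)
mask[3:7, 5:15] = True
feats = shape_features(mask)
print(feats)
```

    Euler number and solidity additionally require hole counting and a convex hull, which dedicated morphology libraries provide.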

  7. Automated monitoring of activated sludge using image analysis

    OpenAIRE

    Motta, Maurício da; M. N. Pons; Roche, N; A.L. Amaral; Ferreira, E. C.; Alves, M.M.; Mota, M.; Vivier, H.

    2000-01-01

    An automated procedure for characterising the morphology of activated sludge by image analysis has been used to monitor, in a systematic manner, the biomass in wastewater treatment plants. Over a period of one year, variations mainly in the fractal dimension of flocs and in the amount of filamentous bacteria could be related to rain events affecting the plant influent flow rate and composition. Grand Nancy Council. Météo-France. Brasil. Ministério da Ciênc...

  8. Automated pollen identification using microscopic imaging and texture analysis.

    Science.gov (United States)

    Marcos, J Víctor; Nava, Rodrigo; Cristóbal, Gabriel; Redondo, Rafael; Escalante-Ramírez, Boris; Bueno, Gloria; Déniz, Óscar; González-Porto, Amelia; Pardo, Cristina; Chung, François; Rodríguez, Tomás

    2015-01-01

    Pollen identification is required in different scenarios such as prevention of allergic reactions, climate analysis or apiculture. However, it is a time-consuming task, since experts are required to recognize each pollen grain through the microscope. In this study, we performed an exhaustive assessment of the utility of texture analysis for automated characterisation of pollen samples. A database composed of 1800 brightfield microscopy images of pollen grains from 15 different taxa was used for this purpose. A pattern recognition-based methodology was adopted to perform pollen classification. Four different methods were evaluated for texture feature extraction from the pollen image: Haralick's gray-level co-occurrence matrices (GLCM), log-Gabor filters (LGF), local binary patterns (LBP) and discrete Tchebichef moments (DTM). Fisher's discriminant analysis and k-nearest neighbour were subsequently applied to perform dimensionality reduction and multivariate classification, respectively. Our results reveal that LGF and DTM, which are based on the spectral properties of the image, outperformed GLCM and LBP in the proposed classification problem. Furthermore, we found that the combination of all the texture features resulted in the highest performance, yielding an accuracy of 95%. Therefore, thorough texture characterisation could be considered in further implementations of automatic pollen recognition systems based on image processing techniques.
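    Of the four texture descriptors, the GLCM is the easiest to sketch; below is a minimal co-occurrence matrix plus the Haralick contrast feature (the pixel offset, level count and test patterns are illustrative choices, not the paper's configuration):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability table."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    """Haralick contrast: sum over i, j of P(i, j) * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

# A uniform patch has zero contrast; a checkerboard has high contrast.
flat = np.zeros((8, 8), dtype=int)
check = np.indices((8, 8)).sum(axis=0) % 2 * 3   # alternating levels 0 and 3
c_flat = contrast(glcm(flat))
c_check = contrast(glcm(check))
print(c_flat, c_check)  # → 0.0 9.0 (every horizontal pair differs by 3)
```

    In practice several offsets and Haralick statistics (energy, homogeneity, correlation) are pooled into one feature vector before classification.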

  9. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits, which have a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper interpretation of the proxy data is essential. Automated imaging provides a unique technique to gather direct information on the granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis with a Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. In this study, size and shape data of several hundred thousand (or even million) individual particles from 15 loess and paleosol samples were automatically recorded from the captured high-resolution images. Several size parameters (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of the optical properties of the material; intensity values depend on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments.
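    Shape parameters like circularity have standard definitions; a sketch of circularity = 4πA/P², with the perimeter taken here as the count of exposed pixel edges (an approximation for illustration; the instrument's own definitions may differ):

```python
import numpy as np

def perimeter(mask):
    """Boundary length as the number of exposed pixel edges (4-connectivity)."""
    p = np.pad(mask, 1).astype(int)
    edges = 0
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # p == 1 where occupied and the rolled neighbour is 0: an exposed edge.
        edges += (p - np.roll(p, shift, axis=(0, 1)) == 1).sum()
    return edges

def circularity(mask):
    """4*pi*area / perimeter^2: 1 for a disc, smaller for irregular shapes."""
    A = mask.sum()
    return 4 * np.pi * A / perimeter(mask) ** 2

square = np.zeros((12, 12), dtype=bool)
square[2:10, 2:10] = True             # an 8 x 8 square particle
print(round(circularity(square), 3))  # → 0.785 (pi/4 for a square)
```

    Elongation and convexity follow similarly from the bounding geometry and the convex hull of each particle.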

  10. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  11. Fuzzy emotional semantic analysis and automated annotation of scene images.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  12. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi), i = 1, 2, etc., in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral
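    Computationally, the modal analyses described above reduce to point counting: the volume percentage of each phase is the fraction of classified pixels (or points) carrying that phase label. A toy sketch, with hypothetical labels and proportions standing in for a real classified image:

```python
import numpy as np

# Each pixel of a classified image carries a phase label; volume
# percentages follow from label frequencies. Labels here are hypothetical
# stand-ins for vitrite (V), liptite (E), inertite (I), mineral matter (M).
phases = ["V", "E", "I", "M"]
rng = np.random.default_rng(0)
classified = rng.choice(len(phases), size=(100, 100),
                        p=[0.7, 0.1, 0.15, 0.05])

counts = np.bincount(classified.ravel(), minlength=len(phases))
modal = {ph: 100.0 * n / classified.size for ph, n in zip(phases, counts)}
print(modal)  # percentages close to the simulated proportions
```

    The same tally, run per demarcated band and then weighted by band thickness, would yield the combined megascopic/microscopic description.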

  13. An investigation of image compression on NIIRS rating degradation through automated image analysis

    Science.gov (United States)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract a line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. The steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD, and we investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. Extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
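    The GIQEs referenced here have a published closed form; as one concrete instance, a sketch of the GIQE 4.0 prediction (coefficients as commonly published for GIQE 4.0; the example GSD/RER/overshoot/SNR numbers are purely illustrative):

```python
import math

def giqe4_niirs(gsd_in, rer, h=1.0, g=1.0, snr=50.0):
    """Predicted NIIRS from the General Image Quality Equation (GIQE 4.0).
    gsd_in: ground sampling distance in inches; rer: relative edge response;
    h: edge overshoot; g: noise gain; snr: signal-to-noise ratio."""
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251 - a * math.log10(gsd_in)
            + b * math.log10(rer) - 0.656 * h - 0.344 * (g / snr))

# Compression mainly degrades the relative edge response; under this model
# an RER drop from 0.9 to 0.5 at fixed GSD visibly lowers the rating.
n_sharp = giqe4_niirs(12.0, 0.9)   # well-preserved edges
n_soft = giqe4_niirs(12.0, 0.5)    # compression-softened edges
print(round(n_sharp, 2), round(n_soft, 2))
```

    The paper's pipeline supplies the RER term by measuring the extracted line edge profile directly from the image.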

  14. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which makes it possible to identify and quantify the accumulation reaction without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  15. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task as well as being sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation, using image processing and analysis algorithms. The performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
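    The paper's algorithm is not given in the abstract; one simple isolation-free width estimate uses the ribbon approximation width ≈ 2·area/boundary-length, which needs no per-fiber segmentation (the function and the synthetic crossing fibers are illustrative, not the authors' method):

```python
import numpy as np

def fiber_width_estimate(mask):
    """Isolation-free mean width estimate for elongated structures: for a
    long thin ribbon, area ~ length * width and boundary ~ 2 * length,
    so width ~ 2 * area / boundary. A crude stand-in for the real method."""
    p = np.pad(mask, 1).astype(int)
    boundary = 0
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # count occupied pixels whose rolled neighbour is empty
        boundary += (p - np.roll(p, shift, axis=(0, 1)) == 1).sum()
    return 2.0 * mask.sum() / boundary

# Two crossing "fibers" of true width 4 px; no isolation step is needed.
mask = np.zeros((100, 100), dtype=bool)
mask[48:52, :] = True     # horizontal fiber
mask[:, 48:52] = True     # vertical fiber
w = fiber_width_estimate(mask)
print(round(w, 2))        # close to the true 4 px
```

    Distance-transform-based estimates along a skeleton are a more faithful isolation-free alternative when a morphology library is available.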

  16. Automated image analysis for space debris identification and astrometric measurements

    Science.gov (United States)

    Piattoni, Jacopo; Ceruti, Alessandro; Piergentili, Fabrizio

    2014-10-01

    Space debris is a challenging problem for human activity in space. Observation campaigns are conducted around the globe to detect and track uncontrolled space objects. One of the main problems in optical observation is obtaining useful information about the dynamical state of debris from the images collected. For orbit determination, the most relevant information embedded in an optical observation is the precise angular position, which can be evaluated by astrometry procedures that compare the stars inside the image with star catalogs. This is typically a time-consuming process if done by a human operator, which makes the task impractical when dealing with large amounts of data, on the order of thousands of images per night, generated by routinely conducted observations. This paper investigates an automated procedure that is capable of recognizing the debris track inside a picture, calculating the celestial coordinates of the image's center, and using this information to compute the debris angular position in the sky. The procedure has been implemented in software that does not require human interaction and works without any supplemental information besides the image itself, detecting space objects and solving for their angular position without a priori information. The algorithm for object detection was developed inside the research team. For the star field computation, the software astrometry.net, released under the GPL v2 license, was used. The complete procedure was validated by extensive testing, using the images obtained in an observation campaign performed as a joint project between the Italian Space Agency (ASI) and the University of Bologna at the Broglio Space Center, Kenya.
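    The team's track-recognition algorithm is not described in the abstract; a standard way to find a line-like streak among point-like stars is a Hough transform, sketched minimally below (the synthetic frame and all names are hypothetical):

```python
import numpy as np

def hough_peak(points, shape, n_theta=180):
    """Return (rho, theta) of the strongest line through a set of bright
    pixels: a minimal Hough transform, one way to find a debris streak."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.hypot(*shape))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in points:
        # each bright pixel votes for every line passing through it
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]

# Synthetic frame: a horizontal streak at row 20 plus isolated "stars".
img = np.zeros((64, 64))
img[20, 10:50] = 1.0
img[5, 5] = img[40, 33] = img[55, 60] = 1.0
points = np.argwhere(img > 0.5)
rho, theta = hough_peak(points, img.shape)
print(rho, np.degrees(theta))  # the line y = 20: rho 20, theta 90 degrees
```

    The isolated star pixels vote incoherently, so the streak dominates the accumulator; the stars themselves then feed the astrometry.net plate solve.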

  17. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  18. A performance analysis system for MEMS using automated imaging methods

    Energy Technology Data Exchange (ETDEWEB)

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  19. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah, Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented regions. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that segmentation with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of abnormal tissue can be determined.
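    A fractal method based on 2D Fourier analysis typically rests on the power-law decay of the radially averaged power spectrum; the sketch below estimates the spectral slope beta from a synthetic field with a known spectrum (the D = (8 − beta)/2 mapping is one common 2-D convention, and the whole example is illustrative, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_slope(img):
    """Slope beta of the radially averaged power spectrum P(f) ~ f^-beta;
    beta maps to a fractal dimension, e.g. D = (8 - beta) / 2 in one
    common 2-D convention."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # radially averaged power per integer frequency bin
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, h // 2)
    slope, _ = np.polyfit(np.log(f), np.log(radial[1:h // 2]), 1)
    return -slope

# Synthesize a random field with a known power-law spectrum (beta = 3)
# and check that the estimator approximately recovers it.
h = w = 128
y, x = np.indices((h, w))
r = np.hypot(y - h // 2, x - w // 2)
r[h // 2, w // 2] = 1.0                    # avoid division by zero at DC
amp = r ** (-3 / 2.0)                      # power ~ r^-3
phase = np.exp(2j * np.pi * rng.random((h, w)))
img = np.real(np.fft.ifft2(np.fft.ifftshift(amp * phase)))
beta = spectral_slope(img)
print(round(beta, 2))                      # close to 3
```

    Applied to mammogram patches, a steeper or shallower slope would distinguish denser from less dense tissue under this model.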

  20. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR).

  1. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    Science.gov (United States)

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes.

  2. Automated analysis of craniofacial morphology using magnetic resonance images.

    Directory of Open Access Journals (Sweden)

    M Mallar Chakravarty

    Quantitative analysis of craniofacial morphology is of interest to scholars working in a wide variety of disciplines, such as anthropology, developmental biology, and medicine. T1-weighted (anatomical) magnetic resonance images (MRI) provide excellent contrast between soft tissues. Given its three-dimensional nature, MRI represents an ideal imaging modality for the analysis of craniofacial structure in living individuals. Here we describe how T1-weighted MR images, acquired to examine brain anatomy, can also be used to analyze facial features. Using a sample of typically developing adolescents from the Saguenay Youth Study (N = 597; 292 male, 305 female, ages 12 to 18 years), we quantified inter-individual variations in craniofacial structure in two ways. First, we adapted existing nonlinear registration-based morphological techniques to iteratively generate a group-wise population average of craniofacial features. The nonlinear transformations were used to map the craniofacial structure of each individual to the population average. Using voxel-wise measures of expansion and contraction, we then examined the effects of sex and age on inter-individual variations in facial features. Second, we employed a landmark-based approach to quantify variations in face surfaces. This approach involves: (a) placing 56 landmarks (forehead, nose, lips, jaw-line, cheekbones, and eyes) on a surface representation of the MRI-based group average; (b) warping the landmarks to the individual faces using the inverse nonlinear transformation estimated for each person; and (c) using a principal components analysis (PCA) of the warped landmarks to identify facial features (i.e. clusters of landmarks) that vary in our sample in a correlated fashion. As with the voxel-wise analysis of the deformation fields, we examined the effects of sex and age on the PCA-derived spatial relationships between facial features. Both methods demonstrated significant sexual dimorphism in
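    The final landmark step is a PCA over warped landmark coordinate vectors; a hypothetical stand-in using an SVD on synthetic landmark data (the dimensions follow the abstract's 56 landmarks, but the data and the single planted mode of variation are made up):

```python
import numpy as np

rng = np.random.default_rng(7)

# Each subject is a flattened vector of (x, y, z) landmark coordinates;
# principal components describe correlated patterns of facial variation.
n_subjects, n_landmarks = 100, 56
base = rng.normal(size=n_landmarks * 3)           # mean face (synthetic)
mode = rng.normal(size=n_landmarks * 3)           # one planted "feature"
scores = rng.normal(size=(n_subjects, 1))         # per-subject expression
X = base + scores * mode + 0.05 * rng.normal(size=(n_subjects, n_landmarks * 3))

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / (S ** 2).sum()
print(round(float(explained[0]), 2))  # first PC dominates by construction
```

    In the study itself, component scores rather than synthetic ones would then be regressed against sex and age.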

  3. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth...... displaying different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify the length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam-beta-lactamase inhibitor combinations were analyzed...

  4. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of a scanned microscopic image, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge amounts of image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. From the four scanned specimen slides, CAD generated 1676 confocal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancers.

  5. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the way computer and digital camera perform, the potential use of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However its limited use is due to difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms. It can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia to show that this non intrusive method is efficient at detecting organisms under a wide variety of conditions even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method has been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.
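The background-comparison idea in step 3 can be sketched as follows: take the pixel-wise median across frames (moving organisms are rare at any one pixel, so the median recovers the fixed background), then difference and count. This is an illustrative NumPy reimplementation, not the authors' ImageJ plugin; the threshold value is an assumption.

```python
import numpy as np

def estimate_background(frames):
    """Pixel-wise median across frames recovers the fixed substrate."""
    return np.median(np.stack(frames), axis=0)

def moving_mask(frame, background, thresh=30):
    """Pixels that differ strongly from the background are candidate
    moving organisms; motionless or dead individuals vanish with it."""
    return np.abs(frame.astype(float) - background) > thresh

def count_objects(mask):
    """Crude 4-connected component count via iterative flood fill."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count
```

Note how a bright but immobile speck ends up in the background estimate and is correctly ignored.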

  6. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
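The head segmentation "through image intensity profile analysis" can be caricatured on a 1-D profile (image columns summed along the comet axis). This is an illustrative sketch, not OpenComet's algorithm; the 50% cutoff and the tail metric shown are assumed conventions.

```python
def split_head_tail(profile, frac=0.5):
    """Walk right from the intensity peak while the profile stays above
    frac * peak; the head ends there. frac is an illustrative choice."""
    peak = max(profile)
    end = profile.index(peak)
    while end + 1 < len(profile) and profile[end + 1] >= frac * peak:
        end += 1
    return end + 1  # first index belonging to the tail

def percent_dna_in_tail(profile, head_end):
    """Standard comet metric: tail intensity as a percentage of total."""
    total = float(sum(profile))
    return 100.0 * sum(profile[head_end:]) / total
```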

  7. OpenComet: an automated tool for comet assay image analysis.

    Science.gov (United States)

    Gyori, Benjamin M; Venkatachalam, Gireedhar; Thiagarajan, P S; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  8. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and nucleus, which provide useful markers for morphometric analysis; however they need to be accurately identified and often lie in close proximity. This presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor-groove-binding (4′,6-diamidino-2-phenylindole; DAPI) and a base-pair-intercalating (propidium iodide, PI, or SYBR Green) fluorescent stain and color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of measuring kinetoplast and nucleus DNA content, size and position and cell body shape, length and width automatically. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerated analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei
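The A-T versus G-C discrimination that follows color deconvolution reduces, in caricature, to an intensity-ratio rule per organelle. The function name and the cutoff of 1.0 below are illustrative assumptions, not the published method, which operates on deconvolved channel images.

```python
def classify_organelle(dapi, pi, ratio_cutoff=1.0):
    """Classify a DNA-containing organelle from its mean stain
    intensities. The minor-groove binder DAPI favours A-T rich DNA, so
    the A-T rich kinetoplast shows a high DAPI/PI ratio relative to the
    nucleus. The cutoff would be calibrated per experiment in practice."""
    return 'kinetoplast' if dapi / pi > ratio_cutoff else 'nucleus'
```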

  9. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    OpenAIRE

    Kurek,Marcin Andrzej; Piwińska, Monika; Wyrwisz, Jarosław; Wierzbicka, Agnieszka

    2015-01-01

    Abstract The growing interest in the usage of dietary fiber in food has caused the need to provide precise tools for describing its physical properties. This research examined two dietary fibers, from oats and beets, at a range of particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were...

  10. Scanner-based image quality measurement system for automated analysis of EP output

    Science.gov (United States)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data are desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was given to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. These data are saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved less simple, since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  11. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process, is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  12. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results that demonstrate the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
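The screenshot-hash comparison can be sketched with one common perceptual hash ("average hash"): block-average the grayscale screenshot to 8×8, threshold at the mean, and compare fingerprints by Hamming distance. The paper does not specify this exact hash, so treat the sketch as an assumed stand-in.

```python
import numpy as np

def average_hash(img, size=8):
    """Block-average a grayscale screenshot down to size x size, then
    threshold at the mean: a 64-bit fingerprint of overall page layout."""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size].astype(float)
    small = img.reshape(size, img.shape[0] // size,
                        size, img.shape[1] // size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a, b):
    """Differing bits between two hashes; small distances mean the two
    rendered pages look alike (e.g. a spoof of a legitimate site)."""
    return int(np.count_nonzero(a != b))
```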

  13. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Science.gov (United States)

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory-controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects ( 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After an object was classified, an expert or non-expert manually removed the non-target objects that could not be removed
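Of the two segmentation strategies mentioned, the adaptive threshold for small organisms can be sketched as a generic local-mean threshold, which tolerates the nonlinear illumination that defeats a single global threshold. This is pure Python, brute force for clarity; the block size, offset, and replicated-border handling are illustrative choices, not ZOOVIS parameters.

```python
def adaptive_threshold(img, block=5, offset=2.0):
    """Compare each pixel to the mean of its surrounding block x block
    neighbourhood (borders replicated by clamping). A pixel is foreground
    only if it is locally bright, so a smooth illumination gradient does
    not get segmented. Border replication can bias the outermost pixels."""
    h, w = len(img), len(img[0])
    pad = block // 2
    out = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = []
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    y = min(max(i + di, 0), h - 1)
                    x = min(max(j + dj, 0), w - 1)
                    vals.append(img[y][x])
            out[i][j] = img[i][j] > sum(vals) / len(vals) + offset
    return out
```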

  14. Single-cell bacteria growth monitoring by automated DEP-facilitated image analysis.

    Science.gov (United States)

    Peitz, Ingmar; van Leeuwen, Rien

    2010-11-07

    Growth monitoring is the method of choice in many assays measuring the presence or properties of pathogens, e.g. in diagnostics and food quality. Established methods, relying on culturing large numbers of bacteria, are rather time-consuming, while in healthcare, time is often crucial. Several new approaches have been published, mostly aiming at assaying growth or other properties of a small number of bacteria. However, no method so far readily achieves single-cell resolution with a convenient and easy-to-handle setup that offers the possibility for automation and high throughput. We demonstrate these benefits in this study by employing dielectrophoretic capturing of bacteria in microfluidic electrode structures, optical detection and automated bacteria identification and counting with image analysis algorithms. For a proof-of-principle experiment we chose an antibiotic susceptibility test with Escherichia coli and polymyxin B. Growth monitoring is demonstrated on single cells and the impact of the antibiotic on the growth rate is shown. The minimum inhibitory concentration, a standard diagnostic parameter, is derived from a dose-response plot. This report is the basis for further integration of image analysis code into device control. Ultimately, an automated and parallelized setup may be created, using an optical microscanner and many of the electrode structures simultaneously. Sufficient data for a sound statistical evaluation and a confirmation of the initial findings can then be generated in a single experiment.
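Reading the minimum inhibitory concentration (MIC) off a dose-response plot amounts to finding the lowest tested concentration at which the measured growth rate drops to zero or below. A minimal sketch; the function name and the zero-growth threshold are illustrative assumptions:

```python
def minimum_inhibitory_concentration(dose_response, threshold=0.0):
    """dose_response: iterable of (concentration, growth_rate) pairs.
    Returns the lowest tested concentration whose growth rate is at or
    below the threshold (no net growth), or None if growth is never
    inhibited at the tested concentrations."""
    for conc, rate in sorted(dose_response):
        if rate <= threshold:
            return conc
    return None
```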

  15. Difference Tracker: ImageJ plugins for fully automated analysis of multiple axonal transport parameters.

    Science.gov (United States)

    Andrews, Simon; Gilley, Jonathan; Coleman, Michael P

    2010-11-30

    Studies of axonal transport are critical, not only to understand its normal regulation, but also to determine the roles of transport impairment in disease. Exciting new resources have recently become available allowing live imaging of axonal transport in physiologically relevant settings, such as mammalian nerves. Thus the effects of disease, ageing and therapies can now be assessed directly in nervous system tissue. However, these imaging studies present new challenges. Manual or semi-automated analysis of the range of transport parameters required for a suitably complete evaluation is very time-consuming and can be subjective due to the complexity of the particle movements in axons in ex vivo explants or in vivo. We have developed Difference Tracker, a program combining two new plugins for the ImageJ image-analysis freeware, to provide fast, fully automated and objective analysis of a number of relevant measures of trafficking of fluorescently labeled particles so that axonal transport in different situations can be easily compared. We confirm that Difference Tracker can accurately track moving particles in highly simplified, artificial simulations. It can also identify and track multiple motile fluorescently labeled mitochondria simultaneously in time-lapse image stacks from live imaging of tibial nerve axons, reporting values for a number of parameters that are comparable to those obtained through manual analysis of the same axons. Difference Tracker therefore represents a useful free resource for the comparative analysis of axonal transport under different conditions, and could potentially be used and developed further in many other studies requiring quantification of particle movements.
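A core step in any such particle tracker is linking detections between consecutive frames. A minimal greedy nearest-neighbour sketch (not Difference Tracker's actual algorithm; `max_dist` is an assumed gating parameter limiting how far a mitochondrion may plausibly move between frames):

```python
def link_particles(prev, curr, max_dist=10.0):
    """Greedily link each centroid in frame t to its nearest unused
    centroid in frame t+1, within max_dist. prev/curr: lists of (x, y).
    Returns (prev_index, curr_index) pairs; unmatched particles are
    treated as having appeared or disappeared."""
    links, used = [], set()
    for i, (x0, y0) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (x1, y1) in enumerate(curr):
            if j in used:
                continue
            d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
            used.add(best)
    return links
```

Chaining such links across a time-lapse stack yields per-particle trajectories from which speeds and run lengths follow.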

  16. Long-term live cell imaging and automated 4D analysis of drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long-term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts, where clonal analysis has indicated the presence of a transit-amplifying population that increases the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  17. Whole-slide imaging and automated image analysis: considerations and opportunities in the practice of pathology.

    Science.gov (United States)

    Webster, J D; Dunstan, R W

    2014-01-01

    Digital pathology, the practice of pathology using digitized images of pathologic specimens, has been transformed in recent years by the development of whole-slide imaging systems, which allow for the evaluation and interpretation of digital images of entire histologic sections. Applications of whole-slide imaging include rapid transmission of pathologic data for consultations and collaborations, standardization and distribution of pathologic materials for education, tissue specimen archiving, and image analysis of histologic specimens. Histologic image analysis allows for the acquisition of objective measurements of histomorphologic, histochemical, and immunohistochemical properties of tissue sections, increasing both the quantity and quality of data obtained from histologic assessments. Currently, numerous histologic image analysis software solutions are commercially available. Choosing the appropriate solution depends on considerations of the investigative question, computer programming and image analysis expertise, and cost. However, all studies using histologic image analysis require careful consideration of preanalytical variables, such as tissue collection, fixation, and processing, and of experimental design, including sample selection, controls, reference standards, and the variables being measured. The fields of digital pathology and histologic image analysis are continuing to evolve, and their potential impact on pathology is still growing. These methodologies will increasingly transform the practice of pathology, allowing it to mature toward a quantitative science. However, this maturation requires pathologists to be at the forefront of the process, ensuring their appropriate application and the validity of their results. Therefore, histologic image analysis and the field of pathology should co-evolve, creating a symbiotic relationship that results in high-quality, reproducible, objective data.

  18. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume......, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  19. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Directory of Open Access Journals (Sweden)

    Hongsheng Bi

    Full Text Available Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects ( 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After an object was classified, an expert or non-expert manually removed the non-target objects that

  20. Automated image analysis of the host-pathogen interaction between phagocytes and Aspergillus fumigatus.

    Directory of Open Access Journals (Sweden)

    Franziska Mech

    Full Text Available Aspergillus fumigatus is a ubiquitous airborne fungus and opportunistic human pathogen. In immunocompromised hosts, the fungus can cause life-threatening diseases like invasive pulmonary aspergillosis. Since the incidence of systemic fungal infections has drastically increased over the last years, it is a major goal to investigate the pathobiology of A. fumigatus and in particular the interactions of A. fumigatus conidia with immune cells. Many of these studies examine the activity of immune effector cells, in particular of macrophages, when they are confronted with conidia of A. fumigatus wild-type and mutant strains. Here, we report the development of an automated analysis of confocal laser scanning microscopy images from macrophages coincubated with different A. fumigatus strains. At present, microscopy images are often analysed manually, including cell counting and determination of interrelations between cells, which is very time consuming and error-prone. Automation of this process overcomes these disadvantages and standardises the analysis, which is a prerequisite for further systems-biological studies including mathematical modeling of the infection process. For this purpose, the cells in our experimental setup were differentially stained and monitored by confocal laser scanning microscopy. To perform the image analysis in an automatic fashion, we developed a ruleset that is generally applicable to phagocytosis assays and in the present case was processed by the software Definiens Developer XD. As a result of a complete image analysis we obtained features such as size, shape, number of cells and cell-cell contacts. The analysis reported here reveals that different mutants of A. fumigatus have a major influence on the ability of macrophages to adhere to and to phagocytose the respective conidia. In particular, we observe that the phagocytosis ratio and the aggregation behaviour of pksP mutant compared to wild-type conidia are both significantly

  1. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    Amount of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in dynamic contrast enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, a dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantage of the continuity of the chest wall and breast skin line across adjacent slices. We then used a fuzzy c-means clustering method with automatic selection of the cluster number for segmenting the fibroglandular tissues within the segmented whole breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
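The final step, a noise-based threshold on the pre/post subtraction image, can be sketched as follows. The paper's exact statistical estimator is not given here, so this stand-in estimates the noise floor from a caller-supplied non-enhancing region; the function name, `background_roi` parameter, and k = 3 are illustrative assumptions.

```python
import numpy as np

def enhancement_mask(pre, post, background_roi, k=3.0):
    """Flag enhanced voxels in the post-minus-pre subtraction image.
    background_roi indexes a region assumed non-enhancing (e.g. air or
    fat); its mean + k standard deviations sets the noise threshold."""
    diff = post.astype(float) - pre.astype(float)
    noise = diff[background_roi]
    return diff > noise.mean() + k * noise.std()
```

BPE would then follow as the fraction of fibroglandular voxels inside the mask.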

  2. SparkMaster: automated calcium spark analysis with ImageJ.

    Science.gov (United States)

    Picht, Eckard; Zima, Aleksey V; Blatter, Lothar A; Bers, Donald M

    2007-09-01

    Ca sparks are elementary Ca-release events from intracellular Ca stores that are observed in virtually all types of muscle. Typically, Ca sparks are measured in the line-scan mode with confocal laser-scanning microscopes, yielding two-dimensional images (distance vs. time). The manual analysis of these images is time-consuming and prone to errors as well as investigator bias. Therefore, we developed SparkMaster, an automated analysis program that allows rapid and reliable spark analysis. The underlying analysis algorithm is adapted from the threshold-based standard method of spark analysis developed by Cheng et al. (Biophys J 76: 606-617, 1999) and is implemented here in the freely available image-processing software ImageJ. SparkMaster offers a graphical user interface through which all analysis parameters and output options are selected. The analysis includes general image parameters (number of detected sparks, spark frequency) and individual spark parameters (amplitude, full width at half-maximum amplitude, full duration at half-maximum amplitude, full width, full duration, time to peak, maximum steepness of spark upstroke, time constant of spark decay). We validated the algorithm using images with synthetic sparks embedded into backgrounds with different signal-to-noise ratios to determine analysis criteria at which high sensitivity is combined with a low frequency of false-positive detections. Finally, we applied SparkMaster to analyze experimental data of sparks measured in intact and permeabilized ventricular cardiomyocytes, permeabilized mammalian skeletal muscle, and intact smooth muscle cells. We found that SparkMaster provides a reliable, easy-to-use, and fast way of analyzing Ca sparks in a wide variety of experimental conditions.
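    The threshold-based detection criterion underlying this approach could be sketched as below: flag pixels exceeding the image mean by a chosen number of standard deviations. This is a minimal illustration of the principle, not SparkMaster's actual code; the criterion value and toy image are assumptions.

    ```python
    import statistics

    def detect_spark_pixels(linescan, cri=3.8):
        """Flag line-scan pixels brighter than mean + cri * SD, the
        threshold-based criterion of Cheng et al. In the real program
        the detection criterion 'cri' is user-selectable."""
        flat = [p for row in linescan for p in row]
        mu = statistics.mean(flat)
        sd = statistics.pstdev(flat)
        thr = mu + cri * sd
        return [[p > thr for p in row] for row in linescan]

    # A toy 3x5 line-scan (space x time) with one bright spark pixel
    image = [
        [10, 11, 10, 12, 10],
        [11, 10, 60, 11, 10],
        [10, 12, 10, 11, 10],
    ]
    mask = detect_spark_pixels(image, cri=2.0)
    ```

    Real implementations additionally normalise to baseline fluorescence (F/F0) and group contiguous above-threshold pixels into spark candidates before measuring amplitude and kinetics.
    
    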

  3. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    Directory of Open Access Journals (Sweden)

    Tözeren Aydin

    2007-03-01

    Full Text Available Abstract Background Recent research with tissue microarrays led to rapid progress toward quantifying the expressions of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. Methods This study presents a high throughput analysis of texture heterogeneity on breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. Image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B, percentage occupied by stroma-like regions (P, and a statistical heterogeneity index H commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset using both gray scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm. These categories are as follows: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and image blocks that are non-specific to normal and disease states. Results Gray scale and color segmentation techniques identified the same regions in histology slides as cancer-specific. Moreover, the image blocks identified as cancer-specific belonged to those cell-crowded regions in whole section image slides that were marked by two pathologists as regions of interest for further histological studies. Conclusion These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists as hundreds of tumors that are used to develop an array have typically been evaluated
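    The three per-block texture parameters could be computed as sketched below on a pre-segmented label image. This is an illustration only: the label encoding, the use of Shannon entropy for the heterogeneity index H, and the toy block are assumptions, not details from the paper.

    ```python
    import math

    def block_texture(labels):
        """Compute (B, P, H) for one image block given a label image:
        0 = background, 1 = chromatin, 2 = stroma-like (hypothetical
        encoding). H is taken here as Shannon entropy of the label
        fractions, a common heterogeneity index."""
        flat = [v for row in labels for v in row]
        n = len(flat)
        b = flat.count(1) / n          # fraction chromatin (B)
        p = flat.count(2) / n          # fraction stroma-like (P)
        h = 0.0
        for lab in set(flat):
            f = flat.count(lab) / n
            h -= f * math.log2(f)      # entropy over label fractions
        return b, p, h

    block = [[1, 1, 2, 0],
             [1, 2, 2, 0],
             [1, 1, 0, 0]]
    B, P, H = block_texture(block)
    ```

    Blocks with high B and high H would tend toward the cell-crowded, cancer-specific category described in the Results.
    
    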

  4. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the ''Autojuggie'' showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  5. Automated multidimensional image analysis reveals a role for Abl in embryonic wound repair.

    Science.gov (United States)

    Zulueta-Coarasa, Teresa; Tamada, Masako; Lee, Eun J; Fernandez-Gonzalez, Rodrigo

    2014-07-01

    The embryonic epidermis displays a remarkable ability to repair wounds rapidly. Embryonic wound repair is driven by the evolutionarily conserved redistribution of cytoskeletal and junctional proteins around the wound. Drosophila has emerged as a model to screen for factors implicated in wound closure. However, genetic screens have been limited by the use of manual analysis methods. We introduce MEDUSA, a novel image-analysis tool for the automated quantification of multicellular and molecular dynamics from time-lapse confocal microscopy data. We validate MEDUSA by quantifying wound closure in Drosophila embryos, and we show that the results of our automated analysis are comparable to analysis by manual delineation and tracking of the wounds, while significantly reducing the processing time. We demonstrate that MEDUSA can also be applied to the investigation of cellular behaviors in three and four dimensions. Using MEDUSA, we find that the conserved nonreceptor tyrosine kinase Abelson (Abl) contributes to rapid embryonic wound closure. We demonstrate that Abl plays a role in the organization of filamentous actin and the redistribution of the junctional protein β-catenin at the wound margin during embryonic wound repair. Finally, we discuss different models for the role of Abl in the regulation of actin architecture and adhesion dynamics at the wound margin.

  6. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom-designed tool, cavities were generated in agarose-coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line, in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration-dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino-based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  7. Automated analysis of heterogeneous carbon nanostructures by high-resolution electron microscopy and on-line image processing

    Energy Technology Data Exchange (ETDEWEB)

    Toth, P., E-mail: toth.pal@uni-miskolc.hu [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States); Farrer, J.K. [Department of Physics and Astronomy, Brigham Young University, N283 ESC, Provo, UT 84602 (United States); Palotas, A.B. [Department of Combustion Technology and Thermal Energy, University of Miskolc, H3515, Miskolc-Egyetemvaros (Hungary); Lighty, J.S.; Eddings, E.G. [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States)

    2013-06-15

    High-resolution electron microscopy is an efficient tool for characterizing heterogeneous nanostructures; however, currently the analysis is a laborious and time-consuming manual process. In order to be able to accurately and robustly quantify heterostructures, one must obtain a statistically high number of micrographs showing images of the appropriate sub-structures. The second step of analysis is usually the application of digital image processing techniques in order to extract meaningful structural descriptors from the acquired images. In this paper it will be shown that by applying on-line image processing and basic machine vision algorithms, it is possible to fully automate the image acquisition step; therefore, the number of acquired images in a given time can be increased drastically without the need for additional human labor. The proposed automation technique works by computing fields of structural descriptors in situ and thus outputs sets of the desired structural descriptors in real-time. The merits of the method are demonstrated by using combustion-generated black carbon samples. - Highlights: ► The HRTEM analysis of heterogeneous nanostructures is a tedious manual process. ► Automatic HRTEM image acquisition and analysis can improve data quantity and quality. ► We propose a method based on on-line image analysis for the automation of HRTEM image acquisition. ► The proposed method is demonstrated using HRTEM images of soot particles.

  8. Automated Astrometric Analysis of Satellite Observations using Wide-field Imaging

    Science.gov (United States)

    Skuljan, J.; Kay, J.

    2016-09-01

    An observational trial was conducted in the South Island of New Zealand from 24 to 28 February 2015, as a collaborative effort between the United Kingdom and New Zealand in the area of space situational awareness. The aim of the trial was to observe a number of satellites in low Earth orbit using wide-field imaging from two separate locations, in order to determine the space trajectory and compare the measurements with the predictions based on the standard two-line elements. This activity was an initial step in building a space situational awareness capability at the Defence Technology Agency of the New Zealand Defence Force. New Zealand has an important strategic position as the last land mass that many satellites selected for deorbiting pass before entering the Earth's atmosphere over the dedicated disposal area in the South Pacific. A preliminary analysis of the trial data has demonstrated that relatively inexpensive equipment can be used to successfully detect satellites at moderate altitudes. A total of 60 satellite passes were observed over the five nights of observation and about 2600 images were collected. A combination of cooled CCD and standard DSLR cameras was used, with a selection of lenses between 17 mm and 50 mm in focal length, covering a relatively wide field of view of 25 to 60 degrees. The CCD cameras were equipped with custom-made GPS modules to record the time of exposure with a high accuracy of one millisecond, or better. Specialised software has been developed for automated astrometric analysis of the trial data. The astrometric solution is obtained as a two-dimensional least-squares polynomial fit to the measured pixel positions of a large number of stars (typically 1000) detected across the image. The star identification is fully automated and works well for all camera-lens combinations used in the trial. A moderate polynomial degree of 3 to 5 is selected to take into account any image distortions introduced by the lens. A typical RMS
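    The core of such an astrometric solution is an ordinary least-squares fit from pixel coordinates to sky coordinates. The sketch below reduces this to a one-dimensional, degree-1 fit with closed-form normal equations; the paper's actual solution is a two-dimensional polynomial of degree 3 to 5, and the star positions shown are invented.

    ```python
    def fit_linear_plate(pixels, coords):
        """Least-squares fit coord = a + b * pixel: a 1-D, degree-1
        simplification of a 2-D polynomial astrometric plate solution."""
        n = len(pixels)
        sx = sum(pixels)
        sy = sum(coords)
        sxx = sum(x * x for x in pixels)
        sxy = sum(x * y for x, y in zip(pixels, coords))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        return a, b

    # Hypothetical star centroids (pixels) vs catalogue positions (degrees)
    px = [100.0, 500.0, 900.0]
    ra = [10.10, 10.50, 10.90]   # lies exactly on 10.0 + 0.001 * px
    a, b = fit_linear_plate(px, ra)
    ```

    With ~1000 reference stars per frame, the same normal-equation machinery extends to higher-degree 2-D polynomials, whose extra terms absorb lens distortion.
    
    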

  9. Automated Brain Image classification using Neural Network Approach and Abnormality Analysis

    Directory of Open Access Journals (Sweden)

    P.Muthu Krishnammal

    2015-06-01

    Full Text Available Image segmentation of surgical images plays an important role in diagnosis and analysis the anatomical structure of human body. Magnetic Resonance Imaging (MRI helps in obtaining a structural image of internal parts of the body. This paper aims at developing an automatic support system for stage classification using learning machine and to detect brain Tumor by fuzzy clustering methods to detect the brain Tumor in its early stages and to analyze anatomical structures. The three stages involved are: feature extraction using GLCM and the tumor classification using PNN-RBF network and segmentation using SFCM. Here fast discrete curvelet transformation is used to analyze texture of an image which be used as a base for a Computer Aided Diagnosis (CAD system .The Probabilistic Neural Network with radial basis function is employed to implement an automated Brain Tumor classification. It classifies the stage of Brain Tumor that is benign, malignant or normal automatically. Then the segmentation of the brain abnormality using Spatial FCM and the severity of the tumor is analysed using the number of tumor cells in the detected abnormal region.The proposed method reports promising results in terms of training performance and classification accuracies.

  10. Automated Analysis of {sup 123}I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-03-15

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-{sup 123}I-iodophenyl)tropane ({sup 123}I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional {sup 123}I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.
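    The binding-potential calculation from the ratio of specific to nonspecific binding at equilibrium amounts to a one-line formula. The count values below are invented for illustration; only the formula itself follows the abstract.

    ```python
    def binding_potential(striatal, reference):
        """Binding potential from specific vs. nonspecific binding at
        equilibrium: BP = (striatal - reference) / reference, where
        'reference' is a nonspecific (e.g. occipital) region."""
        return (striatal - reference) / reference

    # Hypothetical probability-weighted regional counts
    bp_control = binding_potential(striatal=12.0, reference=2.0)
    bp_patient = binding_potential(striatal=6.0, reference=2.0)
    ```

    A lower BP in the patient reflects reduced specific dopamine-transporter binding in the striatum, the group difference the study reports.
    
    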

  11. Semi-automated porosity identification from thin section images using image analysis and intelligent discriminant classifiers

    Science.gov (United States)

    Ghiasi-Freez, Javad; Soleimanpour, Iman; Kadkhodaie-Ilkhchi, Ali; Ziaii, Mansur; Sedighi, Mahdi; Hatampour, Amir

    2012-08-01

    Identification of different types of porosity within a reservoir rock is a functional parameter for reservoir characterization, since various pore types play different roles in fluid transport and the pore spaces determine the fluid storage capacity of the reservoir. The present paper introduces a model for semi-automatic identification of porosity types within thin section images. To achieve this goal, a pattern recognition algorithm is followed. First, six geometrical shape parameters of the sixteen largest pores of each image are extracted using image analysis techniques. The extracted parameters and their corresponding pore types for 294 pores are used to train two intelligent discriminant classifiers, namely linear and quadratic discriminant analysis. The trained classifiers take the geometrical features of the pores to identify the type and percentage of five types of porosity, including interparticle, intraparticle, oomoldic, biomoldic, and vuggy, in each image. The accuracy of the classifiers is assessed from two standpoints. First, the predicted and measured percentages of each type of porosity are compared with each other; the results indicate reliable performance in predicting the percentage of each type of porosity. Second, the precision of the classifiers in categorizing the pore spaces is analyzed; the classifiers also scored highly when used for individual recognition of pore spaces. The proposed methodology offers petroleum geologists a promising tool for studying pore types in a rapid and accurate way.
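    Discriminant classification of pore shapes could be sketched as below using a nearest-class-mean rule, which is linear discriminant analysis under an identity-covariance simplification. The two features, class names, and training values are all hypothetical; the paper uses six shape parameters and full LDA/QDA.

    ```python
    def train_class_means(samples, labels):
        """Mean feature vector per pore-type class."""
        groups = {}
        for x, lab in zip(samples, labels):
            groups.setdefault(lab, []).append(x)
        return {lab: [sum(col) / len(col) for col in zip(*xs)]
                for lab, xs in groups.items()}

    def classify(x, means):
        """Assign a pore to the class whose mean is nearest in feature
        space (LDA with identity covariance)."""
        def d2(u, v):
            return sum((a - b) ** 2 for a, b in zip(u, v))
        return min(means, key=lambda lab: d2(x, means[lab]))

    # Two hypothetical shape features (roundness, elongation) per pore
    X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
    y = ["oomoldic", "oomoldic", "vuggy", "vuggy"]
    means = train_class_means(X, y)
    pred = classify([0.85, 0.15], means)
    ```

    Full LDA/QDA additionally estimate (per-class) covariance matrices, which lets the decision boundaries tilt or curve to match the training data.
    
    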

  12. Automated local bright feature image analysis of nuclear protein distribution identifies changes in tissue phenotype

    Energy Technology Data Exchange (ETDEWEB)

    Knowles, David; Sudar, Damir; Bator, Carol; Bissell, Mina

    2006-02-01

    The organization of nuclear proteins is linked to cell and tissue phenotypes. When cells arrest proliferation, undergo apoptosis, or differentiate, the distribution of nuclear proteins changes. Conversely, forced alteration of the distribution of nuclear proteins modifies cell phenotype. Immunostaining and fluorescence microscopy have been critical for such findings. However, there is an increasing need for quantitative analysis of nuclear protein distribution to decipher epigenetic relationships between nuclear structure and cell phenotype, and to unravel the mechanisms linking nuclear structure and function. We have developed imaging methods to quantify the distribution of fluorescently-stained nuclear protein NuMA in different mammary phenotypes obtained using three-dimensional cell culture. Automated image segmentation of DAPI-stained nuclei was generated to isolate thousands of nuclei from three-dimensional confocal images. Prominent features of fluorescently-stained NuMA were detected using a novel local bright feature analysis technique, and their normalized spatial density calculated as a function of the distance from the nuclear perimeter to its center. The results revealed marked changes in the distribution of the density of NuMA bright features as non-neoplastic cells underwent phenotypically normal acinar morphogenesis. In contrast, we did not detect any reorganization of NuMA during the formation of tumor nodules by malignant cells. Importantly, the analysis also discriminated proliferating non-neoplastic cells from proliferating malignant cells, suggesting that these imaging methods are capable of identifying alterations linked not only to the proliferation status but also to the malignant character of cells. We believe that this quantitative analysis will have additional applications for classifying normal and pathological tissues.
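    The radial-density measurement described here, the fraction of bright features as a function of normalised distance from the nuclear perimeter to the centre, could be sketched as a simple binned histogram. The bin count and feature positions below are assumptions for illustration.

    ```python
    def radial_density(feature_radii, n_bins=4):
        """Normalised density of bright features vs. distance from the
        nuclear perimeter (0.0) to the centre (1.0): fraction of all
        detected features falling in each radial bin."""
        counts = [0] * n_bins
        for r in feature_radii:
            idx = min(int(r * n_bins), n_bins - 1)  # clamp r == 1.0
            counts[idx] += 1
        total = len(feature_radii)
        return [c / total for c in counts]

    # Hypothetical normalised distances of NuMA bright features
    radii = [0.05, 0.10, 0.30, 0.55, 0.60, 0.62, 0.90, 0.95]
    density = radial_density(radii, n_bins=4)
    ```

    Comparing such density profiles between phenotypes is what lets the method detect NuMA redistribution during acinar morphogenesis.
    
    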

  13. Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays.

    Science.gov (United States)

    Galati, Domenico F; Abuin, David S; Tauber, Gabriel A; Pham, Andrew T; Pearson, Chad G

    2015-12-23

    Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional geometry (3D). Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs.

  14. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    Science.gov (United States)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

    Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when some features outside the ROI mimic the ones within it. Work described here discusses algorithms that are used to improve the cervical region of interest as a part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. Work presented here is focused on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation is used to detect cotton swabs in the cervical ROI. A dataset comprising 50 high-resolution images of the cervix acquired after 60 seconds of acetic acid application was used to test the algorithm. Out of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking the acetowhite region were eliminated.

  15. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    Science.gov (United States)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) (1 - 2%), were similar to or smaller than inter- and intra-observer COVs reported for manual segmentation.

  16. Automated analysis of images acquired with electronic portal imaging device during delivery of quality assurance plans for inversely optimized arc therapy

    DEFF Research Database (Denmark)

    Fredh, Anna; Korreman, Stine; Rosenschöld, Per Munck af

    2010-01-01

    This work presents an automated method for comprehensively analyzing EPID images acquired for quality assurance of RapidArc treatment delivery. In-house-developed software has been used for the analysis, and long-term results from measurements on three linacs are presented.

  17. Experimental saltwater intrusion in coastal aquifers using automated image analysis: Applications to homogeneous aquifers

    Science.gov (United States)

    Robinson, G.; Ahmed, Ashraf A.; Hamill, G. A.

    2016-07-01

    This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimising manual inputs and the subsequent systematic errors that can be introduced. This allowed the quantification of the width of the mixing zone which is difficult to measure in experimental methods that are based on visual observations. Glass beads of different grain sizes were tested for both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady state condition sooner while receding than advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits; a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle of intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, results of different physical repeats of the experiment produced an average coefficient of variation less than 0.18 of the measured toe length and width of the mixing zone.
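    The repeatability figure quoted at the end, a coefficient of variation below 0.18 across physical repeats, is computed as sample standard deviation over mean. The toe-length values below are invented for illustration.

    ```python
    import statistics

    def coefficient_of_variation(values):
        """CV = sample standard deviation / mean, used to quantify the
        repeatability of measured toe length and mixing-zone width."""
        return statistics.stdev(values) / statistics.mean(values)

    # Hypothetical toe lengths (cm) from repeated intrusion experiments
    toe_lengths = [10.0, 10.4, 9.8, 10.2]
    cv = coefficient_of_variation(toe_lengths)
    ```

    A CV well under 0.18 indicates that the automated image-analysis procedure produces consistent measurements across repeats.
    
    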

  18. An automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seong Hoon; Seo, Joon Beom; Kim, Nam Kug; Lee, Young Kyung; Kim, Song Soo; Chae, Eun Jin [University of Ulsan, College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, June Goo [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2007-07-15

    To develop an automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images, and to evaluate the accuracy and usefulness of the system. For textural analysis, histogram features, gradient features, run length encoding, and a co-occurrence matrix were employed. A Bayesian classifier was used for automated classification. The images (image number n = 256) were selected from the HRCT images obtained from 17 healthy subjects (n = 67), 26 patients with bronchiolitis obliterans (n = 70), 28 patients with mild centrilobular emphysema (n = 65), and 21 patients with panlobular emphysema or severe centrilobular emphysema (n = 63). A five-fold cross-validation method was used to assess the performance of the system. Class-specific sensitivities were analyzed and the overall accuracy of the system was assessed with kappa statistics. The sensitivity of the system for each class was as follows: normal lung 84.9%, bronchiolitis obliterans 83.8%, mild centrilobular emphysema 77.0%, and panlobular emphysema or severe centrilobular emphysema 95.8%. The overall performance for differentiating each disease and the normal lung was satisfactory with a kappa value of 0.779. An automated classification system for the differentiation between obstructive lung diseases based on the textural analysis of HRCT images was developed. The proposed system discriminates well between the various obstructive lung diseases and the normal lung.
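    One of the texture descriptors named above, the gray-level co-occurrence matrix, could be computed as sketched below for a single pixel offset. The two-level toy patch is an assumption; real HRCT analysis uses more gray levels and several offsets, and feeds the resulting statistics to the Bayesian classifier.

    ```python
    def cooccurrence(image, levels, dx=1, dy=0):
        """Normalised gray-level co-occurrence matrix for one offset:
        glcm[a][b] is the fraction of pixel pairs (p, q) at the given
        offset with gray levels a and b."""
        glcm = [[0] * levels for _ in range(levels)]
        rows, cols = len(image), len(image[0])
        total = 0
        for i in range(rows):
            for j in range(cols):
                ni, nj = i + dy, j + dx
                if 0 <= ni < rows and 0 <= nj < cols:
                    glcm[image[i][j]][image[ni][nj]] += 1
                    total += 1
        return [[c / total for c in row] for row in glcm]

    # A toy 2-level patch (0 = dark, 1 = bright)
    patch = [[0, 0, 1],
             [0, 1, 1]]
    glcm = cooccurrence(patch, levels=2)
    ```

    Scalar features such as contrast, energy, or homogeneity are then derived from this matrix and used as classifier inputs.
    
    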

  19. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    CERN Document Server

    Cluckie, A J

    2001-01-01

    Neurological images may be analysed by performing voxel by voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been eval...

  20. Analyzing and mining automated imaging experiments.

    Science.gov (United States)

    Berlage, Thomas

    2007-04-01

    Image mining is the application of computer-based techniques that extract and exploit information from large image sets to support human users in generating knowledge from these sources. This review focuses on biomedical applications of this technique, in particular automated imaging at the cellular level. Due to increasing automation and the availability of integrated instruments, biomedical users are becoming increasingly confronted with the problem of analyzing such data. Image database applications need to combine data management, image analysis and visual data mining. The main point of such a system is a software layer that represents objects within an image and the ability to use a large spectrum of quantitative and symbolic object features. Image analysis needs to be adapted to each particular experiment; therefore, 'end user programming' will be desired to make the technology more widely applicable.

  1. Automated classification of female facial beauty by image analysis and supervised learning

    Science.gov (United States)

    Gunes, Hatice; Piccardi, Massimo; Jan, Tony

    2004-01-01

    The fact that perception of facial beauty may be a universal concept has long been debated amongst psychologists and anthropologists. In this paper, we performed experiments to evaluate the extent of beauty universality by asking a number of diverse human referees to grade the same collection of female facial images. The results obtained show that the different individuals gave similar votes, thus well supporting the concept of beauty universality. We then trained an automated classifier using the human votes as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved shows that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry and plastic surgery.

  2. Automated quantification and sizing of unbranched filamentous cyanobacteria by model-based object-oriented image analysis.

    Science.gov (United States)

    Zeder, Michael; Van den Wyngaert, Silke; Köster, Oliver; Felder, Kathrin M; Pernthaler, Jakob

    2010-03-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-oriented image analysis to simultaneously determine (i) filament number, (ii) individual filament lengths, and (iii) the cumulative filament length of unbranched cyanobacterial morphotypes in fluorescent microscope images in a fully automated high-throughput manner. Special emphasis was placed on correct detection of overlapping objects by image analysis and on appropriate coverage of filament length distribution by using large composite images. The method was validated with a data set for Planktothrix rubescens from field samples and was compared with manual filament tracing, the line intercept method, and the Utermöhl counting approach. The computer program described allows batch processing of large images from any appropriate source and annotation of detected filaments. It requires no user interaction, is available free, and thus might be a useful tool for basic research and drinking water quality control.

  3. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Full Text Available Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs, but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantify the involvement of eosinophils in inflammation have been only based on cell counting, we developed a new method for the cell‐independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over 16 weeks observation time. The method may be valuable for the cell‐independent segmentation of immunostaining in other applications as well.
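The colour translation, linear combination, and automated thresholding chain described above can be sketched in a few lines; the channel weights, threshold, and image below are invented for illustration only:

```python
import numpy as np

# Hypothetical field: a reddish immunostained patch on a dark background.
rgb = np.zeros((5, 5, 3))
rgb[1:4, 1:4] = [200.0, 60.0, 60.0]

weights = np.array([1.0, -0.5, -0.5])   # invented colour-translation weights
stain = rgb @ weights                   # per-pixel stain intensity
mask = stain > 50.0                     # automated threshold (fixed here)
stained_fraction = mask.mean()          # cell-independent amount of deposition
```

The stained fraction per field is the kind of cell-independent quantity the assay reports, without any eosinophil counting.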

  4. A Novel Automated High-Content Analysis Workflow Capturing Cell Population Dynamics from Induced Pluripotent Stem Cell Live Imaging Data

    Science.gov (United States)

    Kerz, Maximilian; Folarin, Amos; Meleckyte, Ruta; Watt, Fiona M.; Dobson, Richard J.; Danovi, Davide

    2016-01-01

    Most image analysis pipelines rely on multiple channels per image with subcellular reference points for cell segmentation. Single-channel phase-contrast images are often problematic, especially for cells with unfavorable morphology, such as induced pluripotent stem cells (iPSCs). Live imaging poses a further challenge, because of the introduction of the dimension of time. Evaluations cannot be easily integrated with other biological data sets including analysis of endpoint images. Here, we present a workflow that incorporates a novel CellProfiler-based image analysis pipeline enabling segmentation of single-channel images with a robust R-based software solution to reduce the dimension of time to a single data point. These two packages combined allow robust segmentation of iPSCs solely on phase-contrast single-channel images and enable live imaging data to be easily integrated to endpoint data sets while retaining the dynamics of cellular responses. The described workflow facilitates characterization of the response of live-imaged iPSCs to external stimuli and definition of cell line–specific, phenotypic signatures. We present an efficient tool set for automated high-content analysis suitable for cells with challenging morphology. This approach has potentially widespread applications for human pluripotent stem cells and other cell types. PMID:27256155

  5. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

    Full Text Available Fungal morphogenesis is an exciting field of cell biology and several mathematical models have been developed to describe it. These models require experimental evidence to be corroborated and, therefore, there is a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rates and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual-based tracing techniques carried out on the same species or genus. Thus, we show that application of an edge-detector-based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis.
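As an illustration of the gradient stage underlying edge detection of this kind, here is a minimal Sobel gradient-magnitude sketch on a synthetic "hypha" image (the full Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this):

```python
import numpy as np

def sobel_magnitude(img):
    # Sobel gradient magnitude: the gradient stage that the Canny detector
    # builds on (Canny adds smoothing, non-maximum suppression, hysteresis).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = pad[r:r + 3, c:c + 3]
            gx[r, c] = (win * kx).sum()
            gy[r, c] = (win * ky).sum()
    return np.hypot(gx, gy)

img = np.zeros((7, 7))
img[:, 3] = 1.0                 # a one-pixel-wide vertical "hypha"
mag = sobel_magnitude(img)      # strong response on both flanks of the hypha
```

The detected contour pixels are what profile-based measurements such as diameter and elongation rate would be derived from.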

  6. An Improved Method for Measuring Quantitative Resistance to the Wheat Pathogen Zymoseptoria tritici Using High-Throughput Automated Image Analysis.

    Science.gov (United States)

    Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A

    2016-07-01

    Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
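The simplest of the measures listed, percent leaf area covered by lesions, reduces to a threshold and a pixel count. A toy sketch (the threshold and image are invented; the paper's ImageJ macro performs a more elaborate segmentation):

```python
import numpy as np

# Hypothetical gray-scale leaf scan: low values = lesion, 255 = healthy tissue.
leaf = np.full((10, 10), 255)
leaf[2:5, 2:8] = 40                 # one necrotic lesion
lesion_mask = leaf < 128            # fixed threshold stands in for segmentation
percent_lesion = 100.0 * lesion_mask.sum() / leaf.size
```

Pycnidia density per lesion would follow the same pattern, dividing a count of detected pycnidia by the lesion area.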

  7. Application of automated methodologies based on digital images for phenological behaviour analysis in Mediterranean species

    Science.gov (United States)

    Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo; Granados, Joel

    2015-04-01

    The importance of phenological research for understanding the consequences of global environmental change on vegetation is highlighted in the most recent IPCC reports. Collecting time series of phenological events appears to be of crucial importance to better understand how vegetation systems respond to climatic regime fluctuations and, consequently, to develop effective management and adaptation strategies. Vegetation monitoring based on "near-surface" remote sensing techniques has been proposed in recent research. In particular, the use of digital cameras has become more common for phenological monitoring. Digital images provide spectral information in the red, green, and blue (RGB) wavelengths. Inflection points in the seasonal variations of the intensities of each color channel can be used to identify phenological events. In this research, an Automated Phenological Observation System (APOS), based on digital image sensors, was used for monitoring the phenological behavior of shrubland species at a Mediterranean site. The major species of the shrubland ecosystem that were analyzed were Cistus monspeliensis L., Cistus incanus L., Rosmarinus officinalis L., Pistacia lentiscus L., and Pinus halepensis Mill. The system was developed under the INCREASE (an Integrated Network on Climate Change Research) EU-funded research infrastructure project, which is based upon large-scale field experiments with non-intrusive climatic manipulations. Monitoring of phenological behavior was conducted during 2012-2014. To retrieve phenological information from the digital images, a routine of commands to process the image files was created in MATLAB (R2013b, The MathWorks, Natick, Mass.). The images of the dataset were re-classified and the files renamed according to the date and time of acquisition.
The analysis was focused on regions of interest (ROIs) of the panoramas acquired, defined by the presence of the most representative species of
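A common way to turn RGB camera images into a phenological signal is the green chromatic coordinate, gcc = G/(R+G+B), averaged over a region of interest. The paper works with per-channel intensities, so gcc is shown here only as an assumed, representative index:

```python
import numpy as np

def gcc(img_rgb):
    # Green chromatic coordinate G/(R+G+B), averaged over a region of interest.
    r = img_rgb[..., 0].mean()
    g = img_rgb[..., 1].mean()
    b = img_rgb[..., 2].mean()
    return g / (r + g + b)

# Synthetic "green-up" vs dormant-canopy patches:
spring = np.zeros((4, 4, 3))
spring[..., 0], spring[..., 1], spring[..., 2] = 50, 200, 50
winter = np.full((4, 4, 3), 100.0)
g_spring, g_winter = gcc(spring), gcc(winter)   # greener canopy -> higher gcc
```

Tracking such an index per ROI over the season, and locating its inflection points, yields the phenological event dates.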

  8. Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2011-11-01

    Full Text Available The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistics (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) values for each object and Fourier Descriptor (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the better results: higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent

  9. Automated image analysis for the detection of benthic crustaceans and bacterial mat coverage using the VENUS undersea cabled network.

    Science.gov (United States)

    Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S Kim; Menesatti, Paolo

    2011-01-01

    The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistic (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) value for each object and Fourier Descriptors (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the better results. Higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. 
Bacterial mat coverage was estimated in terms of Percent Coverage
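The Euclidean-distance (ED) classification on mean RGB values can be sketched as a nearest-prototype rule; the colour prototypes below are invented for illustration:

```python
import numpy as np

# Mean RGB value of each tracked object (RGBv) is assigned to the class whose
# colour prototype is nearest in Euclidean distance; prototypes are invented.
prototypes = {
    "Munida": np.array([180.0, 120.0, 90.0]),
    "background": np.array([30.0, 40.0, 60.0]),
}

def classify(rgbv):
    return min(prototypes, key=lambda name: np.linalg.norm(rgbv - prototypes[name]))

label_a = classify(np.array([170.0, 115.0, 95.0]))
label_b = classify(np.array([25.0, 45.0, 55.0]))
```

As the abstract notes, such purely colour-based rules tend to produce false detections, which is why the SIFT-based approach performed better.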

  10. Automated Orientation of Aerial Images

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2002-01-01

    Methods for automated orientation of aerial images are presented. They are based on the use of templates, which are derived from existing databases, and area-based matching. The characteristics of available database information and the accuracy requirements for map compilation and orthoimage...
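Area-based matching of this kind typically scores candidate template positions with normalized cross-correlation. A minimal sketch (synthetic image; the template is cut from the image itself so its true offset is known):

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equal-size patches,
    # the similarity score at the heart of area-based matching.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
image = rng.random((6, 6))
template = image[2:5, 1:4]          # cut from the image: true offset is (2, 1)
scores = {(r, c): ncc(image[r:r + 3, c:c + 3], template)
          for r in range(4) for c in range(4)}
best = max(scores, key=scores.get)  # offset with the highest correlation
```

In the orientation problem above, the templates would come from existing map databases rather than from the image itself.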

  11. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Directory of Open Access Journals (Sweden)

    Aleksandar Bogovac

    2010-02-01

    Full Text Available The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangements and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, i.e., the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in steps as follows: 1. The individual image quality has to be measured, and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist's duty will not be released by such a system; to the contrary, the pathologist will manage and supervise the system, i.e., just working at a "higher level".
Virtual slides are already in use for teaching and

  12. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Science.gov (United States)

    Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian

    2009-01-01

    The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. To introduce digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangements and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, i.e., the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in steps as follows: 1. The individual image quality has to be measured, and corrected, if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist's duty will not be released by such a system; to the contrary, the pathologist will manage and supervise the system, i.e., just working at a "higher level". Virtual slides are already in use for teaching and continuous

  13. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    Science.gov (United States)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
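The Dice score used in the validation above measures the overlap of two binary masks, 2|A∩B|/(|A|+|B|). A minimal sketch on toy masks:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient 2|A n B| / (|A| + |B|) for binary masks.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1        # automated segmentation: 16 pixels
manual = np.zeros((8, 8), dtype=int)
manual[3:7, 2:6] = 1      # manual segmentation, shifted one row: 16 pixels
score = dice(auto, manual)
```

A score of 1.0 means identical masks; the 0.89 and 0.91 reported above indicate close 3-D agreement between automated and manual segmentations.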

  14. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    Science.gov (United States)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution will help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges like the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework using Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and the performance against ground truth is measured by computing the Dice overlap and the % error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of cases.
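Haar features like those used in the texture step are box-sum differences, computed quickly from an integral image. A minimal sketch (synthetic image; the feature layout is illustrative):

```python
import numpy as np

def integral(img):
    # Summed-area table, zero-padded so box sums need no bounds checks.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in O(1) via the integral image.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_edge(ii, r0, c0, h, w):
    # Two-rectangle (left vs right) Haar feature.
    half = w // 2
    return box(ii, r0, c0, r0 + h, c0 + half) - box(ii, r0, c0 + half, r0 + h, c0 + w)

img = np.zeros((6, 6))
img[:, :3] = 1.0                      # bright left half, dark right half
ii = integral(img)
feature = haar_edge(ii, 0, 0, 6, 6)   # responds strongly to the vertical edge
```

Many such features, evaluated at many positions and scales, would form the input vector for a gradient boosting classifier.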

  15. Fast-FISH Detection and Semi-Automated Image Analysis of Numerical Chromosome Aberrations in Hematological Malignancies

    Directory of Open Access Journals (Sweden)

    Arif Esa

    1998-01-01

    Full Text Available A new fluorescence in situ hybridization (FISH) technique called Fast-FISH, in combination with semi-automated image analysis, was applied to detect numerical aberrations of chromosomes 8 and 12 in interphase nuclei of peripheral blood lymphocytes and bone marrow cells from patients with acute myelogenous leukemia (AML) and chronic lymphocytic leukemia (CLL). Commercially available α-satellite DNA probes specific for the centromere regions of chromosome 8 and chromosome 12, respectively, were used. After application of the Fast-FISH protocol, the microscopic images of the fluorescence-labelled cell nuclei were recorded by the true color CCD camera Kappa CF 15 MC and evaluated quantitatively by computer analysis on a PC. These results were compared to results obtained from the same type of specimens using the same analysis system but with a standard FISH protocol. In addition, automated spot counting after both FISH techniques was compared to visual spot counting after standard FISH. A total number of about 3,000 cell nuclei was evaluated. For quantitative brightness parameters, a good correlation between standard FISH labelling and Fast-FISH was found. Automated spot counting after Fast-FISH coincided within a few percent with automated and visual spot counting after standard FISH. The examples shown indicate the reliability and reproducibility of Fast-FISH and its potential for automated interphase cell diagnostics of numerical chromosome aberrations. Since the Fast-FISH technique requires a hybridization time as low as 1/20 of that of established standard FISH techniques, omitting most of the time-consuming working steps in the protocol, it may contribute considerably to clinical diagnostics. This may especially be interesting in cases where an accurate result is required within a few hours.
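Automated spot counting ultimately reduces to counting connected components in a thresholded nucleus image. A minimal stand-in sketch (synthetic mask; the published system additionally applies quantitative brightness criteria):

```python
import numpy as np
from collections import deque

def count_spots(mask):
    # Count 4-connected components of True pixels (one component ~ one FISH spot).
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    n = 0
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        n += 1
        queue = deque([(r, c)])
        seen[r, c] = True
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return n

nucleus = np.zeros((8, 8), dtype=bool)
nucleus[1:3, 1:3] = True   # first centromere signal
nucleus[5:7, 4:6] = True   # second centromere signal
n_spots = count_spots(nucleus)
```

Two spots per nucleus is the expected disomic count; a consistent excess (e.g., three spots) flags a numerical aberration such as trisomy.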

  16. Automated image analysis to quantify the subnuclear organization of transcriptional coregulatory protein complexes in living cell populations

    Science.gov (United States)

    Voss, Ty C.; Demarco, Ignacio A.; Booker, Cynthia F.; Day, Richard N.

    2004-06-01

    Regulated gene transcription is dependent on the steady-state concentration of DNA-binding and coregulatory proteins assembled in distinct regions of the cell nucleus. For example, several different transcriptional coactivator proteins, such as the Glucocorticoid Receptor Interacting Protein (GRIP), localize to distinct spherical intranuclear bodies that vary from approximately 0.2-1 micron in diameter. We are using multi-spectral wide-field microscopy of cells expressing coregulatory proteins labeled with the fluorescent proteins (FP) to study the mechanisms that control the assembly and distribution of these structures in living cells. However, variability between cells in the population makes an unbiased and consistent approach to this image analysis absolutely critical. To address this challenge, we developed a protocol for rigorous quantification of subnuclear organization in cell populations. Cells transiently co-expressing a green FP (GFP)-GRIP and the monomeric red FP (mRFP) are selected for imaging based only on the signal in the red channel, eliminating bias due to knowledge of coregulator organization. The impartially selected images of the GFP-coregulatory protein are then analyzed using an automated algorithm to objectively identify and measure the intranuclear bodies. By integrating all these features, this combination of unbiased image acquisition and automated analysis facilitates the precise and consistent measurement of thousands of protein bodies from hundreds of individual living cells that represent the population.

  17. Breast Density Analysis with Automated Whole-Breast Ultrasound: Comparison with 3-D Magnetic Resonance Imaging.

    Science.gov (United States)

    Chen, Jeon-Hor; Lee, Yan-Wei; Chan, Si-Wa; Yeh, Dah-Cherng; Chang, Ruey-Feng

    2016-05-01

    In this study, a semi-automatic breast segmentation method was proposed on the basis of the rib shadow to extract breast regions from 3-D automated whole-breast ultrasound (ABUS) images. The density results were correlated with breast density values acquired with 3-D magnetic resonance imaging (MRI). MRI images of 46 breasts were collected from 23 women without a history of breast disease. Each subject also underwent ABUS. We used Otsu's thresholding method on ABUS images to obtain local rib shadow information, which was combined with the global rib shadow information (extracted from all slice projections) and integrated with the anatomy's breast tissue structure to determine the chest wall line. The fuzzy C-means classifier was used to extract the fibroglandular tissues from the acquired images. Whole-breast volume (WBV) and breast percentage density (BPD) were calculated in both modalities. Linear regression was used to compute the correlation of density results between the two modalities. The consistency of density measurement was also analyzed on the basis of intra- and inter-operator variation. There was a high correlation of density results between MRI and ABUS (R(2) = 0.798 for WBV, R(2) = 0.825 for BPD). The mean WBV from ABUS images was slightly smaller than the mean WBV from MR images (MRI: 342.24 ± 128.08 cm(3); ABUS: 325.47 ± 136.16 cm(3)), whereas the mean BPD from ABUS was slightly larger (MRI: 24.71 ± 15.16%; ABUS: 28.90 ± 17.73%). Breast density measurement variation between the two modalities was small. Our results revealed a high correlation in WBV and BPD between MRI and ABUS. Our study suggests that ABUS provides breast density information useful in the assessment of breast health.
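Otsu's thresholding method, used above to extract the rib shadow information, picks the intensity threshold that maximizes between-class variance of the histogram. A minimal sketch on synthetic bimodal data:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    # Otsu's method: choose the threshold maximizing between-class variance.
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                  # class-0 (below threshold) weight
    w1 = 1.0 - w0                      # class-1 weight
    mu = np.cumsum(p * centers)        # cumulative mean
    mu_t = mu[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]

# Synthetic bimodal intensities: dark rib shadow vs brighter tissue.
pixels = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(pixels)
shadow_mask = pixels <= t
```

The resulting mask of dark pixels is the kind of local shadow evidence that the method combines with global projections to locate the chest wall line.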

  18. Immunohistochemical Ki-67/KL1 double stains increase accuracy of Ki-67 indices in breast cancer and simplify automated image analysis

    DEFF Research Database (Denmark)

    Nielsen, Patricia S; Bentzer, Nina K; Jensen, Vibeke

    2014-01-01

    observers and automated image analysis. RESULTS: Indices were predominantly higher for single stains than for double stains (P ≤ 0.002), yet the difference between observers was statistically significant. The Pearson correlation coefficient for manual and automated indices ranged from 0.69 to 0.85. Automated indices correlated with tumor characteristics, for example, tumor size. ... stains, Ki-67 should be quantified on double stains to reach a higher accuracy. Automated indices correlated well with manual estimates and tumor characteristics, and they are thus possibly valuable tools in future exploration of Ki-67 in breast cancer.

  19. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  20. Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN.

    Directory of Open Access Journals (Sweden)

    Timm Schoening

    Full Text Available Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive and therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogenous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreements of human experts exhibit considerable variation between the species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e. g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i. e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS.

  1. Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN.

    Science.gov (United States)

    Schoening, Timm; Bergmann, Melanie; Ontrup, Jörg; Taylor, James; Dannheim, Jennifer; Gutt, Julian; Purser, Autun; Nattkemper, Tim W

    2012-01-01

    Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive and therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogenous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreements of human experts exhibit considerable variation between the species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e. g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i. e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS.

  2. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Science.gov (United States)

    Collette, R.; King, J.; Buesch, C.; Keiser, D. D.; Williams, W.; Miller, B. D.; Schulthess, J.

    2016-07-01

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program.

  3. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

    Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral brain tissue images surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels of 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  4. A simple viability analysis for unicellular cyanobacteria using a new autofluorescence assay, automated microscopy, and ImageJ

    Directory of Open Access Journals (Sweden)

    Schulze Katja

    2011-11-01

    Full Text Available Abstract Background Currently established methods to identify viable and non-viable cells of cyanobacteria are either time-consuming (e.g. plating) or preparation-intensive (e.g. fluorescent staining). In this paper we present a new and fast viability assay for unicellular cyanobacteria, which uses red chlorophyll fluorescence and an unspecific green autofluorescence for the differentiation of viable and non-viable cells without the need of sample preparation. Results The viability assay for unicellular cyanobacteria using red and green autofluorescence was established and validated for the model organism Synechocystis sp. PCC 6803. Both autofluorescence signals could be observed simultaneously, allowing a direct classification of viable and non-viable cells. The results were confirmed by plating/colony count, absorption spectra and chlorophyll measurements. The use of an automated fluorescence microscope and a novel ImageJ-based image analysis plugin allows a semi-automated analysis. Conclusions The new method simplifies the process of viability analysis and allows a quick and accurate analysis. Furthermore, the results indicate that a combination of the new assay with absorption spectra or chlorophyll concentration measurements allows the estimation of the vitality of cells.

  5. Automated Classification Of Scanning Electron Microscope Particle Images Using Morphological Analysis

    Science.gov (United States)

    Lamarche, B. L.; Lewis, R. R.; Girvin, D. C.; McKinley, J. P.

    2008-12-01

    We are developing a software tool that can automatically classify anthropogenic and natural aerosol particulates using morphological analysis. Our method was developed using SEM (background and secondary electron) images of single particles. Particle silhouettes are detected and converted into polygons using Intel's OpenCV image processing library. Our analysis then proceeds independently for the two kinds of images. Analysis of secondary images concerns itself solely with the silhouette and seeks to quantify its shape and roughness. Traversing the polygon with spline interpolation, we uniformly sample k(s), the signed curvature of the silhouette's path as a function of distance along the perimeter s. k(s) is invariant under rotation and translation. The power spectrum of k(s) qualitatively shows both shape and roughness: more power at low frequencies indicates variation in shape; more power at higher frequencies indicates a rougher silhouette. We present a series of filters (low-, band-, and high-pass) which we convolve with k(s) to yield a set of parameters that characterize the shape and roughness numerically. Analysis of backscatter images focuses on the (visual) texture, which is the result of both composition and geometry. Using the silhouette as a boundary, we compute the variogram, a statistical measure of inter-pixel covariance as a function of distance. Variograms take on characteristic curves, which we fit with a heuristic, asymptotic function that uses a small set of parameters. The combination of silhouette and variogram fit parameters forms the basis of a multidimensional classification space whose dimensionality we may reduce by principal component analysis and whose region boundaries allow us to classify new particles. This analysis is performed without a priori knowledge of other physical, chemical, or climatic properties. The method will be adapted to multi-particulate images.
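The signed curvature k(s) described above can be approximated on a sampled polygon as the turning angle at each vertex divided by the local arc length; its power spectrum then separates overall shape (low frequency) from silhouette roughness (high frequency). A sketch of the curvature step under that assumption (not the authors' implementation):

```python
import math

def signed_curvature(points):
    """Discrete signed curvature k(s) at each vertex of a closed polygon:
    exterior turning angle divided by the mean length of the adjacent edges."""
    n = len(points)
    ks = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turn = math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))  # wrap to (-pi, pi]
        ds = 0.5 * (math.hypot(x1 - x0, y1 - y0) + math.hypot(x2 - x1, y2 - y1))
        ks.append(turn / ds)
    return ks

# sanity check: a 100-gon approximating a circle of radius 5 has k ~ 1/5 everywhere,
# and k(s) is unchanged if the silhouette is rotated or translated
circle = [(5 * math.cos(2 * math.pi * i / 100), 5 * math.sin(2 * math.pi * i / 100))
          for i in range(100)]
ks = signed_curvature(circle)
```

The resulting k(s) samples would then be fed to an FFT (or the band-pass filter bank described above) to obtain the shape and roughness parameters.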

  6. Automated spectral imaging for clinical diagnostics

    Science.gov (United States)

    Breneman, John; Heffelfinger, David M.; Pettipiece, Ken; Tsai, Chris; Eden, Peter; Greene, Richard A.; Sorensen, Karen J.; Stubblebine, Will; Witney, Frank

    1998-04-01

    Bio-Rad Laboratories supplies imaging equipment for many applications in the life sciences. As part of our effort to offer more flexibility to the investigator, we are developing a microscope-based imaging spectrometer for the automated detection and analysis of either conventionally or fluorescently labeled samples. Immediate applications will include the use of fluorescence in situ hybridization (FISH) technology. The field of cytogenetics has benefited greatly from the increased sensitivity of FISH, which has simplified the analysis of complex chromosomal rearrangements. FISH-based identification lends itself to automation more easily than G-banding, the current cytogenetics industry standard; the two methods are, however, complementary. Several technologies have been demonstrated successfully for analyzing the signals from labeled samples, including filter exchanging and interferometry. The detection system lends itself to other fluorescent applications, including the display of labeled tissue sections, DNA chips, capillary electrophoresis or any other system using color as an event marker. Enhanced displays of conventionally stained specimens will also be possible.

  7. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis

    Directory of Open Access Journals (Sweden)

    Joshua D Webster

    2012-01-01

    Full Text Available The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and a tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily on the smallest measurements of tumor burden. Reproducibility was high for both methods (0.988), yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.
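Bland-Altman analysis, as used above, reduces to the mean of the paired between-methods differences (the bias) and its 1.96·SD limits of agreement. A minimal sketch with hypothetical paired measurements (not the study's data):

```python
def bland_altman(a, b):
    """Mean bias and 95% limits of agreement for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# two hypothetical methods that disagree by a constant offset of 1 unit:
# bias is 1.0 and, with zero spread, both limits of agreement collapse onto it
bias, lo, hi = bland_altman([1, 2, 3, 4], [0, 1, 2, 3])  # -> (1.0, 1.0, 1.0)
```

Heteroscedasticity, as reported above, shows up when the spread of the differences grows with the magnitude of the measurements.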

  8. Bright field microscopy as an alternative to whole cell fluorescence in automated analysis of macrophage images.

    Directory of Open Access Journals (Sweden)

    Jyrki Selinummi

    Full Text Available BACKGROUND: Fluorescence microscopy is the standard tool for detection and analysis of cellular phenomena. This technique, however, has a number of drawbacks such as the limited number of available fluorescent channels in microscopes, overlapping excitation and emission spectra of the stains, and phototoxicity. METHODOLOGY: We here present and validate a method to automatically detect cell population outlines directly from bright field images. By imaging samples at several focus levels to form a bright field z-stack, and by measuring the intensity variations of this stack over the z-dimension, we construct a new two-dimensional projection image of increased contrast. With additional information on the location of each cell, such as stained nuclei, this bright field projection image can be used instead of whole cell fluorescence to locate the borders of individual cells, separating touching cells and enabling single cell analysis. Using the popular CellProfiler freeware cell image analysis software, mainly targeted at fluorescence microscopy, we validate our method by automatically segmenting low-contrast and rather complex-shaped murine macrophage cells. SIGNIFICANCE: The proposed approach frees up a fluorescence channel, which can be used for subcellular studies. It also facilitates cell shape measurement in experiments where whole cell fluorescent staining is either not available, or is dependent on a particular experimental condition. We show that whole cell area detection results using our projected bright field images match closely the standard approach where cell areas are localized using fluorescence, and conclude that the high contrast bright field projection image can directly replace one fluorescent channel in whole cell quantification. Matlab code for calculating the projections can be downloaded from the supplementary site: http://sites.google.com/site/brightfieldorstaining.
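The contrast projection described above (per-pixel intensity variation across the focus stack) amounts to a standard deviation over the z-dimension. A toy-scale sketch of that step (illustrative only; the authors' released code is in Matlab):

```python
def contrast_projection(stack):
    """Per-pixel standard deviation over the z (focus) dimension of a stack."""
    nz = len(stack)
    rows, cols = len(stack[0]), len(stack[0][0])
    proj = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [stack[z][r][c] for z in range(nz)]
            mean = sum(vals) / nz
            proj[r][c] = (sum((v - mean) ** 2 for v in vals) / nz) ** 0.5
    return proj

# 4 focal planes of a 1x2 image: a flat background pixel projects to 0,
# a pixel whose intensity varies with focus projects to a high value
proj = contrast_projection([[[2, 0]], [[2, 0]], [[2, 6]], [[2, 6]]])  # -> [[0.0, 3.0]]
```

Pixels belonging to cells change appearance across focus levels while background stays flat, which is why this projection raises cell-versus-background contrast.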

  9. Optimizing object-based image analysis for semi-automated geomorphological mapping

    NARCIS (Netherlands)

    Anders, N.; Smith, M.; Seijmonsbergen, H.; Bouten, W.; Hengl, T.; Evans, I.S.; Wilson, J.P.; Gould, M.

    2011-01-01

    Object-Based Image Analysis (OBIA) is considered a useful tool for analyzing high-resolution digital terrain data. In the past, both segmentation and classification parameters were optimized manually by trial and error. We propose a method to automatically optimize classification parameters for incr

  10. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis

    Science.gov (United States)

    Tameem, Hussain Z.; Sinha, Usha S.

    2011-01-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520

  11. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis.

    Science.gov (United States)

    Tameem, Hussain Z; Sinha, Usha S

    2007-01-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features.

  12. Mathematical morphology for automated analysis of remotely sensed objects in radar images

    Science.gov (United States)

    Daida, Jason M.; Vesecky, John F.

    1991-01-01

    A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis has resulted in a low (2.6 percent) misclassification error rate for a one-look simulation. Other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion has resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images: in this case, a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation seem well suited for unsupervised geophysical analysis.

  13. Automated Image Analysis in Undetermined Sections of Human Permanent Third Molars

    DEFF Research Database (Denmark)

    Bjørndal, Lars; Darvann, Tron Andre; Bro-Nielsen, Morten

    1997-01-01

    A computerized histomorphometric analysis was made of Karnovsky-fixed, hydroxethylmethacrylate embedded and toluidine blue/pyronin-stained sections to determine: (1) the two-dimensional size of the coronal odontoblasts given by their cytoplasm:nucleus ratio; (2) the ratio between the number of co...... sectioning profiles should be analysed. The use of advanced image processing on undemineralized tooth sections provides a rational foundation for further work on the reactions of the odontoblasts to external injuries including dental caries....

  14. Automating the Analysis of Spatial Grids A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets increase in volume and frequency. Whether in business, social science, ecology, meteorology or urban planning, the ability to create automated applications to analyze and detect patterns in geospatial data is increasingly important. This book provides students with a foundation in topics of digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite or high-resolution survey imagery.

  15. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    Science.gov (United States)

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
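One of the measurements described above, string length, reduces to summing the distances between consecutive platelet centroids along a detected string. A sketch with hypothetical pixel coordinates (not the authors' software):

```python
import math

def string_length(platelets):
    """Total polyline length through consecutive platelet centroids (pixels)."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(platelets, platelets[1:]))

# three platelets along a string: a 3-4-5 triangle leg plus a vertical run
length = string_length([(0, 0), (3, 4), (3, 10)])  # -> 11.0
```

The same pairwise distances directly give the inter-platelet spacing figures mentioned above, and the platelet count per string is just the number of centroids.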

  16. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2016-07-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time-consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to the automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  17. Automated stent strut coverage and apposition analysis of in-vivo intra coronary optical coherence tomography images

    Science.gov (United States)

    Ughi, Giovanni J.; Adriaenssens, Tom; Onsea, Kevin; Kayaert, Peter; Dubois, Christophe; Coosemans, Mark; Sinnaeve, Peter; Desmet, Walter; D'hooge, Jan

    2011-03-01

    Several studies have proven that intra-vascular OCT is an appropriate imaging modality for evaluating stent strut apposition and coverage in coronary arteries. Currently, image processing is performed manually, resulting in a very time-consuming and labor-intensive procedure. We propose an algorithm for fully automatic analysis of individual stent strut apposition and coverage in coronary arteries. The vessel lumen and stent struts are automatically detected and segmented through analysis of the intensity profiles of the A-scan lines. From these data, apposition and coverage can then be estimated automatically. The algorithm was validated using manual measurements (performed by two trained cardiologists) as a reference. 108 images were taken at random from in-vivo pullbacks from 9 different patients, presenting 'real-life' situations (i.e. blood residue, small luminal objects and artifacts). High Pearson's correlation coefficients (R = 0.95-0.96) were found between the automated and manual measurements, while Bland-Altman statistics showed no significant bias with good limits of agreement. As such, it was shown that the presented algorithm provides a robust and fast tool to automatically estimate apposition and coverage of stent struts in in-vivo pullbacks. This will be important for the integration of this technology in clinical routine and large clinical trials.
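Strut detection from A-scan intensity profiles, as described above, can be sketched as finding local maxima above a reflectivity threshold along each line; the function name, threshold, and synthetic profile below are illustrative, not the authors' algorithm:

```python
def find_strut_peaks(profile, min_height):
    """Indices of local maxima above min_height along one A-scan line."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] >= min_height
                and profile[i] > profile[i - 1]
                and profile[i] >= profile[i + 1]):
            peaks.append(i)
    return peaks

# a synthetic A-scan: two bright reflections (candidate struts) over dim tissue
peaks = find_strut_peaks([0, 1, 5, 1, 0, 2, 8, 2, 0], min_height=4)  # -> [2, 6]
```

Once strut and lumen positions are known along each A-scan, apposition and coverage follow from the radial distances between them.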

  18. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Full Text Available Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM) test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and on information from grey-level co-occurrence matrices. The results from scoring groups of particles within the phantom are presented.
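A grey-level co-occurrence matrix, as used above, counts how often pairs of gray levels co-occur at a fixed pixel offset; Haralick-style statistics such as contrast are then computed from the normalized matrix. A minimal sketch on toy images (illustrative, not the paper's scoring scheme):

```python
def glcm(image, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                i, j = image[r][c], image[r2][c2]
                m[i][j] += 1
                m[j][i] += 1
                pairs += 2
    return [[v / pairs for v in row] for row in m]

def haralick_contrast(m):
    """Sum of (i - j)^2 * p(i, j): 0 for a flat image, larger for busy texture."""
    n = len(m)
    return sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n))

flat = haralick_contrast(glcm([[0, 0], [0, 0]], levels=2))   # -> 0.0
busy = haralick_contrast(glcm([[0, 1], [1, 0]], levels=2))   # -> 1.0
```

Real implementations (e.g. `skimage.feature.graycomatrix`) compute these matrices over several offsets and angles and derive multiple Haralick features at once.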

  19. Assessing automated image analysis of sand grain shape to identify sedimentary facies, Gran Dolina archaeological site (Burgos, Spain)

    Science.gov (United States)

    Campaña, I.; Benito-Calvo, A.; Pérez-González, A.; Bermúdez de Castro, J. M.; Carbonell, E.

    2016-12-01

    Gran Dolina is a cave (Sierra de Atapuerca, Spain) infilled by a 25 m thick sedimentary record, divided into 12 lithostratigraphic units that have been separated into 19 sedimentary facies containing Early and Middle Pleistocene hominin remains. In this paper, an automated image analysis method has been used to study the shape of the sedimentary particles. Since particle shape is interpreted as the result of sedimentary transport and sediment source, this study can provide valuable data about the sedimentological mechanism of sequence formation. The shape of the sand fraction in 73 samples from the Gran Dolina site and Sierra de Atapuerca was analyzed using the Malvern Morphologi G3, an advanced particle characterization tool. In this first complete test, we applied this method to the published sequence of Gran Dolina, defined previously through fieldwork observations and geochemical and textural analysis. The results indicate that this image analysis method allows differentiation of the sedimentary facies, providing objective tools to identify weathered layers and measure the textural maturity of the sediments. Channel facies have the highest values of circularity and convexity, showing the highest textural maturity of particles. On the other hand, terra rossa and debris flow samples show similar values, with the lowest particle maturity.
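    Shape descriptors such as circularity underlie the maturity comparison above. A minimal sketch of one common circularity definition follows; instrument vendors vary in the exact formula, so treat this as illustrative.

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for rougher
    outlines. (Definitions differ between instruments; this is one
    common form, not necessarily the Morphologi G3's.)"""
    return 4 * math.pi * area / perimeter ** 2

# A circle scores 1.0; a square of comparable size scores lower,
# mirroring how well-rounded channel-facies grains score highest.
r = 3.0
circle = circularity(math.pi * r * r, 2 * math.pi * r)
square = circularity(16.0, 16.0)  # 4x4 square: area 16, perimeter 16
print(round(circle, 3), round(square, 3))
```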

  20. Digital Rocks Portal: a sustainable platform for imaged dataset sharing, translation and automated analysis

    Science.gov (United States)

    Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.

    2015-12-01

    Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets, both due to their size and due to the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large-scale porous media properties, as well as development of the multiscale approaches required for correct upscaling. A single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal, which (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geoscience and engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research

  1. Fully automated extraction and analysis of surface Urban Heat Island patterns from moderate resolution satellite images

    Science.gov (United States)

    Keramitsoglou, I.; Kiranoudis, C. T.

    2012-04-01

    Comparison of thermal patterns across different cities is hampered by the lack of an appropriate methodology to extract the patterns and characterize them. Moreover, the urban climate community has paid increased attention to assessing the magnitude and dynamics of the surface Urban Heat Island (SUHI) effect and to identifying the environmental impacts of large cities and "megacities". Motivated by this need, we propose an innovative object-based image analysis procedure to extract thermal patterns for the quantitative analysis of satellite-derived land surface temperature (LST) maps. The spatial and thermal attributes associated with these objects are then calculated and used for analyses of the intensity, position and spatial extent of SUHIs. The output eventually builds up and populates a database with comparable and consistent attributes, allowing comparisons between cities as well as urban climate studies. The methodology is demonstrated over the Greater Athens Area, Greece, with more than 3000 LST images acquired by MODIS over a decade being analyzed. The approach can potentially be applied to current and future (e.g. Sentinel-3) level-2 satellite-derived land surface temperature maps of 1 km spatial resolution acquired over continental and coastal cities.
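    The object-based extraction step can be sketched as thresholding an LST map, labelling contiguous warm regions, and computing per-object attributes. The threshold rule and attribute names below are illustrative stand-ins for the paper's procedure.

```python
import numpy as np
from scipy import ndimage

def extract_hot_objects(lst, excess=3.0):
    """Segment contiguous regions warmer than the scene mean by
    `excess` kelvin and report per-object attributes (area, peak
    temperature, centroid). A simplified stand-in for object-based
    image analysis of an LST map.
    """
    mask = lst > lst.mean() + excess
    labels, n = ndimage.label(mask)
    objects = []
    for i in range(1, n + 1):
        region = labels == i
        objects.append({
            "area_px": int(region.sum()),
            "peak_K": float(lst[region].max()),
            "centroid": ndimage.center_of_mass(region),
        })
    return objects

# Synthetic LST map: 290 K background with one 6x6 hot patch.
lst = np.full((50, 50), 290.0)
lst[20:26, 30:36] = 300.0
objs = extract_hot_objects(lst)
print(len(objs), objs[0]["area_px"])
```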

  2. A new automated method for analysis of gated-SPECT images based on a three-dimensional heart shaped model

    DEFF Research Database (Denmark)

    Lomsky, Milan; Richter, Jens; Johansson, Lena

    2005-01-01

    A new automated method for quantification of left ventricular function from gated-single photon emission computed tomography (SPECT) images has been developed. The method for quantification of cardiac function (CAFU) is based on a heart shaped model and the active shape algorithm. The model...

  3. Contaminant analysis automation demonstration proposal

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, M.G.; Schur, A.; Heubach, J.G.

    1993-10-01

    The nation-wide and global need for environmental restoration and waste remediation (ER&WR) presents significant challenges to the analytical chemistry laboratory. The expansion of ER&WR programs forces an increase in the volume of samples processed and the demand for analysis data. To handle this expanding volume, productivity must be increased. However, the need for significantly increased productivity confronts a contaminant analysis process that is costly in time, labor, equipment, and safety protection. Laboratory automation offers a cost-effective approach to meeting current and future contaminant analytical laboratory needs. The proposed demonstration will present a proof-of-concept automated laboratory conducting varied sample preparations. This automated process also highlights a graphical user interface that provides supervisory control and monitoring of the automated process. The demonstration provides affirming answers to the following questions about laboratory automation: Can preparation of contaminants be successfully automated? Can a full-scale working proof-of-concept automated laboratory be developed that is capable of preparing contaminant and hazardous chemical samples? Can the automated processes be seamlessly integrated and controlled? Can the automated laboratory be customized through readily convertible design? And can automated sample preparation concepts be extended to the other phases of the sample analysis process? To fully reap the benefits of automation, four human-factors areas should be studied and the outputs used to increase the efficiency of laboratory automation: (1) laboratory configuration, (2) procedures, (3) receptacles and fixtures, and (4) the human-computer interface for the fully automated system and complex laboratory information management systems.

  4. Automated image analysis for diameters and branching points of cerebral penetrating arteries and veins captured with two-photon microscopy.

    Science.gov (United States)

    Sugashi, Takuma; Yoshihara, Kouichi; Kawaguchi, Hiroshi; Takuwa, Hiroyuki; Ito, Hiroshi; Kanno, Iwao; Yamada, Yukio; Masamoto, Kazuto

    2014-01-01

    The present study aimed to characterize the 3-dimensional (3D) morphology of the cortical microvasculature (e.g., penetrating arteries and emerging veins) using two-photon microscopy and automated analysis of their cross-sectional diameters and branching positions in the mouse cortex. We observed that both arteries and veins had variable cross-sectional diameters across cortical depths. The mean diameter was similar for the artery (17 ± 5 μm) and vein (15 ± 5 μm), and there were no detectable differences over depths of 50-400 μm. On the other hand, the number of branches increased slightly up to a depth of 400 μm for both the artery and the vein. The mean number of branches per 0.1 mm of vessel length was 1.7 ± 1.2 and 3.8 ± 1.6 for the artery and vein, respectively. This method allows for quantification of the large-volume data of microvascular images captured with two-photon microscopy, and will contribute to the morphometric analysis of the cortical microvasculature in functioning brains.

  5. Automated hotspot analysis with aerial image CD metrology for advanced logic devices

    Science.gov (United States)

    Buttgereit, Ute; Trautzsch, Thomas; Kim, Min-ho; Seo, Jung-Uk; Yoon, Young-Keun; Han, Hak-Seung; Chung, Dong Hoon; Jeon, Chan-Uk; Meyers, Gary

    2014-09-01

    Continuously shrinking designs driven by further extension of 193 nm technology lead to a much higher probability of hotspots, especially in the manufacturing of advanced logic devices. The CD of these potential hotspots needs to be precisely controlled and measured on the mask. On top of that, feature complexity increases due to the high OPC load in logic mask designs, which is an additional challenge for CD metrology. Therefore, the hotspot measurements have been performed on WLCD from ZEISS, which provides the benefit of reduced complexity by measuring the CD in the aerial image and qualifying the printing-relevant CD. This is especially advantageous for complex 2D feature measurements. Additionally, data preparation for CD measurement becomes more critical due to the larger number of CD measurements and the increasing feature diversity. For data preparation, this means identifying these hotspots and marking them automatically with the correct marker required to make the feature-specific CD measurement successful. Currently available methods can address generic patterns but cannot deal with the pattern diversity of the hotspots. The paper explores a method to overcome those limitations and to dramatically improve the time-to-result of the marking process. For the marking process, the Synopsys WLCD Output Module was utilized, which is an interface between the CATS mask data prep software and the WLCD metrology tool. It translates the CATS marking directly into an executable WLCD measurement job including CD analysis. The paper describes the utilized method and flow for the hotspot measurement, and presents the results achieved on hotspot measurements using this method.

  6. Automated image analysis for quantification of reactive oxygen species in plant leaves.

    Science.gov (United States)

    Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta

    2016-10-15

    The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H2O2 and O2(-) detection, respectively. Three types of images, determined by the combination of staining method and background color, are considered. The method is based on the principle of supervised machine learning, with manually labeled image patterns used for training. The method's algorithm is developed as a JavaScript macro in the public-domain Fiji (ImageJ) environment. It selects the stained regions of ROS-mediated histochemical reactions, which are subsequently fractionated according to weak, medium and intense staining intensity and thus ROS accumulation. It also evaluates the total leaf blade area. The precision of ROS accumulation area detection is validated against manual patterns using the Dice Similarity Coefficient. The proposed framework reduces computational complexity, requires less image-processing expertise than competing methods once prepared, and represents a routine quantitative imaging assay for general histochemical image classification.
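    The weak/medium/intense fractionation step can be sketched as binning stained-pixel grey levels. The class boundaries below are invented for illustration; the published macro instead learns its classification from manually labeled training patterns.

```python
import numpy as np

def fractionate_staining(gray, stained_mask, bounds=(170, 110)):
    """Split stained pixels into weak / medium / intense classes by
    grey level (darker DAB/NBT staining = more ROS accumulation).
    The two class boundaries are illustrative assumptions.
    """
    weak_hi, medium_hi = bounds          # grey-level class boundaries
    vals = gray[stained_mask]
    weak = (vals >= weak_hi).sum()
    medium = ((vals < weak_hi) & (vals >= medium_hi)).sum()
    intense = (vals < medium_hi).sum()
    total = vals.size
    return {k: v / total for k, v in
            (("weak", weak), ("medium", medium), ("intense", intense))}

gray = np.array([[200, 180, 120], [100, 90, 250]], dtype=np.uint8)
mask = gray < 240                        # crude "stained" selection
fractions = fractionate_staining(gray, mask)
print(fractions)
```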

  7. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    S P Vimal; P K Thiruvikraman

    2012-12-01

    We propose a scheme for automating the power law transformations used for image enhancement. The scheme we propose does not require the user to choose the exponent in the power law transformation. This method works well for images having poor contrast, especially those in which the peaks corresponding to the background and the foreground are not widely separated.
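    One common way to automate the exponent choice is to map the image's mean grey level to mid-grey. This heuristic is a plausible stand-in for illustration, not necessarily the selection rule the authors propose.

```python
import numpy as np

def gamma_enhance(img):
    """Power-law (gamma) enhancement with an automatically chosen
    exponent: pick gamma so that mean**gamma == 0.5, i.e. the mean
    grey level maps to mid-grey (a common auto-gamma rule).
    """
    norm = img.astype(float) / 255.0
    mean = np.clip(norm.mean(), 1e-6, 1 - 1e-6)  # guard log() edge cases
    gamma = np.log(0.5) / np.log(mean)
    return (255.0 * norm ** gamma).astype(np.uint8), gamma

# A dark, low-contrast image gets gamma < 1, brightening it.
dark = np.full((4, 4), 40, dtype=np.uint8)
out, gamma = gamma_enhance(dark)
print(gamma < 1, out[0, 0])
```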

  8. Use of laser range finders and range image analysis in automated assembly tasks

    Science.gov (United States)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    We propose to study the effect of filtering processes on range images and to evaluate the performance of two different laser range mappers. Median filtering was utilized to remove noise from the range images. First- and second-order derivatives are then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients which describe 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm was developed to compare the performance of two different laser range mappers based upon the range depth information of surfaces generated by each of the mappers. Furthermore, an approach based on 2-D analytic geometry is also proposed which serves as a basis for the recognition of regular 3-D geometric objects.

  9. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  10. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya

    DEFF Research Database (Denmark)

    Juul Bøgelund Hansen, Morten; Abramoff, M. D.; Folk, J. C.;

    2015-01-01

    Objective Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blind or visually impaired is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased...... Reading Centre on the population of Nakuru Study from Kenya. Participants Retinal images were taken from participants of the Nakuru Eye Disease Study in Kenya in 2007/08 (n = 4,381 participants [NW6 Topcon Digital Retinal Camera]). Methods First, human grading was performed for the presence or absence...

  11. Development of an Automated Modality-Independent Elastographic Image Analysis System for Tumor Screening

    Science.gov (United States)

    2007-02-01

    quantity of polymer solution with an imaging contrast agent. Before the phantom fully polymerizes through freezing, a hypodermic needle is used to create...approximation in favor of one compatible with large deformations. The difference in solutions between small and large deformation theory can be...difference in the linear model among Fig. 2a and 2b, 2b is the reverse of 2a (this is a characteristic of linear theory ). However, the lack of this

  12. Application of automated image analysis to the identification and extraction of recyclable plastic bottles

    Institute of Scientific and Technical Information of China (English)

    Edgar SCAVINO; Dzuraidah Abdul WAHAB; Aini HUSSAIN; Hassan BASRI; Mohd Marzuki MUSTAFA

    2009-01-01

    An experimental machine vision apparatus was used to identify and extract recyclable plastic bottles from a conveyor belt. Color images were taken with a commercially available webcam, and the recognition was performed by our in-house software, based on the shape and dimensions of object images. The software was able to manage multiple bottles in a single image and was additionally extended to cases involving touching bottles. The identification was fulfilled by comparing the set of measured features with an existing database while integrating various recognition techniques such as minimum distance in the feature space, self-organized maps, and neural networks. The recognition system was tested on a set of 50 different bottles and has so far provided an accuracy of about 97% in bottle identification. The extraction of the bottles was performed by means of a pneumatic arm, which was activated according to the plastic type; polyethylene-terephthalate (PET) bottles were left on the conveyor belt, while non-PET bottles were extracted. The software was designed to provide the best compromise between reliability and speed for real-time applications, in view of the commercialization of the system at existing recycling plants.
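    The minimum-distance recognition step can be sketched as nearest-class-mean matching in feature space. The feature vector and bottle classes below are hypothetical; the real system also combines self-organized maps and neural networks.

```python
import numpy as np

def min_distance_classify(features, class_means):
    """Assign a feature vector to the nearest class mean in feature
    space - the simplest of the recognition techniques the bottle
    sorter combines. Feature names are hypothetical stand-ins for
    shape/dimension measurements.
    """
    names = list(class_means)
    dists = [np.linalg.norm(features - class_means[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy database: mean (height_cm, width_cm) per hypothetical bottle type.
db = {"PET_500ml": np.array([20.0, 6.0]),
      "HDPE_1L": np.array([25.0, 9.0])}
label = min_distance_classify(np.array([19.5, 6.2]), db)
print(label)
```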

  13. Automated sugar analysis

    Directory of Open Access Journals (Sweden)

    Tadeu Alcides MARQUES

    2016-03-01

    Full Text Available Abstract Sugarcane monosaccharides are reducing sugars, and classical analytical methodologies (Lane-Eynon, Benedict, complexometric-EDTA, Luff-Schoorl, Musson-Walker, Somogyi-Nelson) are based on reducing copper ions in alkaline solutions. In Brazil, certain factories use Lane-Eynon, others use the equipment referred to as "REDUTEC", and additional factories analyze reducing sugars based on a mathematical model. The objective of this paper is to understand the relationship between variations in millivolts, mass and content of reducing sugars during the analysis process. Another objective is to generate an automatic model for this process. The work herein uses the equipment referred to as "REDUTEC", a digital balance, a peristaltic pump, a digital camcorder, math programs and graphics programs. We conclude that the millivolts, mass and content of reducing sugars exhibit a good mathematical correlation, and the mathematical model generated was benchmarked against low-concentration reducing sugars (<0.3%). Using the model created herein, reducing sugar analyses can be automated using the new equipment.

  14. Automated magnification calibration in transmission electron microscopy using Fourier analysis of replica images.

    NARCIS (Netherlands)

    Laak, J.A.W.M. van der; Dijkman, H.B.P.M.; Pahlplatz, M.M.M.

    2006-01-01

    The magnification factor in transmission electron microscopy is not very precise, hampering for instance quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens, containing line or grating patterns with known spacing. In the pre

  15. Simplified automated image analysis for detection and phenotyping of Mycobacterium tuberculosis on porous supports by monitoring growing microcolonies.

    Directory of Open Access Journals (Sweden)

    Alice L den Hertog

    Full Text Available BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high-burden settings. METHODS: Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on porous aluminium oxide (PAO) supports. Repeated imaging during colony growth greatly simplifies "computer vision", and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, also allows the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. SIGNIFICANCE: Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation.

  16. Semi-automated 3D leaf reconstruction and analysis of trichome patterning from light microscopic images.

    Directory of Open Access Journals (Sweden)

    Henrik Failmezger

    2013-04-01

    Full Text Available Trichomes are leaf hairs that are formed by single cells on the leaf surface. They are known to be involved in pathogen resistance. Their patterning is considered to emerge from a field of initially equivalent cells through the action of a gene regulatory network involving trichome fate promoting and inhibiting factors. For a quantitative analysis of single and double mutants or the phenotypic variation of patterns in different ecotypes, it is imperative to statistically evaluate the pattern reliably on a large number of leaves. Here we present a method that enables the analysis of trichome patterns at early developmental leaf stages and the automatic analysis of various spatial parameters. We focus on the most challenging young leaf stages that require the analysis in three dimensions, as the leaves are typically not flat. Our software TrichEratops reconstructs 3D surface models from 2D stacks of conventional light-microscope pictures. It allows the GUI-based annotation of different stages of trichome development, which can be analyzed with respect to their spatial distribution to capture trichome patterning events. We show that 3D modeling removes biases of simpler 2D models and that novel trichome patterning features increase the sensitivity for inter-accession comparisons.

  17. A method to quantify movement activity of groups of animals using automated image analysis

    Science.gov (United States)

    Xu, Jianyu; Yu, Haizhen; Liu, Ying

    2009-07-01

    Most physiological and environmental changes are capable of inducing variations in animal behavior. Behavioral parameters can be measured continuously in situ by a non-invasive, non-contact approach, and have the potential to be used in actual production settings to predict stress conditions. Most vertebrates tend to live in groups, herds, flocks, shoals, bands or packs of conspecific individuals. Under culture conditions, livestock or fish live in groups and interact with each other, so the aggregate behavior of the group should be studied rather than that of individuals. This paper presents a method, using computer vision techniques, to calculate the movement speed of a group of animals in an enclosure or tank, expressed as body-length speed, which corresponds to group activity. Frame sequences captured at a fixed time interval were subtracted in pairs after image segmentation and identification. By labeling components caused by object movement in the difference frame, the projected area caused by the movement of every object in the capture interval was calculated; this projected area was divided by the projected area of every object in the later frame to obtain the body-length moving distance of each object, from which the relative body-length speed could be derived. The average speed of all objects responds well to the activity of the group. The group activity of a tilapia (Oreochromis niloticus) school exposed to a high (2.65 mg/L) level of unionized ammonia (UIA) was quantified using these methods. The high-UIA condition elicited a marked increase in school activity in the first hour (P<0.05), exhibiting an avoidance reaction (trying to flee from the high-UIA condition), which then decreased gradually.
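    The frame-subtraction computation can be sketched as follows. This is a simplified whole-group version: the paper computes per-object distances via labeled components, whereas here the changed area of the entire frame is pooled.

```python
import numpy as np

def body_length_speed(mask_prev, mask_curr, interval_s):
    """Estimate group activity as body lengths moved per second.

    The projected area that changed between two segmented frames
    (halved, since XOR counts both vacated and newly occupied pixels)
    is divided by the objects' area in the later frame, approximating
    the distance moved in body lengths per capture interval.
    """
    moved = np.logical_xor(mask_prev, mask_curr).sum() / 2.0
    body_area = mask_curr.sum()
    return moved / body_area / interval_s

# One 4x10 "fish" shifts 2 px to the right between frames 1 s apart:
# it moved 2/10 = 0.2 of its body length.
prev_f = np.zeros((20, 30), dtype=bool)
curr_f = np.zeros((20, 30), dtype=bool)
prev_f[8:12, 5:15] = True
curr_f[8:12, 7:17] = True
speed = body_length_speed(prev_f, curr_f, 1.0)
print(speed)
```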

  18. High-Throughput Method for Automated Colony and Cell Counting by Digital Image Analysis Based on Edge Detection.

    Directory of Open Access Journals (Sweden)

    Priya Choudhry

    Full Text Available Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro, Cell Colony Edge, and a CellProfiler pipeline, Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays.
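    An edge-detection-based counting pipeline in the spirit of Cell Colony Edge can be sketched with SciPy. This is an analogue, not the ImageJ macro itself, and the gradient threshold is illustrative.

```python
import numpy as np
from scipy import ndimage

def count_colonies(img, edge_thresh=50.0):
    """Count colonies by edge detection: Sobel gradient magnitude,
    threshold, fill the closed outlines, then label connected blobs.
    """
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    filled = ndimage.binary_fill_holes(edges)
    _, n = ndimage.label(filled)
    return n

# Two bright square "colonies" on a dark plate.
plate = np.zeros((40, 40))
plate[5:12, 5:12] = 200.0
plate[25:33, 20:30] = 180.0
print(count_colonies(plate))
```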

  19. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues.

    Directory of Open Access Journals (Sweden)

    Joshua Chopin

    Full Text Available The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root-cross section images. Using a range of image processing techniques such as local thresholding and nearest neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one maize plant image and evaluate its performance against manually-obtained ground truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.

  20. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues.

    Science.gov (United States)

    Chopin, Joshua; Laga, Hamid; Huang, Chun Yuan; Heuer, Sigrid; Miklavcic, Stanley J

    2015-01-01

    The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root-cross section images. Using a range of image processing techniques such as local thresholding and nearest neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one maize plant image and evaluate its performance against manually-obtained ground truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.

  1. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis

    Institute of Scientific and Technical Information of China (English)

    Lian Yanyun; Song Zhijian

    2014-01-01

    Background: Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, manual tumor segmentation, commonly used in the clinic, is time-consuming and challenging, and none of the existing automated methods is highly robust, reliable and efficient in clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Methods: Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor position. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, through vertical and then horizontal sliding-window passes (two windows in the left and right halves of the brain image moving simultaneously, pixel by pixel, while the correlation coefficient between them is calculated), the window pair with the minimal correlation coefficient was obtained; the window with the larger average gray value gives the location of the tumor, and the pixel with the largest gray value is the tumor's locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square centered at the locating point with a side length of 10 pixels, and threshold segmentation and morphological operations were used to acquire the final tumor region. Results: The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average ratio of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. Conclusions: A fully automated, simple and efficient segmentation method for brain tumors is proposed and promising for future clinical use. The correlation coefficient is a new and effective feature for tumor
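    The core symmetry search (sliding mirrored windows over the two halves of a slice and scoring them by correlation coefficient) can be sketched as follows. This simplified version slides along one axis only and uses synthetic data; window size and the asymmetry threshold are illustrative.

```python
import numpy as np

def symmetry_correlations(img, win=8):
    """Slide mirrored windows over the left and right halves of a
    midline-aligned brain slice and return the correlation
    coefficient at each position; a low minimum flags an asymmetric
    region such as a tumor.
    """
    h, w = img.shape
    half = w // 2
    left, right = img[:, :half], img[:, half:][:, ::-1]  # un-mirror right
    corrs = []
    for x in range(half - win + 1):
        a = left[:, x:x + win].ravel()
        b = right[:, x:x + win].ravel()
        corrs.append(float(np.corrcoef(a, b)[0, 1]))
    return corrs

rng = np.random.default_rng(0)
base = rng.normal(100, 10, size=(32, 16))
slice_ = np.hstack([base, base[:, ::-1]])   # perfectly symmetric slice
slice_[10:20, 4:8] += 80.0                  # bright "tumor" in left half
corrs = symmetry_correlations(slice_)
print(min(corrs) < 0.9, max(corrs) > 0.99)
```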

  2. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    Full Text Available The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier descriptors and standard k-nearest-neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analysed by a trained operator to increase the efficiency of the automated procedure. Error estimation for the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. The results are discussed under the assumption that this technological bottleneck is currently deeply conditioning the exploration of the deep sea.
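    The outline-matching step rests on Fourier descriptors of the animal's silhouette. A minimal sketch of magnitude-based descriptors follows; normalisation choices vary between implementations, so this is one common scheme rather than the protocol's exact recipe.

```python
import numpy as np

def fourier_descriptors(contour_xy, n_keep=8):
    """Translation-, scale- and rotation-invariant Fourier descriptors
    of a closed outline: FFT of the complex boundary, drop the DC term
    (translation), normalise by the first harmonic (scale), keep the
    leading magnitudes (rotation/start-point invariant). Matching
    these against a library with k-nearest neighbours is the
    identification step.
    """
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_keep + 1])
    return mags / mags[0]

# Same outline, shifted and doubled in size: descriptors are unchanged.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ellipse = np.c_[3 * np.cos(t), 1 * np.sin(t)]
moved = 2 * ellipse + np.array([10.0, -5.0])
d1 = fourier_descriptors(ellipse)
d2 = fourier_descriptors(moved)
print(np.allclose(d1, d2))
```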

  3. An automated multi-modal object analysis approach to coronary calcium scoring of adaptive heart isolated MSCT images

    Science.gov (United States)

    Wu, Jing; Ferns, Gordon; Giles, John; Lewis, Emma

    2012-02-01

    Inter- and intra-observer variability is a problem often faced when an expert or observer is tasked with assessing the severity of a disease. This issue is keenly felt in coronary calcium scoring of patients suffering from atherosclerosis, where, in clinical practice, the observer must identify first the presence and then the location of candidate calcified plaques found within the coronary arteries that may prevent oxygenated blood flow to the heart muscle. This can be challenging for a human observer, as it is difficult to differentiate calcified plaques located in the coronary arteries from those found in surrounding anatomy such as the mitral valve or pericardium. The inclusion of false-positive plaques or exclusion of true-positive plaques will incorrectly alter the patient's calcium score, possibly leading to incorrect treatment prescription. In addition to the benefits to scoring accuracy, fast, low-dose multi-slice CT imaging can acquire the entire heart within a single breath hold, exposing the patient to a lower radiation dose, which, for a progressive disease such as atherosclerosis where multiple scans may be required, is beneficial to their health. Presented here is a fully automated method for calcium scoring using both the traditional Agatston method and the Volume scoring method. Elimination of unwanted regions of the cardiac image slices, such as lungs, ribs, and vertebrae, is carried out using adaptive heart isolation; such regions cannot contain calcified plaques but can be of similar intensity, and their removal aids detection. Removal of both the ascending and descending aortas, as they contain clinically insignificant plaques, is necessary before the final calcium scores are calculated and compared against ground-truth scores averaged from three expert observers. The results presented here are intended to show the requirement and...
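The traditional Agatston score mentioned above weights each lesion's area by its peak attenuation. The sketch below follows the standard Agatston definition (130 HU threshold, density weights 1-4, minimum lesion area); the adaptive heart isolation and aorta removal steps of the paper are omitted, and the flood-fill labelling is a generic stand-in:

```python
import numpy as np

def agatston_weight(peak_hu):
    """Standard Agatston density weighting by peak HU within a lesion."""
    if peak_hu < 130: return 0
    if peak_hu < 200: return 1
    if peak_hu < 300: return 2
    if peak_hu < 400: return 3
    return 4

def agatston_score(slice_hu, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Score one axial slice: 4-connected lesions above the HU threshold
    and the minimum area, each weighted by its peak attenuation."""
    h, w = slice_hu.shape
    seen = np.zeros((h, w), bool)
    score = 0.0
    for i in range(h):
        for j in range(w):
            if slice_hu[i, j] >= threshold and not seen[i, j]:
                stack, lesion = [(i, j)], []
                seen[i, j] = True
                while stack:                      # flood fill one lesion
                    y, x = stack.pop()
                    lesion.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and not seen[ny, nx]
                                and slice_hu[ny, nx] >= threshold):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                area = len(lesion) * pixel_area_mm2
                if area >= min_area_mm2:
                    peak = max(slice_hu[y, x] for y, x in lesion)
                    score += area * agatston_weight(peak)
    return score
```

The per-patient score is the sum over all slices; the Volume score replaces the weighted-area sum with the calcified volume.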

  4. Strong Prognostic Value of Tumor-infiltrating Neutrophils and Lymphocytes Assessed by Automated Digital Image Analysis in Early Stage Cervical Cancer

    DEFF Research Database (Denmark)

    Carus, Andreas; Donskov, Frede; Switten Nielsen, Patricia;

    2014-01-01

    INTRODUCTION Manual observer-assisted stereological (OAS) assessments of tumor-infiltrating neutrophils and lymphocytes are prognostic, accurate, but cumbersome. We assessed the applicability of automated digital image analysis (DIA). METHODS Visiomorph software was used to obtain DIA densities...... to lymphocyte (TA–NL) index accurately predicted the risk of relapse, ranging from 8% to 52% (P = 0.001). CONCLUSIONS DIA is a potential assessment technique. The TA–NL index obtained by DIA is a strong prognostic variable with possible routine clinical application....

  5. Automated Image Data Exploitation Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, C; Poland, D; Sengupta, S K; Futterman, J H

    2004-01-26

    The automated production of maps of human settlement from recent satellite images is essential to detailed studies of urbanization, population movement, and the like. Commercial satellite imagery is becoming available with sufficient spectral and spatial resolution to apply computer vision techniques previously considered only for laboratory (high resolution, low noise) images. In this project, we extracted the boundaries of human settlements from IKONOS 4-band and panchromatic images using spectral segmentation together with a form of generalized second-order statistics and detection of edges and corners.
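Edge detection is one of the cues the project combines with spectral segmentation. As a generic illustration of that cue (not the project's actual operator), a Sobel gradient magnitude can be computed with plain array slicing:

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel gradient magnitude over the valid interior of the image,
    computed by slicing instead of explicit convolution loops."""
    img = img.astype(float)
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    return np.hypot(gx, gy)
```

High responses trace intensity boundaries, such as those between built-up areas and their surroundings in panchromatic imagery.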

  6. Automated Analysis of Infinite Scenarios

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2005-01-01

    The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many...

  7. Plenoptic Imager for Automated Surface Navigation

    Science.gov (United States)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  8. Automated Localization of Optic Disc in Retinal Images

    Directory of Open Access Journals (Sweden)

    Deepali A.Godse

    2013-03-01

    Full Text Available An efficient detection of the optic disc (OD) in colour retinal images is a significant task in an automated retinal image analysis system. Most of the algorithms developed for OD detection are applicable mainly to normal and healthy retinal images. It is a challenging task to detect the OD in all types of retinal images: normal, healthy images as well as abnormal images affected by disease. This paper presents an automated system to locate the OD and its centre in all types of retinal images. The ensemble of steps, based on different criteria, produces more accurate results. The proposed algorithm gives excellent results and avoids false OD detection. The technique was developed and tested on standard databases provided for researchers on the internet, Diaretdb0 (130 images), Diaretdb1 (89 images), Drive (40 images) and a local database (194 images). The local database images were collected from ophthalmic clinics. The system is able to locate the OD and its centre in 98.45% of all tested cases. The results achieved by different algorithms can be compared when the algorithms are applied to the same standard databases. This comparison is also discussed in this paper and shows that the proposed algorithm is more efficient.

  9. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers are part of Electric Power System equipment whose reliability influences, to a great extent, the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breakers' reliability and reducing maintenance expenses is becoming ever more urgent as the cost of maintaining and repairing oil and air-break circuit breakers systematically increases. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this requires a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repair, advanced software and computer technologies, and a specific automated information system (AIS). The new AIS, with the logo AISV, was developed at the "Reliability of power equipment" department of the AzRDSI of Energy. The main features of AISV are: · to ensure the security and accuracy of the database; · to carry out systematic control of breakers' conformity with operating conditions; · to estimate the value of individual reliability and the characteristics of its change for a given combination of characteristics; · to provide personnel responsible for the technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for its realization.

  10. Measurement of TLR-induced macrophage spreading by automated image analysis: differential role of Myd88 and MAPK in early and late responses

    Directory of Open Access Journals (Sweden)

    Jens eWenzel

    2011-10-01

    Full Text Available Sensing of infectious danger by Toll-like receptors (TLRs) on macrophages causes not only a reprogramming of the transcriptome but also changes in the cytoskeleton important for cell spreading and motility. Since manual determination of cell contact areas from fluorescence microscopy pictures is very time-consuming and prone to bias, we have developed and tested algorithms for automated measurement of macrophage spreading. The two-step method combines identification of cells by nuclear staining with DAPI and cell-surface staining of the integrin CD11b. Automated image analysis correlated very well with manual annotation in resting macrophages and early after stimulation, whereas at later time points the automated cell segmentation algorithm and manual annotation showed slightly larger variation. The method was applied to investigate the impact of genetic or pharmacological inhibition of known TLR signaling components. Deficiency in the adapter protein Myd88 strongly reduced spreading activity at the late time points but had no impact early after LPS stimulation. A similar effect was observed upon pharmacological inhibition of MEK1, the kinase activating the MAPK ERK1/2, indicating that ERK1/2 mediates Myd88-dependent macrophage spreading. In contrast, macrophages lacking the MAPK p38 were impaired in the initial spreading response but responded normally 8–24 h after stimulation. The dichotomy of p38 and ERK1/2 MAPK effects on early and late macrophage spreading raises the question of which of the respective substrate proteins mediate(s) cytoskeletal remodeling and spreading. The automated measurement of cell spreading described here increases objectivity and greatly reduces the time required for such investigations, and is therefore expected to facilitate higher-throughput analysis of macrophage spreading, e.g. in siRNA knockdown screens.
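The two-step readout, counting cells from DAPI-stained nuclei and measuring contact area from the CD11b surface stain, can be sketched as follows. The thresholds and pixel size below are hypothetical placeholders, and the naive flood-fill component count stands in for the paper's segmentation:

```python
import numpy as np

def count_components(mask):
    """Number of 4-connected components in a boolean mask (flood fill)."""
    mask = mask.copy()
    n = 0
    while mask.any():
        n += 1
        ys, xs = np.nonzero(mask)
        stack = [(ys[0], xs[0])]
        mask[ys[0], xs[0]] = False
        while stack:
            y, x = stack.pop()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx]):
                    mask[ny, nx] = False
                    stack.append((ny, nx))
    return n

def mean_spread_area(dapi, cd11b, nuc_thresh=100, cell_thresh=50, px_um2=0.5):
    """Spreading readout: CD11b-stained area divided by the number of
    DAPI nuclei. All threshold and pixel-size values are illustrative."""
    n_cells = count_components(dapi > nuc_thresh)
    area_um2 = (cd11b > cell_thresh).sum() * px_um2
    return area_um2 / max(n_cells, 1)
```

A rising mean area per cell over time is the spreading response quantified in the study.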

  11. Automated image-based tracking and its application in ecology.

    Science.gov (United States)

    Dell, Anthony I; Bender, John A; Branson, Kristin; Couzin, Iain D; de Polavieja, Gonzalo G; Noldus, Lucas P J J; Pérez-Escudero, Alfonso; Perona, Pietro; Straw, Andrew D; Wikelski, Martin; Brose, Ulrich

    2014-07-01

    The behavior of individuals determines the strength and outcome of ecological interactions, which drive population, community, and ecosystem organization. Bio-logging, such as telemetry and animal-borne imaging, provides essential individual viewpoints, tracks, and life histories, but requires capture of individuals and is often impractical to scale. Recent developments in automated image-based tracking offer opportunities to remotely quantify and understand individual behavior at scales and resolutions not previously possible, providing an essential supplement to other tracking methodologies in ecology. Automated image-based tracking should continue to advance the field of ecology by enabling better understanding of the linkages between individual and higher-level ecological processes, via high-throughput quantitative analysis of complex ecological patterns and processes across scales, including analysis of environmental drivers.
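At its core, automated image-based tracking begins with detecting which pixels changed between consecutive frames. A minimal frame-differencing detector (with an arbitrary threshold, chosen here only for illustration) might look like:

```python
import numpy as np

def moving_centroid(prev, curr, thresh=30):
    """Centroid of pixels whose intensity changed by more than `thresh`
    between two frames; returns None when nothing moved."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    if not changed.any():
        return None
    ys, xs = np.nonzero(changed)
    return float(ys.mean()), float(xs.mean())
```

Linking such detections across frames into per-individual trajectories is what full tracking systems add on top of this step.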

  12. Automated Analysis of Corpora Callosa

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.

    2003-01-01

    This report describes and evaluates the steps needed to perform modern model-based interpretation of the corpus callosum in MRI. The process is discussed from the initial landmark-free contours to full-fledged statistical models based on the Active Appearance Models framework. Topics treated include landmark placement, background modelling and multi-resolution analysis. Preliminary quantitative and qualitative validation in a cross-sectional study shows that fully automated analysis and segmentation of the corpus callosum are feasible....

  13. Automated model-based calibration of imaging spectrographs

    Science.gov (United States)

    Kosec, Matjaž; Bürmen, Miran; Tomaževič, Dejan; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Hyper-spectral imaging has gained recognition as an important non-invasive research tool in the field of biomedicine. Among the variety of available hyperspectral imaging systems, systems comprising an imaging spectrograph, lens, wideband illumination source and a corresponding camera stand out for their short acquisition time and good signal-to-noise ratio. The individual images acquired by imaging spectrograph-based systems contain full spectral information along one spatial dimension. Due to imperfections in the camera lens and, in particular, the optical components of the imaging spectrograph, the acquired images are subject to spatial and spectral distortions, resulting in scene-dependent nonlinear spectral degradations and spatial misalignments which need to be corrected. However, the existing correction methods require complex calibration setups and tedious manual involvement; therefore, the correction of the distortions is often neglected. Such a simplified approach can lead to significant errors in the analysis of the acquired hyperspectral images. In this paper, we present a novel fully automated method for correction of the geometric and spectral distortions in the acquired images. The method is based on automated non-rigid registration of the reference and acquired images corresponding to the proposed calibration object, which incorporates standardized spatial and spectral information. The obtained transformation was successfully used for sub-pixel correction of various hyperspectral images, resulting in significant improvement of the spectral and spatial alignment. It was found that the proposed calibration is highly accurate and suitable for routine use in applications involving either diffuse reflectance or transmittance measurement setups.

  14. Automated localization and segmentation of lung tumor from PET-CT thorax volumes based on image feature analysis.

    Science.gov (United States)

    Cui, Hui; Wang, Xiuying; Feng, Dagan

    2012-01-01

    Positron emission tomography - computed tomography (PET-CT) plays an essential role in early tumor detection, diagnosis, staging and treatment. Automated and more accurate lung tumor detection and delineation from PET-CT is challenging. In this paper, on the basis of a quantitative analysis of the contrast feature of the PET volume in SUV (standardized uptake value), our method first localizes the lung tumor automatically. Then, by analysing the CT features surrounding the initial tumor definition, our decision strategy determines whether the tumor segmentation is taken from CT or from PET. The algorithm has been validated on 20 PET-CT studies involving non-small cell lung cancer (NSCLC). Experimental results demonstrated that our method was able to segment the tumor when adjacent to the mediastinum or chest wall, and that the algorithm outperformed five other lung segmentation methods in terms of the overlap measure.
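The SUV values the localization step relies on follow the standard body-weight convention: activity concentration scaled by body weight over injected dose. A minimal sketch (the `min_suv` cutoff is a hypothetical illustration, not the paper's parameter):

```python
import numpy as np

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight standardized uptake value, assuming tissue density
    of 1 g/mL (the usual convention)."""
    return activity_bq_per_ml * body_weight_g / injected_dose_bq

def suv_seed(suv_volume, min_suv=2.5):
    """Candidate tumor seed: the voxel of maximal SUV, accepted only if
    it exceeds a contrast threshold (threshold value is an assumption)."""
    idx = np.unravel_index(np.argmax(suv_volume), suv_volume.shape)
    return idx if suv_volume[idx] >= min_suv else None
```

The seed voxel then anchors the subsequent CT-versus-PET delineation decision described in the abstract.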

  15. Automated image analysis with the potential for process quality control applications in stem cell maintenance and differentiation.

    Science.gov (United States)

    Smith, David; Glen, Katie; Thomas, Robert

    2016-01-01

    The translation of laboratory processes into scaled production systems suitable for manufacture is a significant challenge for cell-based therapies; in particular, there is a lack of analytical methods that are informative and efficient for process control. Here the potential of image analysis as one part of the solution to this issue is explored, using pluripotent stem cell colonies as a valuable and challenging exemplar. The Cell-IQ live cell imaging platform was used to build image libraries of morphological culture attributes such as colony "edge," "core periphery" or "core" cells. Conventional biomarkers, such as Oct3/4, Nanog, and Sox-2, were shown to correspond to specific morphologies using immunostaining and flow cytometry techniques. Quantitative monitoring of these morphological attributes in-process using the reference image libraries showed rapid sensitivity to changes induced by different media exchange regimes or the addition of the mesoderm lineage-inducing cytokine BMP4. The relationship between imaging sample size and precision was defined for each morphological attribute to show that this sensitivity could be achieved with a relatively small imaging sample. Further, the morphological state of single colonies could be correlated to individual colony outcomes; smaller colonies were identified as optimum for homogeneous early mesoderm differentiation, while larger colonies maintained a morphologically pluripotent core. Finally, we show the potential of the same image libraries to assess cell number in culture with accuracy comparable to sacrificial digestion and counting. The data support a potentially powerful role for quantitative image analysis in the setting of in-process specifications, and also for screening the effects of process actions during development, which is highly complementary to current analysis in optimization and manufacture.

  16. Automated Sentiment Analysis

    Science.gov (United States)

    2009-06-01

    Sentiment Analysis? Deep philosophical questions could be raised about the nature of sentiment. It is not exactly an emotion – one can choose to... and syntactic analysis easier. It also forestalls misunderstanding; sentences likely to be misclassified (because of unusual style, sarcasm, etc.... has no emotional significance. We focus on supervised learning for this prototype; though, we can alter our program to perform unsupervised learning

  17. Low-dose DNA damage and replication stress responses quantified by optimized automated single-cell image analysis

    DEFF Research Database (Denmark)

    Mistrik, Martin; Oplustilova, Lenka; Lukas, Jiri

    2009-01-01

    by environmental or metabolic genotoxic insults is critical for contemporary biomedicine. The available physical, flow cytometry and sophisticated scanning approaches to DNA damage estimation each have some drawbacks such as insufficient sensitivity, limitation to analysis of cells in suspension, or high costs...... sensitive, quantitative, rapid and simple fluorescence image analysis in thousands of adherent cells per day. Sensitive DNA breakage estimation through analysis of phosphorylated histone H2AX (gamma-H2AX), and homologous recombination (HR) assessed by a new RPA/Rad51 dual-marker approach illustrate...

  18. Fully automated (operational) modal analysis

    Science.gov (United States)

    Reynders, Edwin; Houbrechts, Jeroen; De Roeck, Guido

    2012-05-01

    Modal parameter estimation requires a lot of user interaction, especially when parametric system identification methods are used and the modes are selected in a stabilization diagram. In this paper, a fully automated, generally applicable three-stage clustering approach is developed for interpreting such a diagram. It does not require any user-specified parameter or threshold value, and it can be used in an experimental, operational, and combined vibration testing context and with any parametric system identification algorithm. The three stages of the algorithm correspond to the three stages in a manual analysis: setting stabilization thresholds for clearing out the diagram, detecting columns of stable modes, and selecting a representative mode from each column. An extensive validation study illustrates the accuracy and robustness of this automation strategy.
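The third stage, grouping stable poles into the columns of a stabilization diagram, can be illustrated with a deliberately simplified one-dimensional frequency clustering. The paper's method clusters on richer features and requires no user threshold; the relative tolerance `tol` below is an assumption purely for illustration:

```python
def cluster_frequencies(freqs, tol=0.01):
    """Group nearly-equal pole frequencies by relative-tolerance chaining
    and return one representative (the mean) per group."""
    groups = []
    for f in sorted(freqs):
        if groups and abs(f - groups[-1][-1]) <= tol * f:
            groups[-1].append(f)     # extend the current "column"
        else:
            groups.append([f])       # start a new column
    return [sum(g) / len(g) for g in groups]
```

Each returned value plays the role of the representative mode selected from a column of stable poles.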

  19. Automated landmark-guided deformable image registration

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra-fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data from six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  20. Automated vector selection of SIVQ and parallel computing integration MATLAB TM : Innovations supporting large-scale and high-throughput image analysis studies

    Directory of Open Access Journals (Sweden)

    Jerome Cheng

    2011-01-01

    Full Text Available Introduction: Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. Methods: An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Results: Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an
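The AUC figure of merit used to rank candidate vectors can be computed directly from the match scores on the positive and negative ground-truth regions via the rank-sum (Mann-Whitney) identity. This is a generic sketch of that metric, not the SIVQ implementation:

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive-region score exceeds a negative-region one (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

A candidate vector whose scores separate the two regions perfectly attains an AUC of 1.0; chance-level separation gives 0.5.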

  1. Practical approach to apply range image sensors in machine automation

    Science.gov (United States)

    Moring, Ilkka; Paakkari, Jussi

    1993-10-01

    In this paper we propose a practical approach to apply range imaging technology in machine automation. The applications we are especially interested in are industrial heavy-duty machines like paper roll manipulators in harbor terminals, harvesters in forests and drilling machines in mines. Characteristic of these applications is that the sensing system has to be fast, mid-ranging, compact, robust, and relatively cheap. On the other hand the sensing system is not required to be generic with respect to the complexity of scenes and objects or number of object classes. The key in our approach is that just a limited range data set or as we call it, a sparse range image is acquired and analyzed. This makes both the range image sensor and the range image analysis process more feasible and attractive. We believe that this is the way in which range imaging technology will enter the large industrial machine automation market. In the paper we analyze as a case example one of the applications mentioned and, based on that, we try to roughly specify the requirements for a range imaging based sensing system. The possibilities to implement the specified system are analyzed based on our own work on range image acquisition and interpretation.

  2. Automated analysis of complex data

    Science.gov (United States)

    Saintamant, Robert; Cohen, Paul R.

    1994-01-01

    We have examined some of the issues involved in automating exploratory data analysis, in particular the tradeoff between control and opportunism. We have proposed an opportunistic planning solution for this tradeoff, and we have implemented a prototype, Igor, to test the approach. Our experience in developing Igor was surprisingly smooth. In contrast to earlier versions that relied on rule representation, it was straightforward to increment Igor's knowledge base without causing the search space to explode. The planning representation appears to be both general and powerful, with high level strategic knowledge provided by goals and plans, and the hooks for domain-specific knowledge are provided by monitors and focusing heuristics.

  3. Sensitivity Analysis of Automated Ice Edge Detection

    Science.gov (United States)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

    The importance of highly detailed and time-sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by the national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes of image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice-concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice-concentration classes with very low ice concentration or open water.

  4. Automated pipelines for spectroscopic analysis

    Science.gov (United States)

    Allende Prieto, C.

    2016-09-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10 % of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1 %. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field-of-view and multi-object spectrographs. As a result, automated data processing is taking an ever increasing relevance, and the concept is applying to many more areas, from targeting to analysis. In this paper, I provide a quick overview of recent, ongoing, and upcoming spectroscopic surveys, and the strategies adopted in their automated analysis pipelines.

  5. Automated vertebra identification in CT images

    Science.gov (United States)

    Ehm, Matthias; Klinder, Tobias; Kneser, Reinhard; Lorenz, Cristian

    2009-02-01

    In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide field of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated by the fact that many images are bounded to a very limited field of view and may contain only a few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but falls short in identification rate. Correct labeling of the vertebral column was successfully performed on 93% of the test set, consisting of 63 disparate input images, using rigid image registration with local correlation as the similarity measure.

  6. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children's Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non
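The idea behind spatially dependent background-phase correction can be illustrated with a first-order (planar) fit to static tissue: estimate the residual phase offset from pixels known to be stationary, then subtract the fitted surface everywhere. The vendor software uses its own model, so treat this as a sketch under that simplifying assumption:

```python
import numpy as np

def background_correct(phase, static_mask):
    """Fit a plane a*x + b*y + c to the phase of static-tissue pixels by
    least squares and subtract it from the whole phase image."""
    h, w = phase.shape
    yy, xx = np.mgrid[:h, :w]
    A = np.column_stack([xx[static_mask], yy[static_mask],
                         np.ones(static_mask.sum())])
    coef, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
    plane = coef[0] * xx + coef[1] * yy + coef[2]
    return phase - plane
```

After correction, static tissue should show near-zero phase while true flow signal in the vessels is preserved, which is what drives the corrected net-flow and Qp:Qs values toward their physiological targets.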

  7. Automated feature extraction by combining polarimetric SAR and object-based image analysis for monitoring of natural resource exploitation

    OpenAIRE

    Plank, Simon; Mager, Alexander; Schöpfer, Elisabeth

    2015-01-01

    An automated feature extraction procedure based on the combination of a pixel-based unsupervised classification of polarimetric synthetic aperture radar data (PolSAR) and an object-based post-classification is presented. High resolution SpotLight dual-polarimetric (HH/VV) TerraSAR-X imagery acquired over the Doba basin, Chad, is used for method development and validation. In an iterative training procedure the best suited polarimetric speckle filter, processing parameters for the following en...

  8. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.

  9. Power analysis in flexible automation

    Science.gov (United States)

    Titus, Nathan A.

    1992-12-01

    The performance of an automation or robotic device can be measured in terms of its power efficiency. Screw theory is used to mathematically define the task instantaneously with two screws. The task wrench defines the effect of the device on its environment, and the task twist describes the motion of the device. The tasks can be separated into three task types: kinetic, manipulative, and reactive. Efficiency metrics are developed for each task type. The output power is strictly a function of the task screws, while device input power is shown to be a function of the task, the device Jacobian, and the actuator type. Expressions for input power are developed for two common types of actuators, DC servomotors and hydraulic actuators. Simple examples are used to illustrate how power analysis can be used for task/workspace planning, actuator selection, device configuration design, and redundancy resolution.
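The instantaneous output power defined by the two task screws is their reciprocal product. A minimal sketch, assuming the six-vector orderings [force; torque] and [linear velocity; angular velocity] (the paper's own conventions may differ), with made-up numbers:

```python
import numpy as np

def task_power(wrench, twist):
    """Instantaneous power delivered to the environment: the reciprocal
    product of the task wrench (F, tau) and the task twist (v, omega)."""
    F, tau = wrench[:3], wrench[3:]
    v, omega = twist[:3], twist[3:]
    return float(F @ v + tau @ omega)

# Hypothetical kinetic task: push with 10 N along x while moving at 0.5 m/s
wrench = np.array([10.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # [Fx Fy Fz  tx ty tz]
twist  = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.0])    # [vx vy vz  wx wy wz]

p_out = task_power(wrench, twist)    # 10 N * 0.5 m/s = 5.0 W
p_in = 8.0                           # assumed measured actuator input power
efficiency = p_out / p_in
```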

  10. Automated detection and analysis of Ca(2+) sparks in x-y image stacks using a thresholding algorithm implemented within the open-source image analysis platform ImageJ.

    Science.gov (United States)

    Steele, Elliot M; Steele, Derek S

    2014-02-04

    Previous studies have used analysis of Ca(2+) sparks extensively to investigate both normal and pathological Ca(2+) regulation in cardiac myocytes. The great majority of these studies used line-scan confocal imaging. In part, this is because the development of open-source software for automatic detection of Ca(2+) sparks in line-scan images has greatly simplified data analysis. A disadvantage of line-scan imaging is that data are collected from a single row of pixels, representing only a small fraction of the cell, and in many instances x-y confocal imaging is preferable. However, the limited availability of software for Ca(2+) spark analysis in two-dimensional x-y image stacks presents an obstacle to its wider application. This study describes the development and characterization of software to enable automatic detection and analysis of Ca(2+) sparks within x-y image stacks, implemented as a plugin within the open-source image analysis platform ImageJ. The program includes methods to enable precise identification of cells within confocal fluorescence images, compensation for changes in background fluorescence, and options that allow exclusion of events based on spatial characteristics.
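The thresholding approach can be sketched in outline: binarize a frame at mean + n·SD and keep connected regions above a size cutoff. This is a simplified numpy stand-in for the ImageJ plugin, with made-up parameters and a synthetic frame:

```python
import numpy as np
from collections import deque

def detect_sparks(frame, n_sd=3.0, min_pixels=4):
    """Threshold an x-y frame at mean + n_sd * SD and return connected
    regions (candidate Ca2+ sparks) above a minimum pixel count."""
    thr = frame.mean() + n_sd * frame.std()
    mask = frame > thr
    labels = np.zeros(frame.shape, dtype=int)
    regions, current = [], 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        labels[y, x] = current
        queue, pixels = deque([(y, x)]), []
        while queue:                      # 4-connected flood fill
            cy, cx = queue.popleft()
            pixels.append((cy, cx))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                py, px = cy + dy, cx + dx
                if (0 <= py < frame.shape[0] and 0 <= px < frame.shape[1]
                        and mask[py, px] and not labels[py, px]):
                    labels[py, px] = current
                    queue.append((py, px))
        if len(pixels) >= min_pixels:
            regions.append(pixels)
    return regions

# Synthetic frame: flat noisy background with one bright 3x3 "spark"
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 1.0, size=(64, 64))
frame[20:23, 40:43] += 25.0
sparks = detect_sparks(frame)
```

The size cutoff plays the role of the plugin's spatial exclusion options, rejecting isolated above-threshold pixels.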

  11. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or sufficiently high-quality diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined by the curve that represents the vertebral column and by the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
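The polynomial spine-curve model can be sketched by fitting each transverse coordinate as a polynomial of the axial position and resampling the curve densely; CPR slices would then be taken perpendicular to the curve's tangent. The centroid values below are invented for illustration:

```python
import numpy as np

# Hypothetical vertebral-body centroids (x, y) sampled along axial slices z
z = np.array([0, 20, 40, 60, 80, 100, 120], dtype=float)
x = np.array([0.0, 2.1, 5.8, 9.5, 11.2, 10.1, 7.4])    # made-up lateral curve
y = np.array([0.0, -3.0, -4.8, -4.1, -1.0, 3.2, 8.0])  # made-up sagittal curve

# Model each coordinate as a low-order polynomial of z
px = np.polynomial.Polynomial.fit(z, x, deg=3)
py = np.polynomial.Polynomial.fit(z, y, deg=3)

# Resample the spine curve densely and compute unit tangents; reformatted
# cross-sections are oriented perpendicular to these tangents
zs = np.linspace(z[0], z[-1], 121)
curve = np.column_stack([px(zs), py(zs), zs])
tangent = np.gradient(curve, axis=0)
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
```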

  12. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extended to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
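The volume- and size-based measures quoted above are straightforward to compute from binary masks; a sketch with toy 10x10x10 volumes:

```python
import numpy as np

def overlap_measures(auto, manual):
    """Volume-based similarity between a binary automated segmentation
    and a hand-segmented gold standard."""
    a, m = auto.astype(bool), manual.astype(bool)
    inter = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    dice = 2.0 * inter / (a.sum() + m.sum())
    jaccard = inter / union
    tpvf = inter / m.sum()                 # true positive volume fraction
    rvd = (a.sum() - m.sum()) / m.sum()    # relative volume difference
    return dice, jaccard, tpvf, rvd

# Toy volumes: automated mask shifted one voxel relative to the gold standard
manual = np.zeros((10, 10, 10), dtype=bool); manual[2:8, 2:8, 2:8] = True
auto   = np.zeros((10, 10, 10), dtype=bool); auto[3:9, 2:8, 2:8] = True
dice, jaccard, tpvf, rvd = overlap_measures(auto, manual)
```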

  13. An Automated Solar Synoptic Analysis Software System

    Science.gov (United States)

    Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.

    2012-12-01

    We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, the three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. As an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygram and magnetogram data as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images from the global H-alpha network and SDO AIA 193 are used for morphological identification, with SDO HMI magnetograms for quantitative verification. The output of ASSA is routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are presented using available outputs. ASSA will be deployed at the Korean Space Weather Center and serve its customers in operational status by the end of 2012.

  14. Development of automated conjunctival hyperemia analysis software.

    Science.gov (United States)

    Sumi, Tamaki; Yoneda, Tsuyoshi; Fukuda, Ken; Hoshikawa, Yasuhiro; Kobayashi, Masahiko; Yanagi, Masahide; Kiuchi, Yoshiaki; Yasumitsu-Lovell, Kahoko; Fukushima, Atsuki

    2013-11-01

    Conjunctival hyperemia is observed in a variety of ocular inflammatory conditions. The evaluation of hyperemia is indispensable for the treatment of patients with ocular inflammation. However, the major methods currently available for evaluation are based on nonquantitative and subjective methods. Therefore, we developed novel software to evaluate bulbar hyperemia quantitatively and objectively. First, we investigated whether the histamine-induced hyperemia of guinea pigs could be quantified by image analysis. Bulbar conjunctival images were taken by means of a digital camera, followed by the binarization of the images and the selection of regions of interest (ROIs) for evaluation. The ROIs were evaluated by counting the number of absolute pixel values. Pixel values peaked significantly 1 minute after histamine challenge was performed and were still increased after 5 minutes. Second, we applied the same method to antigen (ovalbumin)-induced hyperemia of sensitized guinea pigs, acquiring similar results except for the substantial upregulation in the first 5 minutes after challenge. Finally, we analyzed human bulbar hyperemia using the new software we developed especially for human usage. The new software allows the automatic calculation of pixel values once the ROIs have been selected. In our clinical trials, the percentage of blood vessel coverage of ROIs was significantly higher in the images of hyperemia caused by allergic conjunctival diseases and hyperemia induced by Bimatoprost, compared with those of healthy volunteers. We propose that this newly developed automated hyperemia analysis software will be an objective clinical tool for the evaluation of ocular hyperemia.
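The ROI evaluation described above reduces to binarization plus a pixel ratio. A minimal sketch with a made-up 8-bit ROI and an arbitrary threshold (the software's actual binarization criterion is not specified in the abstract):

```python
import numpy as np

def vessel_coverage(roi_pixels, threshold):
    """Binarize an ROI (e.g. from a bulbar conjunctiva photograph) and
    return the percentage of pixels classified as blood vessel."""
    binary = roi_pixels > threshold
    return 100.0 * binary.sum() / binary.size

# Made-up 8-bit ROI: dark background with five bright "vessel" rows
roi = np.full((50, 50), 60, dtype=np.uint8)
roi[10:13, :] = 180
roi[30:32, :] = 200

coverage = vessel_coverage(roi, threshold=128)  # 5 rows of 50 in 2500 pixels
```

Comparing such coverage percentages between patient groups mirrors the clinical evaluation reported above.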

  15. Automated Identification of Rivers and Shorelines in Aerial Imagery Using Image Texture

    Science.gov (United States)

    2011-01-01

    …defining the criteria for segmenting the image. For these cases certain automated, unsupervised (or minimally supervised) image classification … Keywords: banks, image analysis, edge finding, photography, satellite, texture, entropy … high-resolution bank geometry. Much of the globe is covered by various sorts of multi- or hyperspectral imagery and numerous techniques have been …

  16. Automation for System Safety Analysis

    Science.gov (United States)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  17. Automated blood vessel extraction using local features on retinal images

    Science.gov (United States)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations between neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are weak for rotated images, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image together with four further inputs: the output of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output clearly white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of the blood vessels.
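The AUC used for evaluation can be computed directly from the Mann-Whitney statistic, i.e. the probability that a random vessel pixel scores higher than a random background pixel. A sketch with invented ANN outputs:

```python
import numpy as np

def auc(scores, truth):
    """ROC AUC via the Mann-Whitney U statistic (ties count one half)."""
    pos = np.asarray(scores)[np.asarray(truth) == 1]
    neg = np.asarray(scores)[np.asarray(truth) == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Made-up pixel scores (1 = vessel, 0 = background)
truth  = np.array([1, 1, 1, 0, 0, 0, 0, 1])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.5, 0.1, 0.7])
print(auc(scores, truth))   # → 0.9375
```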

  18. Toward Automated Feature Detection in UAVSAR Images

    Science.gov (United States)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.

    2014-12-01

    Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification is not robust at present: it requires a flattened background image, interpolation into missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Identification and mitigation of non-Gaussian background image noise is essential to creating a robust, automated system to search for such features. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features. Empirical examination of detrended noise, based on 20 km east-west profiles through desert terrain with little tectonic deformation for a suite of flight interferograms, shows non-Gaussian characteristics. Statistical measurement of curvature with varying length scale (Allan variance) shows nearly white behavior (Allan variance slope with spatial distance from roughly -1.76 to -2) from 25 to 400 meters; deviations from -2 suggest that short-range differences (such as those used in detecting edges) are often freer of noise than longer-range differences. At distances longer than 400 m the Allan variance flattens out, without consistency from one interferogram to another. We attribute this additional noise, which afflicts difference estimates at longer distances, to atmospheric water vapor and uncompensated aircraft motion. Paradoxically, California interferograms made with increasing time intervals before and after the El Mayor-Cucapah earthquake (2010, M7.2, Mexico) show visually stronger and more interesting edges, but edge detection methods developed for the first year do not produce reliable results over the first two years, because longer time spans suffer reduced coherence in the interferogram. The changes over time reflect fault slip and block
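The Allan-variance diagnostic can be sketched for a 1-D profile using second differences (curvature) at varying length scales; for uncorrelated (white) noise the log-log slope comes out close to the -2 reported for the 25-400 m range. The scales and data here are synthetic, and this second-difference formulation is one common convention, assumed rather than taken from the paper:

```python
import numpy as np

def allan_variance(profile, m, dx=1.0):
    """Allan variance of a spatial profile at length scale m*dx, computed
    from second differences of the profile."""
    x = np.asarray(profile, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return 0.5 * np.mean(d2 ** 2) / (m * dx) ** 2

# White (uncorrelated) noise: the log-log slope should be near -2
rng = np.random.default_rng(1)
profile = rng.normal(size=20000)
scales = np.array([1, 2, 4, 8, 16, 32])
avar = np.array([allan_variance(profile, m) for m in scales])
slope = np.polyfit(np.log(scales), np.log(avar), 1)[0]
```

A flattening of `avar` at large scales, rather than the steady decline seen here, would indicate the long-range correlated noise the abstract attributes to water vapor and aircraft motion.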

  19. Automated Pipelines for Spectroscopic Analysis

    CERN Document Server

    Prieto, Carlos Allende

    2016-01-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10% of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1%. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field-of-view and multi-object spectrographs. As a result, automated data processing is taking on ever increasing relevance, and the concept is being applied to many more areas, from targeting to analysis. In this paper, I provide a quick overvie...

  20. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    CERN Document Server

    Wollman, Adam J M; Foster, Simon; Leake, Mark C

    2016-01-01

    Staphylococcus aureus is an important pathogen, giving rise to antimicrobial resistance in cell strains such as Methicillin Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond time scale sampled images of live pathogens at a detection precision of single molecules. We demonstrate this approach using a fluorescent reporter GFP fused to the protein EzrA that localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphological...

  1. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    is based on determination of the left-ventricular endocardial and epicardial borders. Since manual border detection is laborious, automated segmentation is highly desirable as a fast, objective and reproducible alternative. Automated segmentation will thus enhance comparability between and within cardiac studies and increase accuracy by allowing acquisition of thinner MRI-slices. This abstract demonstrates that statistical models of shape and appearance, namely the deformable models: Active Appearance Models, can successfully segment cardiac MRIs.

  2. Automated classification of colon polyps in endoscopic image data

    Science.gov (United States)

    Gross, Sebastian; Palm, Stephan; Tischendorf, Jens J. W.; Behrens, Alexander; Trautwein, Christian; Aach, Til

    2012-03-01

    Colon cancer is the third most commonly diagnosed type of cancer in the US. In recent years, however, early diagnosis and treatment have caused a significant rise in the five year survival rate. Preventive screening is often performed by colonoscopy (endoscopic inspection of the colon mucosa). Narrow Band Imaging (NBI) is a novel diagnostic approach highlighting blood vessel structures on polyps which are an indicator for future cancer risk. In this paper, we review our inter- and intra-observer-independent system for the automated classification of polyps into hyperplasias and adenomas based on vessel structures, with the aim of further improving classification performance. To surpass the previous performance limitations we derive a novel vessel segmentation approach, extract 22 features to describe complex vessel topologies, and apply three feature selection strategies. Tests are conducted on 286 NBI images with diagnostically important and challenging polyps (10mm or smaller) taken from our representative polyp database. Evaluations are based on ground truth data determined by histopathological analysis. Feature selection by Simulated Annealing yields the best result with a prediction accuracy of 96.2% (sensitivity: 97.6%, specificity: 94.2%) using eight features. Future development aims at implementing a demonstrator platform to begin clinical trials at University Hospital Aachen.

  3. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    Directory of Open Access Journals (Sweden)

    Tözeren Aydın

    2007-09-01

    Background: Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross-section images. Methods: Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin (Ecad) and progesterone receptor (PR), were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Results: Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross-section images. Conclusion: The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, suitable for rapid large-scale investigations of anti-cancer compounds for drug development.
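The pixel-classification step, k-means clustering, can be sketched with plain Lloyd iterations. The study clustered stained-pixel values into five categories; the demo below uses three synthetic colour populations and a deterministic initialisation for reproducibility, both assumptions for illustration:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20):
    """Plain Lloyd's k-means: cluster pixel feature vectors (e.g. RGB
    values of a stained cross section) into k categories."""
    # Simple deterministic initialisation: k points spread through the data
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centroids = pixels[idx].astype(float)
    for _ in range(n_iter):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# Synthetic "image": three well-separated colour populations of 100 pixels
rng = np.random.default_rng(2)
means = np.array([[200, 40, 40], [40, 200, 40], [40, 40, 200]], dtype=float)
pixels = np.vstack([rng.normal(m, 5.0, size=(100, 3)) for m in means])
labels, centroids = kmeans(pixels, k=3)
```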

  4. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  5. Feasibility Analysis of Crane Automation

    Institute of Scientific and Technical Information of China (English)

    DONG Ming-xiao; MEI Xue-song; JIANG Ge-dong; ZHANG Gui-qing

    2006-01-01

    This paper summarizes the modeling methods, open-loop control and closed-loop control techniques of various forms of cranes, worldwide, and discusses their feasibilities and limitations in engineering. Then the dynamic behaviors of cranes are analyzed. Finally, we propose applied modeling methods and feasible control techniques and demonstrate the feasibilities of crane automation.

  6. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra- annotator variability among six human experts was quantified and compared to semi- and fully- automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.

  7. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation

    Science.gov (United States)

    Beijbom, Oscar; Edmunds, Peter J.; Roelfsema, Chris; Smith, Jennifer; Kline, David I.; Neal, Benjamin P.; Dunlap, Matthew J.; Moriarty, Vincent; Fan, Tung-Yung; Tan, Chih-Jui; Chan, Stephen; Treibitz, Tali; Gamst, Anthony; Mitchell, B. Greg; Kriegman, David

    2015-01-01

    Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey-images captured at four Pacific coral reefs. Inter- and intra- annotator variability among six human experts was quantified and compared to semi- and fully- automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys. PMID:26154157

  8. Image auto-zoom technology for AFM automation

    Institute of Scientific and Technical Information of China (English)

    LIU Wen-liang; QIAN Jian-qiang; LI Yuan

    2009-01-01

    For the case of atomic force microscope (AFM) automation, we automatically extract the most valuable sub-region of a given AFM image for succeeding scans, to obtain a higher resolution of the region of interest. Two objective functions are summarized based on an analysis of how the information content of a sub-region is evaluated, and the corresponding algorithm principles, based on standard deviation and Discrete Cosine Transform (DCT) compression, are derived mathematically. Algorithm realizations are analyzed, and two sub-region selection patterns, fixed-grid mode and sub-region-walk mode, are compared. To speed up the DCT compression algorithm, which is too slow for practical application, a new algorithm is proposed based on an analysis of the DCT's block computing features; it can perform hundreds of times faster than the original. Implementation results prove that this technology can be applied to automatic AFM operation. Finally, the difference between the two objective functions is discussed with detailed computations.
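The standard-deviation objective in fixed-grid mode amounts to scoring each grid tile by its standard deviation and rescanning the winner. A sketch on a synthetic scan (grid size and data are assumptions for illustration):

```python
import numpy as np

def best_subregion(image, grid=4):
    """Fixed-grid mode: split the image into grid x grid tiles and return
    (row, col, height, width) of the tile with the highest standard
    deviation, i.e. the most informative candidate for a rescan."""
    h, w = image.shape
    th, tw = h // grid, w // grid
    best, bounds = -1.0, None
    for i in range(grid):
        for j in range(grid):
            tile = image[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            if tile.std() > best:
                best, bounds = tile.std(), (i * th, j * tw, th, tw)
    return bounds

# Synthetic AFM scan: flat surface with structure confined to one tile
rng = np.random.default_rng(3)
scan = np.zeros((64, 64))
scan[16:32, 48:64] = rng.normal(0.0, 1.0, size=(16, 16))
bounds = best_subregion(scan)
```

The DCT-based objective would replace `tile.std()` with a measure of how poorly the tile compresses, at a higher computational cost, which motivates the block-computation speed-up discussed above.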

  9. Automated analysis of 3D echocardiography

    NARCIS (Netherlands)

    Stralen, Marijn van

    2009-01-01

    In this thesis we aim at automating the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome, time-consuming and is associated with inter-observer and inter-institutional variability. Methods for reconstruction o

  10. On Automating and Standardising Corpus Callosum Analysis in Brain MRI

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl

    2005-01-01

    Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with focus on measurement standardisation. The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements.

  11. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. I: test of the methodology; Reconocimiento automatizado de menas metalicas mediante analisis digital de imagen: un apoyo al proceso mineralurgico. I: ensayo metodologico

    Energy Technology Data Exchange (ETDEWEB)

    Berrezueta, E.; Castroviejo, R.

    2007-07-01

    Ore microscopy has traditionally been an important support to control ore processing, but the volume of present day processes is beyond the reach of human operators. Automation is therefore compulsory, but its development through digital image analysis, DIA, is limited by various problems, such as the similarity in reflectance values of some important ores, their anisotropism, and the performance of instruments and methods. The results presented show that automated identification and quantification by DIA are possible through multiband (RGB) determinations with a research 3CCD video camera on reflected light microscope. These results were obtained by systematic measurement of selected ores accounting for most of the industrial applications. Polarized light is avoided, so the effects of anisotropism can be neglected. Quality control at various stages and statistical analysis are important, as is the application of complementary criteria (e.g. metallogenetic). The sequential methodology is described and shown through practical examples. (Author)

  12. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development....... The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  13. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Diabetic retinopathy is the commonest cause of blindness in working age people. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages. Therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed. A number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed. The performance of the methods and their complexity are also discussed.

  14. Comparison of automated and manual segmentation of hippocampus MR images

    Science.gov (United States)

    Haller, John W.; Christensen, Gary E.; Miller, Michael I.; Joshi, Sarang C.; Gado, Mokhtar; Csernansky, John G.; Vannier, Michael W.

    1995-05-01

    The precision and accuracy of area estimates from magnetic resonance (MR) brain images using manual and automated segmentation methods are determined. Areas of the human hippocampus were measured to compare a new automatic method of segmentation with regions of interest drawn by an expert. MR images of nine normal subjects and nine schizophrenic patients were acquired with a 1.5-T unit (Siemens Medical Systems, Inc., Iselin, New Jersey). From each individual MPRAGE 3D volume image, a single comparable 2-D slice (matrix = 256 × 256) was chosen corresponding to the same coronal slice of the hippocampus. The hippocampus was first manually segmented, then segmented using high-dimensional transformations of a digital brain atlas to individual brain MR images. The repeatability of a trained rater was assessed by comparing two measurements from each individual subject. Variability was also compared within and between the schizophrenic and normal subject groups. Finally, the precision and accuracy of automated segmentation of hippocampal areas were determined by comparing automated measurements to manual segmentation measurements made by the trained rater on MR and brain slice images. The results demonstrate the high repeatability of area measurement from MR images of the human hippocampus. Automated segmentation using high-dimensional transformations from a digital brain atlas provides repeatability superior to that of manual segmentation. Furthermore, the validity of automated measurements was demonstrated by a high correlation with manual segmentation measurements made by a trained rater. Quantitative morphometry of brain substructures (e.g. the hippocampus) is feasible by use of a high-dimensional transformation of a digital brain atlas to an individual MR image. This approach automates the search for neuromorphological correlates of schizophrenia with a new, mathematically robust method with unprecedented sensitivity to small local and regional differences.

  15. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    Science.gov (United States)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and the refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed using the fast Fourier transform (FFT) and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using fully automated analysis software on 292 images captured by CSM. 
The cell density obtained by the
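
The diffraction-theory calculation itself is not reproduced in the abstract, but the core Fourier step can be sketched: a quasi-regular cell mosaic produces a ring-shaped peak in the 2-D power spectrum whose radius corresponds to the dominant cell spacing, from which a density estimate follows. This is an illustrative sketch (hypothetical function name, square images assumed), not the authors' Matlab implementation:

```python
import numpy as np

def dominant_spatial_frequency(image):
    """Estimate the dominant spatial frequency (cycles per pixel)
    of a quasi-regular cell mosaic from the ring-shaped peak in
    its 2-D Fourier power spectrum."""
    img = image - image.mean()            # suppress the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - cy, xx - cx).astype(int)
    # Radial average of the power spectrum
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    peak = int(np.argmax(radial[2:])) + 2  # skip residual low frequencies
    return peak / max(h, w)               # cycles per pixel
```

With a known image scale, the squared dominant frequency is proportional to cells per unit area for a hexagonal mosaic.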

  16. Automated Real-Time Conjunctival Microvasculature Image Stabilization.

    Science.gov (United States)

    Felder, Anthony E; Mercurio, Cesare; Wanek, Justin; Ansari, Rashid; Shahidi, Mahnaz

    2016-07-01

    The bulbar conjunctiva is a thin, vascularized membrane covering the sclera of the eye. Non-invasive imaging techniques have been utilized to assess the conjunctival vasculature as a means of studying microcirculatory hemodynamics. However, eye motion often confounds quantification of these hemodynamic properties. In the current study, we present a novel optical imaging system for automated stabilization of conjunctival microvasculature images by real-time eye motion tracking and realignment of the optical path. The ability of the system to stabilize conjunctival images acquired over time by reducing image displacements and maintaining the imaging area was demonstrated.

  17. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  18. Automated processing of webcam images for phenological classification.

    Science.gov (United States)

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H; Schunk, Christian; Kauermann, Göran

    2017-01-01

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images every day of the same natural scene, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows the dates of phenological change points to be determined, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software
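
The semi-supervised variant can be sketched as follows: compute the Pearson correlation between each pixel's greenness time series and the mean series of a few hand-picked prototype pixels, and keep the highly correlated pixels as the region of interest. This is a hypothetical sketch under those assumptions, not the authors' implementation:

```python
import numpy as np

def select_roi_by_correlation(greenness, prototype_idx, threshold=0.9):
    """Semi-supervised ROI selection: keep pixels whose percentage-
    greenness time series correlates strongly with the mean series
    of a few prototype pixels.

    greenness: array of shape (T, H, W), one greenness value per
    pixel per day; prototype_idx: list of (row, col) prototypes."""
    T, H, W = greenness.shape
    series = greenness.reshape(T, -1)
    proto = np.mean([greenness[:, r, c] for r, c in prototype_idx], axis=0)
    # Pearson correlation of every pixel's series with the prototype
    s = (series - series.mean(0)) / (series.std(0) + 1e-12)
    p = (proto - proto.mean()) / (proto.std() + 1e-12)
    corr = (s * p[:, None]).mean(0)
    return (corr >= threshold).reshape(H, W)
```

The unsupervised alternative described in the abstract would instead cluster the columns of `series` by their scores in a singular value decomposition.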

  19. Automated Technology for Verification and Analysis

    DEFF Research Database (Denmark)

    This volume contains the papers presented at the 7th International Symposium on Automated Technology for Verification and Analysis held during October 13-16 in Macao SAR, China. The primary objective of the ATVA conferences remains the same: to exchange and promote the latest advances of state-of-the-art research on theoretical and practical aspects of automated analysis, verification, and synthesis. Among 74 research papers and 10 tool papers submitted to ATVA 2009, the Program Committee accepted 23 as regular papers and 3 as tool papers. In all, 33 experts from 17 countries worked hard to make sure...

  20. Automated quantification of budding Saccharomyces cerevisiae using a novel image cytometry method.

    Science.gov (United States)

    Laverty, Daniel J; Kury, Alexandria L; Kuksin, Dmitry; Pirani, Alnoor; Flanagan, Kevin; Chan, Leo Li-Ying

    2013-06-01

    The measurements of concentration, viability, and budding percentages of Saccharomyces cerevisiae are performed on a routine basis in the brewing and biofuel industries. Generation of these parameters is of great importance in a manufacturing setting, where they can aid in the estimation of product quality, quantity, and fermentation time of the manufacturing process. Specifically, budding percentages can be used to estimate the reproduction rate of yeast populations, which directly correlates with metabolism of polysaccharides and bioethanol production, and can be monitored to maximize production of bioethanol during fermentation. The traditional method involves manual counting using a hemacytometer, but this is time-consuming and prone to human error. In this study, we developed a novel automated method for the quantification of yeast budding percentages using Cellometer image cytometry. The automated method utilizes a dual-fluorescent nucleic acid dye to specifically stain live cells for imaging analysis of unique morphological characteristics of budding yeast. In addition, cell cycle analysis is performed as an alternative method for budding analysis. We were able to show comparable yeast budding percentages between manual and automated counting, as well as cell cycle analysis. The automated image cytometry method is used to analyze and characterize corn mash samples directly from fermenters during standard fermentation. Since concentration, viability, and budding percentages can be obtained simultaneously, the automated method can be integrated into the fermentation quality assurance protocol, which may improve the quality and efficiency of beer and bioethanol production processes.

  1. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Directory of Open Access Journals (Sweden)

    Mohendra Roy

    2016-05-01

    Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects, and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive thresholding and clustering of signals. For this purpose, images from the lens-free system were used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and a linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings.
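
A minimal sketch of the adaptive-threshold-plus-clustering idea (hypothetical, not the authors' algorithm): threshold each pixel against its local block mean, then group foreground pixels into 4-connected clusters and count them:

```python
import numpy as np
from collections import deque

def count_objects(image, block=16, offset=5):
    """Adaptive threshold against the local block mean, then count
    4-connected clusters of foreground pixels via BFS flood fill."""
    h, w = image.shape
    local = np.zeros_like(image, dtype=float)
    for i in range(0, h, block):          # coarse local-mean threshold map
        for j in range(0, w, block):
            local[i:i+block, j:j+block] = image[i:i+block, j:j+block].mean()
    fg = image > local + offset
    seen = np.zeros_like(fg, dtype=bool)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if fg[sy, sx] and not seen[sy, sx]:
                count += 1                # new cluster found
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                  # BFS flood fill of the cluster
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count
```

In practice each cluster would then be matched against the expected diffraction-pattern signature before being counted as a micro-object.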

  2. High-resolution image analysis.

    Science.gov (United States)

    Preston, K

    1986-01-01

    In many departments of cytology, cytogenetics, hematology, and pathology, research projects using high-resolution computerized microscopy are now being mounted for computation of morphometric measurements on various structural components, as well as for determination of cellular DNA content. The majority of these measurements are made in a partially automated, computer-assisted mode, wherein there is strong interaction between the user and the computerized microscope. At the same time, full automation has been accomplished for both sample preparation and sample examination for clinical determination of the white blood cell differential count. At the time of writing, approximately 1,000 robot differential counting microscopes are in the field, analyzing images of human white blood cells, red blood cells, and platelets at an overall rate of about 100,000 slides per day. This mammoth throughput represents a major accomplishment in the application of machine vision to automated microscopy for hematology. In other areas of automated high-resolution microscopy, such as cytology and cytogenetics, no commercial instruments are available (although a few metaphase-finding machines are available and other new machines have been announced during the past year). This is a disappointing record, considering the nearly half century of research effort in these areas. This paper provides examples of the state of the art in automation of cell analysis for blood smears, cervical smears, and chromosome preparations. Also treated are new developments in multi-resolution automated microscopy, where images are now being generated and analyzed by a single machine over a range of 64:1 magnification and from 10,000 × 20,000 down to 500 × 500 total picture elements (pixels). Examples of images of human lymph node and liver tissue are presented. Semi-automated systems are not treated, although there is mention of recent research in the automation of tissue analysis.

  3. Automated image registration for FDOPA PET studies

    Science.gov (United States)

    Lin, Kang-Ping; Huang, Sung-Cheng; Yu, Dan-Chu; Melega, William; Barrio, Jorge R.; Phelps, Michael E.

    1996-12-01

    In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC) were implemented, and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for comparing the resliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of the different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods to correct for subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in spatial distribution of FDOPA due to the drug intervention.
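
The bidirectional criterion can be sketched as follows for SAD; `forward` and `inverse` stand for resampling with a candidate transform and its inverse, and an optimizer such as Powell's method would minimize the returned value over the transform parameters. Function names are illustrative, not from the paper:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two images."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def bidirectional_sad(img1, img2, forward, inverse):
    """Bidirectional SAD criterion: average the mismatch of
    (resliced img1 vs img2) and (resliced img2 vs img1), which
    symmetrizes the cost with respect to the two images."""
    return 0.5 * (sad(forward(img1), img2) + sad(inverse(img2), img1))
```

The MSD, CC, SDPR, and SSC criteria fit the same template with a different per-pixel mismatch measure.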

  4. Automation of the proximate analysis of coals

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    A study is reported of the feasibility of using a multi-jointed general-purpose robot for the automated analysis of moisture, volatile matter, ash and total post-combustion sulfur in coal and coke. The results obtained with an automated system are compared with those of conventional manual methods. The design of the robot hand and the safety measures provided are now both fully satisfactory, and the analytic values obtained exhibit little scatter. It is concluded that the use of this robot system results in a better working environment and in considerable labour saving. Applications to other tasks are under development.

  5. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry to brain MR image

  6. Automated quality assessment in three-dimensional breast ultrasound images.

    Science.gov (United States)

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects.

  7. An Automated Method for Semantic Classification of Regions in Coastal Images

    NARCIS (Netherlands)

    Hoonhout, B.M.; Radermacher, M.; Baart, F.; Van der Maaten, L.J.P.

    2015-01-01

    Large, long-term coastal imagery datasets are nowadays a low-cost source of information for various coastal research disciplines. However, the applicability of many existing algorithms for coastal image analysis is limited for these large datasets due to a lack of automation and robustness. Therefor

  8. Automated indexing of Laue images from polycrystalline materials

    Energy Technology Data Exchange (ETDEWEB)

    Chung, J.S.; Ice, G.E. [Oak Ridge National Lab., TN (United States). Metals and Ceramics Div.

    1998-12-31

    Third generation hard x-ray synchrotron sources and new x-ray optics have revolutionized x-ray microbeams. Now intense sub-micron x-ray beams are routinely available for x-ray diffraction measurement. An important application of sub-micron x-ray beams is analyzing polycrystalline material by measuring the diffraction of individual grains. For these measurements, conventional analysis methods will not work. The most suitable method for microdiffraction on polycrystalline samples is taking broad-bandpass or white-beam Laue images. With this method, the crystal orientation and non-isostatic strain can be measured rapidly without rotation of sample or detector. The essential step is indexing the reflections from more than one grain. An algorithm has recently been developed to index broad bandpass Laue images from multi-grain samples. For a single grain, a unique set of indices is found by comparing measured angles between Laue reflections and angles between possible indices derived from the x-ray energy bandpass and the scattering angle 2 theta. This method has been extended to multigrain diffraction by successively indexing points not recognized in preceding indexing iterations. This automated indexing method can be used in a wide range of applications.
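
The angle-comparison step can be sketched for a cubic lattice, where plane normals are simply the hkl vectors: the measured angle between two Laue reflections is matched against angles between candidate index pairs. This is an illustrative reduction of the indexing idea, not the authors' full multigrain algorithm:

```python
import numpy as np
from itertools import combinations

def angle(u, v):
    """Angle in degrees between two direction vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def match_pair(measured_pair, candidate_hkls, tol=0.2):
    """Return candidate (hkl1, hkl2) pairs whose inter-planar angle
    (cubic lattice assumed, so plane normals equal the hkl vectors)
    matches the measured angle between two Laue reflections."""
    target = angle(*measured_pair)
    hits = []
    for h1, h2 in combinations(candidate_hkls, 2):
        if abs(angle(np.array(h1, float), np.array(h2, float)) - target) < tol:
            hits.append((h1, h2))
    return hits
```

The full method additionally restricts the candidate list using the energy bandpass and scattering angle, and iterates over the reflections left unindexed by earlier grains.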

  9. Automated Archiving of Archaeological Aerial Images

    Directory of Open Access Journals (Sweden)

    Michael Doneus

    2016-03-01

    The main purpose of any aerial photo archive is to allow quick access to images based on content and location. Therefore, next to a description of technical parameters and depicted content, georeferencing of every image is of vital importance. This can be done either by identifying the main photographed object (georeferencing of the image content) or by mapping the center point and/or the outline of the image footprint. The paper proposes a new image archiving workflow. The new pipeline is based on the parameters that are logged by a commercial but cost-effective GNSS/IMU solution and processed with in-house-developed software. Together, these components allow one to automatically geolocate and rectify the (oblique) aerial images (by a simple planar rectification using the exterior orientation parameters) and to retrieve their footprints with reasonable accuracy, which are automatically stored as a vector file. The data of three test flights were used to determine the accuracy of the device, which turned out to be better than 1° for roll and pitch (mean between 0.0 and 0.21, with a standard deviation of 0.17–0.46) and better than 2.5° for yaw angles (mean between 0.0 and −0.14, with a standard deviation of 0.58–0.94). This turned out to be sufficient to enable fast and almost automatic GIS-based archiving of all of the imagery.

  10. Advances in monitoring dynamic hydrologic conditions in the vadose zone through automated high-resolution ground-penetrating radar imaging and analysis

    Science.gov (United States)

    Mangel, Adam R.

    This body of research focuses on resolving physical and hydrological heterogeneities in the subsurface with ground-penetrating radar (GPR). Essentially, there are two facets of this research centered on the goal of improving the collective understanding of unsaturated flow processes: i) modifications to commercially available equipment to optimize the hydrologic value of the data and ii) the development of novel methods for data interpretation and analysis in a hydrologic context given the increased hydrologic value of the data. Regarding modifications to equipment, automation of GPR data collection substantially enhances our ability to measure changes in the hydrologic state of the subsurface at high spatial and temporal resolution (Chapter 1). Additionally, automated collection shows promise for quick high-resolution mapping of dangerous subsurface targets, like unexploded ordnance, that may have alternate signals depending on the hydrologic environment (Chapter 5). Regarding novel methods for data inversion, dispersive GPR data collected during infiltration can constrain important information about the local 1D distribution of water in waveguide layers (Chapters 2 and 3); however, more data are required to reliably analyze the complicated patterns produced by wetting of the soil. In this regard, data collected in 2D and 3D geometries can further illustrate evidence of heterogeneous flow, while maintaining the content for resolving wave velocities and therefore water content. This enables the use of algorithms like reflection tomography, which show the ability of the GPR data to independently resolve water content distribution in homogeneous soils (Chapter 5). In conclusion, automation enables the non-invasive study of highly dynamic hydrologic processes by providing the high resolution data required to interpret and resolve spatial and temporal wetting patterns associated with heterogeneous flow. By automating the data collection, it also allows for the novel

  11. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    Science.gov (United States)

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-03-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both images types compared to only using reflectance images. The improvements were large, in particular, for coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.
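
At the input level, combining the two registered modalities amounts to channel concatenation before the convolutional network. A minimal, framework-agnostic sketch (hypothetical function name, not the authors' pipeline):

```python
import numpy as np

def fuse_channels(reflectance, fluorescence):
    """Channel-level fusion sketch: stack a registered reflectance
    image and a fluorescence image into one multi-channel array of
    shape (H, W, C_r + C_f) that a convolutional network can take
    as input."""
    if reflectance.shape[:2] != fluorescence.shape[:2]:
        raise ValueError("images must be registered to the same grid")
    if fluorescence.ndim == 2:            # grayscale -> single channel
        fluorescence = fluorescence[..., None]
    return np.concatenate([reflectance, fluorescence], axis=-1)
```

The accuracy gain reported in the abstract comes from training the network on such fused inputs rather than on reflectance alone.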

  12. Volumetric measurements of pulmonary nodules: variability in automated analysis tools

    Science.gov (United States)

    Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot

    2007-03-01

    Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications for management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive statistics as well as ANOVA and t-test analysis. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (Lung Image Database Consortium) study.

  13. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for payload math models on a specific Space Transportation System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling payload math models with the shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  14. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    Science.gov (United States)

    Wollman, Adam J. M.; Miller, Helen; Foster, Simon; Leake, Mark C.

    2016-10-01

Staphylococcus aureus is an important pathogen that has given rise to antimicrobial-resistant strains such as methicillin-resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing image analysis tools to detect cellular and subcellular morphological features relevant to cell division from images of live pathogens sampled at millisecond time scales with single-molecule detection precision. We demonstrate this approach using a fluorescent GFP reporter fused to the protein EzrA, which localises to the mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials that target the cell division machinery, but it may also have more general application in detecting morphologically complex structures of fluorescently labelled proteins present in clusters of other cell types.

  15. Automated digital image analysis of islet cell mass using Nikon's inverted Eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    Science.gov (United States)

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
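The IEQ normalisation underlying both counting methods can be sketched in a few lines. This is an illustrative volume-ratio formula only (one IEQ is the volume of a 150 µm islet); clinical protocols typically bin islets into 50 µm size classes with tabulated conversion factors, and the ADIA software's exact computation is not described here.

```python
def islet_equivalents(diameters_um):
    """Convert measured islet diameters (µm) to islet equivalents (IEQ).

    One IEQ is defined as the volume of a standard islet 150 µm in
    diameter, so each islet contributes (d / 150)^3 equivalents.
    Illustrative sketch only; clinical counting usually bins islets
    into 50 µm size classes with tabulated conversion factors.
    """
    return sum((d / 150.0) ** 3 for d in diameters_um)

# A 150 µm islet is exactly 1 IEQ; a 300 µm islet has 8x the volume.
print(islet_equivalents([150]))   # 1.0
print(islet_equivalents([300]))   # 8.0
print(islet_equivalents([75, 150, 300]))   # 9.125
```

This makes explicit why small islets (10 to 50 µm) contribute almost nothing to the IEQ total even when they are numerous.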

  16. Automated Pointing of Cardiac Imaging Catheters.

    Science.gov (United States)

    Loschak, Paul M; Brattain, Laura J; Howe, Robert D

    2013-12-31

Intracardiac echocardiography (ICE) catheters enable high-quality ultrasound imaging within the heart, but their use in guiding procedures is limited due to the difficulty of manually pointing them at structures of interest. This paper presents the design and testing of a catheter steering model for robotic control of commercial ICE catheters. The four actuated degrees of freedom (4-DOF) are the two catheter handle knobs, which produce bi-directional bending, in combination with rotation and translation of the handle. An extra degree of freedom in the system allows the imaging plane (dependent on orientation) to be directed at an object of interest. A closed-form solution for forward and inverse kinematics enables control of the catheter tip position and the imaging plane orientation. The proposed algorithms were validated with a robotic test bed using electromagnetic sensor tracking of the catheter tip. The ability to automatically acquire imaging targets in the heart may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissue structures that are not visible using standard fluoroscopic guidance. Although the system has been developed and tested for manipulating ICE catheters, the methods described here are applicable to any long, thin, tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.
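To give a feel for the kind of kinematic model involved, here is a forward-kinematics sketch for a single tendon-driven bending section under the common constant-curvature assumption. The assumption, the function, and the parameter names are ours for illustration; they are not the authors' exact closed-form model.

```python
import math

def bending_tip_position(length, bend_angle, roll):
    """Tip position of a tendon-driven bending section, assuming the
    section bends with constant curvature (a standard simplification
    for continuum tools, not necessarily the paper's exact model).

    length     -- arc length of the bending section
    bend_angle -- total bend angle in radians (set by a handle knob)
    roll       -- rotation of the handle about the catheter axis
    Returns (x, y, z); z is along the undeflected catheter axis.
    """
    if abs(bend_angle) < 1e-9:           # straight catheter: no bend
        return (0.0, 0.0, length)
    r = length / bend_angle              # radius of curvature
    x_plane = r * (1.0 - math.cos(bend_angle))
    z = r * math.sin(bend_angle)
    # Handle roll rotates the bending plane about the z axis.
    return (x_plane * math.cos(roll), x_plane * math.sin(roll), z)

print(bending_tip_position(100.0, 0.0, 0.0))   # (0.0, 0.0, 100.0)
tip = bending_tip_position(100.0, math.pi / 2, 0.0)  # quarter-circle bend
```

Inverting such a model (solving for knob angle, roll, and translation given a target point) is what lets the controller point the imaging plane automatically.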

  17. Automated thresholding in radiographic image for welded joints

    Science.gov (United States)

    Yazid, Haniza; Arof, Hamzah; Yazid, Hafizal

    2012-03-01

Automated detection of welding defects in radiographic images becomes non-trivial when uneven illumination, contrast variation and noise are present. In this paper, a new surface thresholding method is introduced to detect defects in radiographic images of welding joints. In the first stage, several image processing techniques, namely fuzzy c-means clustering, region filling, mean filtering, edge detection, Otsu's thresholding and morphological operations, are utilised to locate the area in which defects might exist. This is followed in the second stage by the implementation of inverse surface thresholding with a partial differential equation to locate isolated areas that represent the defects. The proposed method produced promising results with high precision.
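One stage of the first-pass pipeline above, Otsu's thresholding, is simple enough to sketch directly: it picks the grey-level threshold that maximises the between-class variance of the histogram. This is a generic textbook implementation, not the authors' code.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold that maximises between-class
    variance of the grey-level histogram. Minimal sketch of one stage
    of the weld-inspection pipeline described above."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]              # background weight up to level t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated grey-level populations, e.g. weld metal vs. a void.
dark = [10, 12, 11, 13, 10, 12]
bright = [200, 198, 201, 199, 202, 200]
t = otsu_threshold(dark + bright)
print(t)   # 13
```

In practice the method is applied to the histogram of the whole radiograph, which is exactly where uneven illumination causes trouble and motivates the surface-thresholding refinement in the second stage.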

  18. Automated processing of webcam images for phenological classification

    Science.gov (United States)

    Bothmann, Ludwig; Menzel, Annette; Menze, Bjoern H.; Schunk, Christian; Kauermann, Göran

    2017-01-01

Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural motive, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows the determination of dates of phenological change points, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the statistical software
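The semi-supervised variant can be sketched compactly: compute percentage greenness per pixel over time, then keep pixels whose series correlates strongly with a hand-picked prototype. The greenness definition G/(R+G+B) is standard in this literature; the correlation cutoff and function names below are our illustrative choices, not the paper's exact criterion.

```python
def greenness(rgb_series):
    """Percentage greenness G/(R+G+B) for one pixel's RGB time series."""
    return [g / float(r + g + b) for (r, g, b) in rgb_series]

def pearson(x, y):
    """Pearson correlation; returns 0.0 for constant series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy)

def select_roi(pixel_series, prototype, r_min=0.9):
    """Semi-supervised ROI definition: keep pixel indices whose greenness
    time series correlates with a prototype pixel at r >= r_min."""
    proto_g = greenness(prototype)
    return [i for i, series in enumerate(pixel_series)
            if pearson(greenness(series), proto_g) >= r_min]

pixels = [
    [(90, 60, 100), (70, 130, 60), (40, 210, 40)],        # greening vegetation
    [(120, 120, 120), (120, 120, 120), (120, 120, 120)],  # grey sky pixel
]
prototype = [(100, 50, 100), (80, 120, 60), (50, 200, 30)]
print(select_roi(pixels, prototype))   # [0]
```

Run over a whole frame, this yields a data-driven region of interest without any manual outlining.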

  19. SAND: Automated VLBI imaging and analyzing pipeline

    Science.gov (United States)

    Zhang, Ming

    2016-05-01

The Search And Non-Destroy (SAND) is a VLBI data reduction pipeline composed of a set of Python programs based on the AIPS interface provided by ObitTalk. It is designed for the massive data reduction of multi-epoch VLBI monitoring research. It can automatically investigate calibrated visibility data, search for all radio emission above a given noise floor, and do model fitting either on the CLEANed image or directly on the uv data. It then digests the model-fitting results, intelligently identifies the multi-epoch jet component correspondence, and recognizes linear or non-linear proper motion patterns. The outputs include a CLEANed image catalogue with polarization maps, an animation cube, proper motion fits, and core light curves. For uncalibrated data, a user can easily add inline modules to do the calibration and self-calibration in a batch for a specific array.
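The linear proper-motion pattern recognition reduces, at its core, to fitting position against epoch for each identified jet component. A minimal ordinary-least-squares sketch (our illustration, not SAND's code, which works with full 2-D positions and uncertainties):

```python
def linear_proper_motion(epochs, positions):
    """Least-squares fit of position = p0 + mu * t for one jet component.
    Returns (p0, mu), where mu is the proper motion (e.g. mas/yr).
    Illustrative 1-D sketch; a real pipeline fits both sky coordinates
    with per-epoch uncertainties."""
    n = len(epochs)
    t_mean = sum(epochs) / n
    p_mean = sum(positions) / n
    num = sum((t - t_mean) * (p - p_mean)
              for t, p in zip(epochs, positions))
    den = sum((t - t_mean) ** 2 for t in epochs)
    mu = num / den
    return p_mean - mu * t_mean, mu

# A component separating from the core at 0.5 mas/yr over four epochs.
p0, mu = linear_proper_motion([2000, 2001, 2002, 2003],
                              [2.0, 2.5, 3.0, 3.5])
print(mu)   # 0.5
```

Components whose residuals from such a fit stay small are flagged as linear movers; the rest are candidates for non-linear (e.g. accelerating or curved) trajectories.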

  20. A Modular Approach for Automating Video Analysis

    OpenAIRE

    Nadarajan, Gayathri; Renouf, Arnaud

    2007-01-01

Automating the steps involved in video processing has yet to be tackled with much success by vision developers and knowledge engineers. This is due to the difficulty of formulating vision problems and their solutions in a generalised manner. In this collaborative work, we introduce a modular approach that utilises ontologies to capture the goals, domain description and capabilities for performing video analysis. This modularisation is tested on real-world videos from an...

  1. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

Computed tomographic (CT) images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
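The voxelwise comparison at the heart of this approach can be sketched numerically: after normalization, each patient voxel is scored against the control-group distribution at the same location. The z-score criterion and threshold below are our simplified stand-in; the paper's actual statistical model may differ.

```python
def lesion_map(patient, controls, z_thresh=2.0):
    """Voxelwise comparison of a normalised patient CT against control CTs.

    Flags voxels deviating from the control mean by more than z_thresh
    control standard deviations: -1 = hypo-intense (infarct-like),
    +1 = hyper-intense (haemorrhage-like), 0 = within normal range.
    Illustrative sketch; threshold and statistics are our assumptions."""
    n = len(controls)
    flags = []
    for i, v in enumerate(patient):
        vals = [c[i] for c in controls]
        mean = sum(vals) / n
        var = sum((x - mean) ** 2 for x in vals) / (n - 1)
        sd = var ** 0.5
        z = 0.0 if sd == 0 else (v - mean) / sd
        flags.append(-1 if z < -z_thresh else (1 if z > z_thresh else 0))
    return flags

# Four voxels, three control scans; voxel 1 is dark, voxel 3 is bright.
controls = [[30, 30, 30, 30], [32, 31, 29, 30], [31, 29, 30, 31]]
patient = [31, 10, 30, 60]
print(lesion_map(patient, controls))   # [0, -1, 0, 1]
```

Real CT volumes have millions of voxels, but the per-voxel logic is the same; the hard part the paper addresses is getting the normalization into template space right so that voxel-to-voxel comparison is meaningful.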

  2. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  3. Failure modes and effects analysis automation

    Science.gov (United States)

    Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron

    1988-01-01

A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge-based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single-user workstations and, although not designed to replace conventional FMEA, it is expected to decrease the time required to perform the analysis by many man-years.

  4. Proximate analysis by automated thermogravimetry

    Energy Technology Data Exchange (ETDEWEB)

    Elder, J.P.

    1983-05-01

    A study has been made of the use of the Perkin-Elmer thermogravimetric instrument TGS-2, under the control of the System 4 microprocessor for the automatic proximate analysis of solid fossil fuels and related matter. The programs developed are simple to operate, and do not require detailed temperature calibration of the instrumental system. They have been tested with coals of varying rank, biomass samples and Devonian oil shales all of which were of special importance to the State of Kentucky. Precise, accurate data conforming to ASTM specifications were obtained. The simplicity of the technique suggests that it may complement the classical ASTM method and could be used when this latter procedure cannot be employed. However, its adoption as a standardized method must await the development of statistical data resulting from interlaboratory testing on a variety of fossil fuels. (9 refs.)
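The arithmetic of a proximate analysis from a thermogravimetric run is simple once the mass-loss stages are identified: moisture leaves on drying, volatile matter on devolatilisation under inert gas, fixed carbon on combustion, and ash remains. The stage names and the single-step programme in this sketch are illustrative, not the TGS-2/System 4 protocol described above.

```python
def proximate_analysis(mass_trace):
    """Proximate analysis from an idealised thermogravimetric (TG) run.

    mass_trace maps stage names to sample mass (mg) at the end of each
    programme stage: initial -> dried (moisture off, inert gas) ->
    devolatilised (volatiles off, inert gas) -> ashed (combustion in O2).
    Returns fractions as wt% of the initial sample. Stage names and the
    idealised programme are illustrative assumptions."""
    m0 = mass_trace["initial"]
    fractions = {
        "moisture": m0 - mass_trace["dried"],
        "volatile matter": mass_trace["dried"] - mass_trace["devolatilised"],
        "fixed carbon": mass_trace["devolatilised"] - mass_trace["ashed"],
        "ash": mass_trace["ashed"],
    }
    return {k: 100.0 * v / m0 for k, v in fractions.items()}

result = proximate_analysis({"initial": 20.0, "dried": 19.0,
                             "devolatilised": 12.0, "ashed": 2.0})
print(result)   # moisture 5%, volatile matter 35%, fixed carbon 50%, ash 10%
```

By construction the four fractions sum to 100% of the initial mass, which is also a useful sanity check on a real TG trace.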

  5. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  6. Flux-P: Automating Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Birgitta E. Ebert

    2012-11-01

Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly and have to be estimated; for instance, via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P as an approach to automate and standardize 13C-based metabolic flux analysis, using the Bio-jETI workflow framework. Based exemplarily on the FiatFlux software, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significant acceleration and standardization of the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.

  7. Multispectral tissue analysis and classification towards enabling automated robotic surgery

    Science.gov (United States)

    Triana, Brian; Cha, Jaepyeong; Shademan, Azad; Krieger, Axel; Kang, Jin U.; Kim, Peter C. W.

    2014-02-01

Accurate optical characterization of different tissue types is an important tool for potentially guiding surgeons and enabling automated robotic surgery. Multispectral imaging and analysis have been used in the literature to detect spectral variations in tissue reflectance that may be visible to the naked eye. Using this technique, hidden structures can be visualized and analyzed for effective tissue classification. Here, we investigated the feasibility of automated tissue classification using multispectral tissue analysis. Broadband reflectance spectra (200-1050 nm) were collected from nine different ex vivo porcine tissue types using an optical fiber-probe based spectrometer system. We created a mathematical model to train and distinguish different tissue types based upon analysis of the observed spectra using total principal component regression (TPCR). Compared to other reported methods, our technique is computationally inexpensive and suitable for real-time implementation. Each of the 92 spectra was cross-referenced against the nine tissue types. Preliminary results show a mean detection rate of 91.3%, with detection rates of 100% and 70.0% (inner and outer kidney), 100% and 100% (inner and outer liver), 100% (outer stomach), and 90.9%, 100%, 70.0%, 85.7% (four different inner stomach areas, respectively). We conclude that automated tissue differentiation using our multispectral tissue analysis method is feasible in multiple ex vivo tissue specimens. Although measurements were performed using ex vivo tissues, these results suggest that real-time, in vivo tissue identification during surgery may be possible.
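The classification step can be illustrated with a deliberately simplified stand-in for TPCR: represent each tissue class by its mean training spectrum and assign a query spectrum to the nearest class centroid. This nearest-mean scheme is our substitution for illustration, not the paper's regression model, and the spectra below are made up.

```python
def classify_spectrum(spectrum, training):
    """Nearest-mean classification of a reflectance spectrum.

    A simplified stand-in for total principal component regression
    (TPCR): each tissue class is its mean training spectrum, and a
    query is assigned to the class at smallest Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    centroids = {}
    for label, spectra in training.items():
        n = len(spectra)
        centroids[label] = [sum(s[i] for s in spectra) / n
                            for i in range(len(spectra[0]))]
    return min(centroids, key=lambda lab: dist(spectrum, centroids[lab]))

# Toy three-band "spectra" for two hypothetical tissue classes.
training = {
    "liver":  [[0.10, 0.40, 0.70], [0.12, 0.38, 0.72]],
    "kidney": [[0.50, 0.20, 0.30], [0.48, 0.22, 0.28]],
}
print(classify_spectrum([0.11, 0.41, 0.69], training))   # liver
```

Like the paper's method, this is cheap enough for real-time use; the point of TPCR over a plain nearest-mean rule is to handle correlated, high-dimensional bands (200-1050 nm) robustly.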

  8. Automated identification of mitochondrial regions in complex intracellular space by texture analysis

    Science.gov (United States)

    Pham, Tuan D.

    2014-01-01

Automated processing and quantification of biological images have been attracting increasing attention from researchers in image processing and pattern recognition, because computerized image and pattern analyses play critical roles in new biological findings and drug discovery based on modern high-throughput and high-content image screening. This paper presents a study of the automated detection of regions of mitochondria, a subcellular structure of eukaryotic cells, in microscopy images. The automated identification of mitochondria in intracellular space captured by the state-of-the-art combination of focused ion beam and scanning electron microscope imaging reported here is the first of its type. Existing methods and a proposed algorithm for texture analysis were tested with real intracellular images. The high rate of correctly detecting the locations of the mitochondria in a complex environment suggests the effectiveness of the proposed approach.
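A common building block for this kind of texture analysis is the grey-level co-occurrence matrix (GLCM) and features derived from it, such as contrast. The sketch below is a generic GLCM contrast feature in the spirit of the texture methods discussed, not the paper's specific algorithm.

```python
def glcm_contrast(image, levels=4):
    """GLCM contrast for horizontally adjacent pixel pairs.

    Builds a grey-level co-occurrence matrix over (left, right)
    neighbour pairs, then returns the mean squared grey-level
    difference: 0 for uniform regions, large for striped textures.
    Generic texture descriptor, not the paper's exact method."""
    glcm = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            pairs += 1
    return sum(glcm[i][j] * (i - j) ** 2
               for i in range(levels) for j in range(levels)) / pairs

smooth = [[1, 1, 1, 1]] * 4     # uniform region, e.g. cytosol
textured = [[0, 3, 0, 3]] * 4   # striped, cristae-like texture
print(glcm_contrast(smooth))     # 0.0
print(glcm_contrast(textured))   # 9.0
```

Computed over sliding windows, such features let a classifier separate mitochondria-like textured regions from smoother surroundings in FIB-SEM slices.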

  9. Automated interpretation of optic nerve images: a data mining framework for glaucoma diagnostic support.

    Science.gov (United States)

    Abidi, Syed S R; Artes, Paul H; Yun, Sanjan; Yu, Jin

    2007-01-01

    Confocal Scanning Laser Tomography (CSLT) techniques capture high-quality images of the optic disc (the retinal region where the optic nerve exits the eye) that are used in the diagnosis and monitoring of glaucoma. We present a hybrid framework, combining image processing and data mining methods, to support the interpretation of CSLT optic nerve images. Our framework features (a) Zernike moment methods to derive shape information from optic disc images; (b) classification of optic disc images, based on shape information, to distinguish between healthy and glaucomatous optic discs. We apply Multi Layer Perceptrons, Support Vector Machines and Bayesian Networks for feature sub-set selection and image classification; and (c) clustering of optic disc images, based on shape information, using Self-Organizing Maps to visualize sub-types of glaucomatous optic disc damage. Our framework offers an automated and objective analysis of optic nerve images that can potentially support both diagnosis and monitoring of glaucoma.
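The Zernike moments used for shape description have a property worth making concrete: the magnitude |A_nm| is invariant to rotation of the image, which is what makes these moments attractive features for optic disc shape. Below is a minimal direct-summation sketch over a square image mapped to the unit disk; production code would also normalise for translation and scale.

```python
import cmath
import math

def zernike_moment(image, n, m):
    """Magnitude of the Zernike moment A_nm of a square grey image
    mapped onto the unit disk. |A_nm| is rotation invariant. Minimal
    sketch; real feature extraction also normalises position/scale."""
    size = len(image)
    c = (size - 1) / 2.0
    acc = 0.0 + 0.0j
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            dx, dy = (x - c) / c, (y - c) / c
            rho = math.hypot(dx, dy)
            if rho > 1.0:            # only pixels inside the unit disk
                continue
            theta = math.atan2(dy, dx)
            # Radial polynomial R_nm(rho); requires n - m even, n >= m.
            r = sum((-1) ** k * math.factorial(n - k)
                    / (math.factorial(k)
                       * math.factorial((n + m) // 2 - k)
                       * math.factorial((n - m) // 2 - k))
                    * rho ** (n - 2 * k)
                    for k in range((n - m) // 2 + 1))
            acc += val * r * cmath.exp(-1j * m * theta)
    return abs(acc * (n + 1) / math.pi)

img = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
rotated = [list(r) for r in zip(*img[::-1])]   # rotate 90 degrees
a, b = zernike_moment(img, 3, 1), zernike_moment(rotated, 3, 1)
print(abs(a - b) < 1e-9)   # True: magnitude unchanged by rotation
```

A feature vector of several |A_nm| values per optic disc image is then what the MLP/SVM/Bayesian classifiers in the framework operate on.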

  10. Image mosaicing for automated pipe scanning

    Science.gov (United States)

    Summan, Rahul; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Forrester, Cailean; Pierce, Gareth; Bolton, Gary

    2015-03-01

Remote visual inspection (RVI) is critical for the inspection of the interior condition of pipelines, particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time-intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature-based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and encoder, respectively. Such a model affords a global view of the data, thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe, thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed, while the latter demonstrates the efficacy of the technique in practice.

  11. AUTOMATED IMAGE MATCHING WITH CODED POINTS IN STEREOVISION MEASUREMENT

    Institute of Scientific and Technical Information of China (English)

    Dong Mingli; Zhou Xiaogang; Zhu Lianqing; Lü Naiguang; Sun Yunan

    2005-01-01

A coding-based method to solve image matching problems in stereovision measurement is presented. The solution is to assign an identity (ID) code to each retro-reflective point so that it can be identified efficiently under complicated circumstances; the code is invariant to rotation, zoom, and deformation. The design architecture and implementation process are described in detail, based on the theory of stereovision measurement. Experiments show that the method is effective in reducing data processing time and in improving the accuracy of image matching and the automation of the measuring system.

  12. Crowdsourcing scoring of immunohistochemistry images: Evaluating Performance of the Crowd and an Automated Computational Method

    Science.gov (United States)

    Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.

    2017-01-01

The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time consuming. Since the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores obtained greater concordance with the pathologist interpretations for both image-labeling and nuclei-labeling tasks (83% and 87%), as compared to the concordance achieved by the automated method (81%) on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large scale cancer molecular pathology studies. PMID:28230179

  13. Automating Risk Analysis of Software Design Models

    Directory of Open Access Journals (Sweden)

    Maxime Frydman

    2014-01-01

The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  14. GPU Accelerated Automated Feature Extraction From Satellite Images

    Directory of Open Access Journals (Sweden)

    K. Phani Tejaswi

    2013-04-01

The availability of large volumes of remote sensing data calls for a higher degree of automation in feature extraction, making it a need of the hour. Fusing data from multiple sources, such as panchromatic, hyperspectral and LiDAR sensors, enhances the probability of identifying and extracting features such as buildings, vegetation or bodies of water by using a combination of spectral and elevation characteristics. Utilizing the aforementioned features in remote sensing is impracticable in the absence of automation. While efforts are underway to reduce human intervention in data processing, this attempt alone may not suffice. The huge quantum of data that needs to be processed entails accelerated processing. GPUs, which were originally designed to provide efficient visualization, are being massively employed for computation-intensive parallel processing environments. Image processing in general, and hence automated feature extraction, is highly computation intensive, where performance improvements have a direct impact on societal needs. In this context, an algorithm has been formulated for automated feature extraction from a panchromatic or multispectral image based on image processing techniques. Two Laplacian of Gaussian (LoG) masks were applied to the image individually, followed by detection of zero-crossing points and extraction of pixels based on their standard deviation with respect to the surrounding pixels. The two extracted images with different LoG masks were combined, resulting in an image with the extracted features and edges. Finally, the user is at liberty to apply an image smoothing step depending on the noise content in the extracted image. The image is passed through a hybrid median filter to remove salt-and-pepper noise. This paper discusses the aforesaid algorithm for automated feature extraction, the necessity of deploying GPUs for the same, and system-level challenges, and quantifies the benefits of integrating GPUs in such an environment. The
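The filter-then-find-zero-crossings step can be sketched on the CPU before worrying about GPU acceleration. For brevity this uses a plain 3x3 Laplacian kernel and only horizontal zero crossings; a true LoG mask adds Gaussian smoothing before the Laplacian, and the paper applies two LoG masks of different widths.

```python
LAPLACIAN = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

def convolve(image, kernel):
    """Valid-mode 2-D convolution (no padding), pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        out.append([sum(kernel[j][i] * image[y + j][x + i]
                        for j in range(kh) for i in range(kw))
                    for x in range(len(image[0]) - kw + 1)])
    return out

def zero_crossings(response):
    """Mark pixels where the filter response changes sign between
    horizontal neighbours -- the edge indicator used after LoG
    filtering. Simplified sketch (real detectors check all directions)."""
    return [[1 if a * b < 0 else 0 for a, b in zip(row, row[1:])]
            for row in response]

img = [[0, 0, 0, 9, 9, 9]] * 5        # vertical step edge
resp = convolve(img, LAPLACIAN)
print(zero_crossings(resp)[0])        # [0, 1, 0]
```

Each output pixel here is independent of the others, which is exactly the data-parallel structure that makes this pipeline a natural fit for GPU execution.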

  15. Automated analysis of siRNA screens of cells infected by hepatitis C and dengue viruses based on immunofluorescence microscopy images

    Science.gov (United States)

    Matula, Petr; Kumar, Anil; Wörz, Ilka; Harder, Nathalie; Erfle, Holger; Bartenschlager, Ralf; Eils, Roland; Rohr, Karl

    2008-03-01

    We present an image analysis approach as part of a high-throughput microscopy siRNA-based screening system using cell arrays for the identification of cellular genes involved in hepatitis C and dengue virus replication. Our approach comprises: cell nucleus segmentation, quantification of virus replication level in the neighborhood of segmented cell nuclei, localization of regions with transfected cells, cell classification by infection status, and quality assessment of an experiment and single images. In particular, we propose a novel approach for the localization of regions of transfected cells within cell array images, which combines model-based circle fitting and grid fitting. By this scheme we integrate information from single cell array images and knowledge from the complete cell arrays. The approach is fully automatic and has been successfully applied to a large number of cell array images from screening experiments. The experimental results show a good agreement with the expected behaviour of positive as well as negative controls and encourage the application to screens from further high-throughput experiments.

  16. When Phase Contrast Fails: ChainTracer and NucTracer, Two ImageJ Methods for Semi-Automated Single Cell Analysis Using Membrane or DNA Staining.

    Science.gov (United States)

    Syvertsson, Simon; Vischer, Norbert O E; Gao, Yongqiang; Hamoen, Leendert W

    2016-01-01

    Within bacterial populations, genetically identical cells often behave differently. Single-cell measurement methods are required to observe this heterogeneity. Flow cytometry and fluorescence light microscopy are the primary methods to do this. However, flow cytometry requires reasonably strong fluorescence signals and is impractical when bacteria grow in cell chains. Therefore fluorescence light microscopy is often used to measure population heterogeneity in bacteria. Automatic microscopy image analysis programs typically use phase contrast images to identify cells. However, many bacteria divide by forming a cross-wall that is not detectable by phase contrast. We have developed 'ChainTracer', a method based on the ImageJ plugin ObjectJ. It can automatically identify individual cells stained by fluorescent membrane dyes, and measure fluorescence intensity, chain length, cell length, and cell diameter. As a complementary analysis method we developed 'NucTracer', which uses DAPI stained nucleoids as a proxy for single cells. The latter method is especially useful when dealing with crowded images. The methods were tested with Bacillus subtilis and Lactococcus lactis cells expressing a GFP-reporter. In conclusion, ChainTracer and NucTracer are useful single cell measurement methods when bacterial cells are difficult to distinguish with phase contrast.

  17. Automated morphological analysis of bone marrow cells in microscopic images for diagnosis of leukemia: nucleus-plasma separation and cell classification using a hierarchical tree model of hematopoesis

    Science.gov (United States)

    Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian

    2016-03-01

    The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason, a computer-assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells into 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier, more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells, thereby shortening the examination time.
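A knowledge-based hierarchical tree classifier of the kind described in this record can be reduced to a small sketch in which each inner node routes a feature vector down one branch, so the decision path doubles as the explanation. The features, thresholds and class names below are invented for illustration and are far simpler than the paper's 16-class model of hematopoiesis.

```python
def classify(features, node):
    """Walk the tree until a leaf (a class-label string) is reached;
    return the label together with the path of decisions taken,
    which serves as the explanation."""
    path = []
    while isinstance(node, dict):
        branch = node["decide"](features)
        path.append(branch)
        node = node["children"][branch]
    return node, path

# toy hierarchy loosely inspired by hematopoiesis: decide the lineage
# first, then the maturity; all features and thresholds are invented
tree = {
    "decide": lambda f: "myeloid" if f["granularity"] > 0.5 else "lymphoid",
    "children": {
        "myeloid": {
            "decide": lambda f: ("mature" if f["nucleus_ratio"] < 0.4
                                 else "immature"),
            "children": {"mature": "neutrophil", "immature": "myeloblast"},
        },
        "lymphoid": "lymphocyte",
    },
}

label, path = classify({"granularity": 0.8, "nucleus_ratio": 0.3}, tree)
```

Encoding expert knowledge in the tree structure, rather than learning a flat classifier, is what makes each prediction traceable to a sequence of interpretable decisions.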

  18. Automated Steel Cleanliness Analysis Tool (ASCAT)

    Energy Technology Data Exchange (ETDEWEB)

    Gary Casuccio (RJ Lee Group); Michael Potter (RJ Lee Group); Fred Schwerer (RJ Lee Group); Dr. Richard J. Fruehan (Carnegie Mellon University); Dr. Scott Story (US Steel)

    2005-12-30

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCAT™) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steel making process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, are crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment

  19. NEW TECHNIQUES USED IN AUTOMATED TEXT ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. Istrate

    2010-12-01

    Full Text Available Automated analysis of natural language texts is one of the most important knowledge discovery tasks for any organization. According to Gartner Group, almost 90% of knowledge available at an organization today is dispersed throughout piles of documents buried within unstructured text. Analyzing huge volumes of textual information is often involved in making informed and correct business decisions. Traditional analysis methods based on statistics fail to help in processing unstructured texts, and society is in search of new technologies for text analysis. There exist a variety of approaches to the analysis of natural language texts, but most of them do not provide results that could be successfully applied in practice. This article concentrates on recent ideas and practical implementations in this area.

  20. Full second order chromatographic/spectrometric data matrices for automated sample identification and component analysis by non-data-reducing image analysis

    DEFF Research Database (Denmark)

    Nielsen, Niles-Peter Vest; Smedsgaard, Jørn; Frisvad, Jens Christian

    1999-01-01

    A data analysis method is proposed for identification and for confirmation of classification schemes, based on single- or multiple-wavelength chromatographic profiles. The proposed method works directly on the chromatographic data without data reduction procedures such as peak area or retention...... index calculation. Chromatographic matrices from analysis of previously identified samples are used for generating a reference chromatogram for each class, and unidentified samples are compared with all reference chromatograms by calculating a resemblance measure for each reference. Once the method...... yielded over 90% agreement with accepted classifications. The method is highly accurate and may be used on all sorts of chromatographic profiles. Characteristic component analysis yielded results in good agreement with existing knowledge of characteristic components, but also succeeded in identifying new...

  1. Automated Imaging System for Pigmented Skin Lesion Diagnosis

    Directory of Open Access Journals (Sweden)

    Mariam Ahmed Sheha

    2016-10-01

    Full Text Available Through the study of pigmented skin lesion risk factors, the appearance of malignant melanoma turns the anomalous occurrence of these lesions into a worrying sign. The difficulty of differentiating between malignant melanoma and melanocytic nevi is the error-prone problem that usually faces physicians during diagnosis. To address the difficult task of pigmented skin lesion diagnosis, different clinical diagnosis algorithms have been proposed, such as pattern analysis, the ABCD rule of dermoscopy, the Menzies method, and the 7-point checklist. Computerized monitoring of these algorithms improves the diagnosis of melanoma compared to simple naked-eye examination by the physician. Toward the serious step of early melanoma detection, aiming to reduce the melanoma mortality rate, several computerized studies and procedures have been proposed. Through this research, different approaches with a large number of features were discussed to point out the best approach or methodology to follow for accurate diagnosis of pigmented skin lesions. This paper proposes an automated system for melanoma diagnosis to provide quantitative and objective evaluation of skin lesions, as opposed to visual assessment, which is subjective in nature. Two different data sets were utilized to reduce the effect of the qualitative interpretation problem on accurate diagnosis: a set of clinical images acquired from a standard camera, and another set acquired from a special dermoscopic camera, hence named dermoscopic images. The system's contribution appears in the new, complete and different approaches presented for the aim of pigmented skin lesion diagnosis. These approaches result from using a large, conclusive set of features fed to different classifiers. The three main types of features extracted from the region of interest are geometric, chromatic, and texture features. 
Three statistical methods were proposed to select the most significant features that will cause a valuable effect in

  2. IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.

    Science.gov (United States)

    Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M

    2016-04-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier.
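One of the routine measurements this record mentions, counting fluorescent puncta such as LC3 dots, can be approximated with a threshold-and-label sketch like the following. This is a generic illustration, not IFDOTMETER's actual algorithm; the threshold and minimum spot size are invented.

```python
import numpy as np
from scipy import ndimage

def count_puncta(img, threshold, min_size=2):
    """Count bright spots: threshold the image, label connected
    components, and discard components smaller than min_size pixels."""
    mask = img > threshold
    labels, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int((sizes >= min_size).sum()), sizes

# synthetic field with three bright spots of different sizes
img = np.zeros((50, 50))
img[10:13, 10:13] = 1.0   # 9-pixel punctum
img[30:32, 30:32] = 1.0   # 4-pixel punctum
img[40, 5] = 1.0          # 1-pixel speck, filtered out by min_size
n_puncta, sizes = count_puncta(img, threshold=0.5, min_size=2)
```

Running the same function over a directory of images and writing the counts to a spreadsheet is essentially the batch workflow the abstract describes.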

  3. Automated localization of vertebra landmarks in MRI images

    Science.gov (United States)

    Pai, Akshay; Narasimhamurthy, Anand; Rao, V. S. Veeravasarapu; Vaidya, Vivek

    2011-03-01

    The identification of key landmark points in an MR spine image is an important step for tasks such as vertebra counting. In this paper, we propose a template matching based approach for automatic detection of two key landmark points, namely the second cervical vertebra (C2) and the sacrum, from sagittal MR images. The approach comprises an approximate localization of the vertebral column followed by matching with appropriate templates in order to detect/localize the landmarks. A straightforward extension of the work described here is an automated classification of spine section(s). It also serves as a useful building block for further automatic processing, such as extraction of regions of interest for subsequent image processing, and also in aiding the counting of vertebrae.
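Template matching of the kind used in this record, locating a landmark by sliding a template over the image and scoring each position, can be sketched with brute-force normalized cross-correlation. The template and image below are synthetic stand-ins; a real landmark detector would use anatomical templates and restrict the search to the approximately localized vertebral column.

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation: return the (row, col)
    of the best-matching top-left corner and the NCC score there."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# plant a 5x5 checkerboard 'landmark' at a known place in a noisy image
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, (40, 40))
template = (np.indices((5, 5)).sum(axis=0) % 2).astype(float)
image[12:17, 20:25] += template
pos, score = match_template(image, template)
```

Normalizing both the template and each window makes the score insensitive to local brightness and contrast, which matters for MR intensities that vary between acquisitions.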

  4. Automated computational aberration correction method for broadband interferometric imaging techniques.

    Science.gov (United States)

    Pande, Paritosh; Liu, Yuan-Zhi; South, Fredrick A; Boppart, Stephen A

    2016-07-15

    Numerical correction of optical aberrations provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics techniques. In this Letter, we present an automated computational aberration correction method for broadband interferometric imaging techniques. In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The method is validated on both simulated data and experimental data obtained from a tissue phantom, an ex vivo tissue sample, and an in vivo photoreceptor layer of the human retina.
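The core idea in this record, filtering the aberrant image with a trial phase in the Fourier domain and keeping the phase that maximizes an image sharpness metric, can be demonstrated on synthetic data. The sketch below uses a single quadratic (defocus-like) term in place of a full Zernike expansion and an exhaustive search in place of the paper's iterative optimizer; all parameter values are invented.

```python
import numpy as np

def defocus_phase(shape, coeff):
    """A single quadratic (defocus-like) pupil phase term; a stand-in
    for one Zernike polynomial in the paper's expansion."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    return coeff * (fx ** 2 + fy ** 2)

def apply_phase(field, phase):
    """Model aberration (or its correction) as a phase filter applied
    in the Fourier domain."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * phase))

def sharpness(field):
    """Sum of squared intensities: largest when the image energy is
    concentrated into a few bright pixels, i.e. when it is in focus."""
    intensity = np.abs(field) ** 2
    return float((intensity ** 2).sum())

# sharp synthetic 'scene': a few point scatterers in a complex field
rng = np.random.default_rng(1)
truth = np.zeros((64, 64), dtype=complex)
truth[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0

c_true = 40.0
aberrated = apply_phase(truth, defocus_phase(truth.shape, c_true))

# exhaustive search over the unknown coefficient (the paper uses an
# iterative optimizer over many Zernike coefficients instead)
cands = np.linspace(0.0, 80.0, 81)
scores = [sharpness(apply_phase(aberrated, defocus_phase(truth.shape, -c)))
          for c in cands]
c_hat = float(cands[int(np.argmax(scores))])
```

Because the phase filter is unitary, total intensity is conserved, so the metric rewards only the spatial concentration of energy, which is exactly what correcting the aberration restores.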

  5. Automated analysis of small animal PET studies through deformable registration to an atlas

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Daniel F. [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva 4 (Switzerland); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva 4 (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands)

    2012-11-15

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is
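The two registration-accuracy metrics named in this record can be computed directly from a pair of binary masks. The sketch below uses small shifted squares as stand-ins for a reference segmentation and its deformably registered counterpart; the brute-force Hausdorff computation is only practical for small masks.

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two masks (brute force, so only for small examples)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# two shifted squares standing in for a reference segmentation and the
# deformably registered one
ref = np.zeros((32, 32), dtype=bool)
reg = np.zeros((32, 32), dtype=bool)
ref[8:18, 8:18] = True
reg[10:20, 8:18] = True          # same square, shifted 2 pixels

d_overlap = dice(ref, reg)
d_haus = hausdorff(ref, reg)
```

Dice summarizes volumetric agreement while the Hausdorff distance reports the single worst boundary mismatch, which is why papers such as this one quote both.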

  6. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    Directory of Open Access Journals (Sweden)

    Strandh Christer

    2008-07-01

    Full Text Available Abstract Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  7. Automated Line Tracking of lambda-DNA for Single-Molecule Imaging

    CERN Document Server

    Guan, Juan; Granick, Steve

    2011-01-01

    We describe a straightforward, automated line tracking method to visualize within optical resolution the contour of linear macromolecules as they rearrange shape as a function of time by Brownian diffusion and under external fields such as electrophoresis. Three sequential stages of analysis underpin this method: first, "feature finding" to discriminate signal from noise; second, "line tracking" to approximate those shapes as lines; third, "temporal consistency check" to discriminate reasonable from unreasonable fitted conformations in the time domain. The automated nature of this data analysis makes it straightforward to accumulate vast quantities of data while excluding the unreliable parts of it. We implement the analysis on fluorescence images of lambda-DNA molecules in agarose gel to demonstrate its capability to produce large datasets for subsequent statistical analysis.

  8. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  9. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    Directory of Open Access Journals (Sweden)

    Pat Terletzky

    Full Text Available Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images, followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus and horses (Equus caballus in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.
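The differencing pipeline this record describes, first principal component per acquisition, subtraction, and heuristic thresholding, can be sketched as follows on synthetic data. The band count, threshold multiplier and minimum blob size are invented, and connected-component counting stands in for polygon generation.

```python
import numpy as np
from scipy import ndimage

def first_pc(img):
    """First principal component of a (bands, rows, cols) image,
    returned as a single-band (rows, cols) image."""
    bands, h, w = img.shape
    flat = img.reshape(bands, -1).T           # pixels x bands
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ vt[0]).reshape(h, w)

def count_animals(img_t0, img_t1, k=5.0, min_size=4):
    """Difference the first PCs of two co-registered acquisitions,
    threshold at k standard deviations, and count the connected
    components at least min_size pixels in area."""
    diff = first_pc(img_t1) - first_pc(img_t0)
    mask = np.abs(diff - diff.mean()) > k * diff.std()
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int((sizes >= min_size).sum())

# synthetic pasture: static multispectral texture, plus two bright
# 'animals' present only in the second acquisition
rng = np.random.default_rng(2)
base = rng.normal(100.0, 2.0, (3, 60, 60))    # 3 spectral bands
t0 = base
t1 = base.copy()
t1[:, 15:19, 15:19] += 80.0                   # animal 1
t1[:, 40:44, 30:34] += 80.0                   # animal 2
```

In this idealized case the two images differ only where the animals are; the mis-registration and shadow artifacts the authors report would show up as extra components, which is what the size filter and manual threshold tuning are meant to suppress.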

  10. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    Science.gov (United States)

    Terletzky, Pat; Ramsey, Robert Douglas

    2014-01-01

    Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.

  11. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling, to achieve sound results, 4. and providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program 2. ASTM F 2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  12. Automated Scanning Electron Microscopy Analysis of Sampled Aerosol

    DEFF Research Database (Denmark)

    Bluhme, Anders Brostrøm; Kling, Kirsten; Mølhave, Kristian

    development of an automated software-based analysis of aerosols using Scanning Electron Microscopy (SEM) and Scanning Transmission Electron Microscopy (STEM) coupled with Energy-Dispersive X-ray Spectroscopy (EDS). The automated analysis will be capable of providing both detailed physical and chemical single...

  13. Automated segmentation of three-dimensional MR brain images

    Science.gov (United States)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking for the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 8-bit gray-scale 3D MR brain image sets. Experimental results show that the proposed algorithm is fast, and provides robust and satisfactory results.
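The adaptive-thresholding and morphological clean-up steps described in this record can be illustrated on a synthetic slice. The sketch below uses Otsu's method as one concrete choice of adaptive threshold (the abstract does not specify the exact scheme) and a binary opening as the morphological step; all sizes are invented.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, bins=256):
    """One concrete adaptive threshold: Otsu's method, which picks the
    level maximizing between-class variance of the slice's own
    histogram, so each slice adapts to its acquisition conditions."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                  # class-0 weight up to each level
    w1 = 1.0 - w0
    m0 = np.cumsum(p * centers)        # class-0 cumulative mean mass
    mt = m0[-1]                        # total mean
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mt * w0 - m0)[valid] ** 2 / (w0 * w1)[valid]
    return centers[int(np.argmax(between))]

# synthetic slice: a bright 'brain' disc, plus one bright non-brain speck
yy, xx = np.mgrid[:64, :64]
r = np.sqrt((yy - 32.0) ** 2 + (xx - 32.0) ** 2)
img = np.full((64, 64), 20.0)
img[r < 20] = 180.0
img[5, 5] = 180.0                      # isolated speck to be removed

t = otsu_threshold(img)
mask = img > t
# 2D morphological opening removes the speck while keeping the disc
clean = ndimage.binary_opening(mask)
```

In the paper's pipeline this kind of per-plane masking is repeated across sagittal, coronal and axial sequences and the results are OR-combined into the 3D brain region.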

  14. Management issues in automated audit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, K.A.; Hochberg, J.G.; Wilhelmy, S.K.; McClary, J.F.; Christoph, G.G.

    1994-03-01

    This paper discusses management issues associated with the design and implementation of an automated audit analysis system that we use to detect security events. It gives the viewpoint of a team directly responsible for developing and managing such a system. We use Los Alamos National Laboratory's Network Anomaly Detection and Intrusion Reporter (NADIR) as a case in point. We examine issues encountered at Los Alamos, detail our solutions to them, and where appropriate suggest general solutions. After providing an introduction to NADIR, we explore four general management issues: cost-benefit questions, privacy considerations, legal issues, and system integrity. Our experiences are of general interest both to security professionals and to anyone who may wish to implement a similar system. While NADIR investigates security events, the methods used and the management issues are potentially applicable to a broad range of complex systems. These include those used to audit credit card transactions, medical care payments, and procurement systems.

  15. ASteCA - Automated Stellar Cluster Analysis

    CERN Document Server

    Perren, Gabriel I; Piatti, Andrés E

    2014-01-01

    We present ASteCA (Automated Stellar Cluster Analysis), a suite of tools designed to fully automatize the standard tests applied on stellar clusters to determine their basic parameters. The set of functions included in the code make use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing through a statistical estimator its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm is also present, which allows ASteCA to provide accurate estimates for a cluster's metallicity, age, extinction and distance values along with its unce...

  16. An automated deformable image registration evaluation of confidence tool

    Science.gov (United States)

    Kirby, Neil; Chen, Josephine; Kim, Hojin; Morin, Olivier; Nie, Ke; Pouliot, Jean

    2016-04-01

    Deformable image registration (DIR) is a powerful tool for radiation oncology, but it can produce errors. Beyond this, DIR accuracy is not a fixed quantity and varies on a case-by-case basis. The purpose of this study is to explore the possibility of an automated program to create a patient- and voxel-specific evaluation of DIR accuracy. AUTODIRECT is a software tool that was developed to perform this evaluation for the application of a clinical DIR algorithm to a set of patient images. In brief, AUTODIRECT uses algorithms to generate deformations and applies them to these images (along with processing) to generate sets of test images, with known deformations that are similar to the actual ones and with realistic noise properties. The clinical DIR algorithm is applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. In this study, four commercially available DIR algorithms were used to deform a dose distribution associated with a virtual pelvic phantom image set, and AUTODIRECT was used to generate dose uncertainty estimates for each deformation. The virtual phantom image set has a known ground-truth deformation, so the true dose-warping errors of the DIR algorithms were also known. AUTODIRECT predicted error patterns that closely matched the actual error spatial distribution. On average AUTODIRECT overestimated the magnitude of the dose errors, but tuning the AUTODIRECT algorithms should improve agreement. This proof-of-principle test demonstrates the potential for the AUTODIRECT algorithm as an empirical method to predict DIR errors.
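The per-voxel Student's t uncertainty estimate described in this record can be sketched from a small set of test-registration errors. The function below is a hypothetical reading of the approach, not AUTODIRECT's actual formula: it combines the mean absolute error with a one-sided t bound on the standard error across the (currently 4) test deformations.

```python
import numpy as np
from scipy import stats

def voxel_uncertainty(test_errors, confidence=0.95):
    """Hypothetical per-voxel uncertainty: mean absolute error across
    the test registrations plus a one-sided Student-t bound on the
    standard error, with n - 1 degrees of freedom."""
    n = test_errors.shape[0]
    mean = test_errors.mean(axis=0)
    sd = test_errors.std(axis=0, ddof=1)
    t = stats.t.ppf(confidence, df=n - 1)
    return np.abs(mean) + t * sd / np.sqrt(n)

# dose-warping errors from 4 test deformations on a tiny 2x2 dose grid
errors = np.array([
    [[0.1, 0.0], [0.5, 1.0]],
    [[0.2, 0.1], [0.4, 1.2]],
    [[0.0, 0.0], [0.6, 0.8]],
    [[0.1, 0.1], [0.5, 1.0]],
])
u = voxel_uncertainty(errors)
```

With only four test deformations the t quantile is much larger than the normal one, which is precisely why a Student's t distribution, rather than a Gaussian, is the appropriate model here.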

  17. Use of an Automated Image Processing Program to Quantify Recombinant Adenovirus Particles

    Science.gov (United States)

    Obenauer-Kutner, Linda J.; Halperin, Rebecca; Ihnat, Peter M.; Tully, Christopher P.; Bordens, Ronald W.; Grace, Michael J.

    2005-02-01

    Electron microscopy has a pivotal role as an analytical tool in pharmaceutical research. However, digital image data have proven to be too large for efficient quantitative analysis. We describe here the development and application of an automated image processing (AIP) program that rapidly quantifies shape measurements of recombinant adenovirus (rAd) obtained from digitized field emission scanning electron microscope (FESEM) images. The program was written using the macro-recording features within Image-Pro® Plus software. The macro program, which is linked to a Microsoft Excel spreadsheet, consists of a series of subroutines designed to automatically measure rAd vector objects from the FESEM images. The application and utility of this macro program has enabled us to rapidly and efficiently analyze very large data sets of rAd samples while minimizing operator time.

  18. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas

    Science.gov (United States)

    Alexander, Nathan S.; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-01-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE. PMID:26309765

  19. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  20. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. Several software systems have been developed to automate the analog-to-digital conversion of chemical structure diagrams in scientific research articles, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper provides critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be run independently, in sequence, from a graphical user interface, and the algorithm parameters can be readily changed to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms the other systems on several sets of sample images from diverse sources, in terms of both the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  1. Automated in situ brain imaging for mapping the Drosophila connectome.

    Science.gov (United States)

    Lin, Chi-Wen; Lin, Hsuan-Wen; Chiu, Mei-Tzu; Shih, Yung-Hsin; Wang, Ting-Yuan; Chang, Hsiu-Ming; Chiang, Ann-Shyn

    2015-01-01

    Mapping the connectome, a wiring diagram of the entire brain, requires large-scale imaging of numerous single neurons with diverse morphology. It is a formidable challenge to reassemble these neurons into a virtual brain and correlate their structural networks with neuronal activities, which are measured in different experiments to analyze the informational flow in the brain. Here, we report an in situ brain imaging technique called Fly Head Array Slice Tomography (FHAST), which permits the reconstruction of structural and functional data to generate an integrative connectome in Drosophila. Using FHAST, the head capsules of an array of flies can be opened with a single vibratome sectioning to expose the brains, replacing the painstaking and inconsistent brain dissection process. FHAST can reveal in situ brain neuroanatomy with minimal distortion to neuronal morphology and maintain intact neuronal connections to peripheral sensory organs. Most importantly, it enables the automated 3D imaging of 100 intact fly brains in each experiment. The established head model with in situ brain neuroanatomy allows functional data to be accurately registered and associated with 3D images of single neurons. These integrative data can then be shared, searched, visualized, and analyzed for understanding how brain-wide activities in different neurons within the same circuit function together to control complex behaviors.

  2. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindne

  3. Precision Relative Positioning for Automated Aerial Refueling from a Stereo Imaging System

    Science.gov (United States)

    2015-03-01

    PRECISION RELATIVE POSITIONING FOR AUTOMATED AERIAL REFUELING FROM A STEREO IMAGING SYSTEM. Thesis, Kyle P. Werner, 2Lt, USAF, AFIT-ENG-MS-15-M-048. The work is declared a work of the U.S. Government and is not subject to copyright protection in the United States. Approved for public release; distribution unlimited.

  4. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high–risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more com

  5. Automated Large-Scale Shoreline Variability Analysis From Video

    Science.gov (United States)

    Pearre, N. S.

    2006-12-01

    Land-based video has been used to quantify changes in nearshore conditions for over twenty years. By combining the ability to track rapid, short-term shoreline change and changes associated with longer term or seasonal processes, video has proved to be a cost effective and versatile tool for coastal science. Previous video-based studies of shoreline change have typically examined the position of the shoreline along a small number of cross-shore lines as a proxy for the continuous coast. The goal of this study is twofold: (1) to further develop automated shoreline extraction algorithms for continuous shorelines, and (2) to track the evolution of a nourishment project at Rehoboth Beach, DE that was concluded in June 2005. Seven cameras are situated approximately 30 meters above mean sea level and 70 meters from the shoreline. Time exposure and variance images are captured hourly during daylight and transferred to a local processing computer. After correcting for lens distortion and geo-rectifying to a shore-normal coordinate system, the images are merged to form a composite planform image of 6 km of coast. Automated extraction algorithms establish shoreline and breaker positions throughout a tidal cycle on a daily basis. Short and long term variability in the daily shoreline will be characterized using empirical orthogonal function (EOF) analysis. Periodic sediment volume information will be extracted by incorporating the results of monthly ground-based LIDAR surveys and by correlating the hourly shorelines to the corresponding tide level under conditions with minimal wave activity. The Delaware coast in the area downdrift of the nourishment site is intermittently interrupted by short groins. An Even/Odd analysis of the shoreline response around these groins will be performed. The impact of groins on the sediment volume transport along the coast during periods of accretive and erosive conditions will be discussed. [This work is being supported by DNREC and the
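    The EOF analysis mentioned above is, in practice, a principal-component decomposition of the day-by-alongshore shoreline matrix. A minimal numpy sketch (illustrative, not the study's code) removes the time-mean shoreline and SVDs the anomalies:

```python
import numpy as np

def eof_analysis(shorelines):
    """EOF decomposition of a (n_days, n_alongshore) shoreline matrix.

    Subtracts the time-mean shoreline, then SVDs the anomalies:
    rows of `modes` are the spatial EOFs, columns of `amps` their
    daily amplitudes, and `var_frac` the variance fraction per mode.
    """
    anom = shorelines - shorelines.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    amps = u * s            # temporal amplitudes, shape (n_days, n_modes)
    modes = vt              # spatial patterns,    shape (n_modes, n_alongshore)
    var_frac = s ** 2 / np.sum(s ** 2)
    return amps, modes, var_frac

# Synthetic data: one seasonal advance/retreat pattern plus noise
t = np.linspace(0, 2 * np.pi, 100)
space = np.sin(np.linspace(0, np.pi, 50))
noise = 0.1 * np.random.default_rng(1).standard_normal((100, 50))
data = 5.0 * np.outer(np.sin(t), space) + noise
amps, modes, var_frac = eof_analysis(data)
```

With a single dominant signal, the leading mode captures nearly all of the shoreline variance, which is exactly how seasonal versus short-term variability is separated.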

  6. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows the automatic evaluation of the spatial dimensions and surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. An automated point-search algorithm identifies identical points across the image set; these are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relations between neighbouring stereo models. Proper filter strategies remove incorrect points, so the relative orientation of each stereo model can be computed automatically. The stereo model is oriented absolutely with the help of 3D reference points or distances on the object, or a defined camera-base distance. An adapted expansion and matching algorithm makes it possible to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. By integrating the iterative closest point (ICP) algorithm, these partial point clouds are fitted into a total point cloud; in this way, 3D reference points are not necessary. With the implemented triangulation algorithm, a digital surface model (DSM) can be created. Texturing can be performed automatically using the images that were used for scanning the object surface; it is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with full-frame sensors, high accuracy can be reached. A big advantage is the ability to control the accuracy and quality of the 3D object documentation through the resolution of the images.
The
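    The ICP step used to merge partial point clouds can be sketched in a few lines of numpy (a generic textbook ICP with brute-force nearest neighbours and a Kabsch rigid fit, not the article's implementation; all names are illustrative):

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
    (Kabsch algorithm), with a reflection fix via the determinant sign."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iterative closest point: repeatedly match each src point to its
    nearest dst point, then apply the best rigid fit to those matches."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]       # nearest dst point per src point
        R, t = best_rigid(cur, match)
        cur = cur @ R.T + t
    return cur

# A "partial" cloud slightly rotated and shifted from the "total" cloud
rng = np.random.default_rng(3)
cloud = rng.random((40, 3))
a = 0.03                                     # small misalignment angle (rad)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
partial = cloud @ Rz.T + np.array([0.02, -0.01, 0.01])
aligned = icp(partial, cloud)
```

For small misalignments the nearest-neighbour matches are mostly correct from the first iteration, which is why ICP can replace external 3D reference points when fusing overlapping scans.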

  7. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data-mining projects. A successful video annotation system should provide users with a useful summary of video content in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of video has inherent structure, so some prior knowledge about the video content can be exploited to improve our understanding of the high-level semantic content. In this paper, we develop tools and techniques for analyzing structured video using the low-level information available directly from MPEG compressed video. Working directly in the compressed domain can greatly reduce processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.

  8. Automated determination of spinal centerline in CT and MR images

    Science.gov (United States)

    Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2009-02-01

    The spinal curvature is one of the most important parameters for the evaluation of spinal deformities. The spinal centerline, represented by the curve that passes through the centers of the vertebral bodies in three-dimensions (3D), allows valid quantitative measurements of the spinal curvature at any location along the spine. We propose a novel automated method for the determination of the spinal centerline in 3D spine images. Our method exploits the anatomical property that the vertebral body walls are cylindrically-shaped and therefore the lines normal to the edges of the vertebral body walls most often intersect in the middle of the vertebral bodies, i.e. at the location of spinal centerline. These points of intersection are first obtained by a novel algorithm that performs a selective search in the directions normal to the edges of the structures and then connected with a parametric curve that represents the spinal centerline in 3D. As the method is based on anatomical properties of the 3D spine anatomy, it is modality-independent, i.e. applicable to images obtained by computed tomography (CT) and magnetic resonance (MR). The proposed method was evaluated on six CT and four MR images (T1- and T2-weighted) of normal spines and on one scoliotic CT spine image. The qualitative and quantitative results for the normal spines show that the spinal centerline can be successfully determined in both CT and MR spine images, while the results for the scoliotic spine indicate that the method may also be used to evaluate pathological curvatures.
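    The geometric heart of the method, that lines normal to a cylindrical wall intersect at its centre, can be demonstrated with a small least-squares sketch (illustrative only, not the authors' algorithm): each edge point and its normal define a line, and the point minimizing the summed squared distances to all lines recovers the centre.

```python
import numpy as np

def normals_intersection_center(points, normals):
    """Least-squares common intersection of lines p_i + t * n_i.

    For each line, the residual of a candidate point x is its
    component perpendicular to the line: (I - n n^T)(x - p) = 0.
    Stacking these over all edge points gives a 2x2 linear system.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, n in zip(points, normals):
        P = np.eye(2) - np.outer(n, n)      # projector perpendicular to n
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Synthetic circular "vertebral body wall" centred at (3, -2), radius 5
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pts = np.c_[3 + 5 * np.cos(theta), -2 + 5 * np.sin(theta)]
nrm = np.c_[np.cos(theta), np.sin(theta)]   # unit outward normals
center = normals_intersection_center(pts, nrm)
```

Because the construction uses only edge directions, not intensities, the same idea is modality-independent, matching the CT/MR claim in the abstract.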

  9. Automated classification of atherosclerotic plaque from magnetic resonance images using predictive models.

    Science.gov (United States)

    Anderson, Russell W; Stomberg, Christopher; Hahm, Charles W; Mani, Venkatesh; Samber, Daniel D; Itskovich, Vitalii V; Valera-Guallar, Laura; Fallon, John T; Nedanov, Pavel B; Huizenga, Joel; Fayad, Zahi A

    2007-01-01

    The information contained within multicontrast magnetic resonance images (MRI) promises to improve tissue classification accuracy, once appropriately analyzed. Predictive models capture relationships empirically, from known outcomes, thereby combining pattern classification with experience. In this study, we examine the applicability of predictive modeling for atherosclerotic plaque component classification of multicontrast ex vivo MR images, using stained histopathological sections as ground truth. Ten multicontrast images from seven human coronary artery specimens were obtained on a 9.4 T imaging system using multicontrast-weighted fast spin-echo (T1-, proton density-, and T2-weighted) imaging with 39-μm isotropic voxel size. Following initial data transformations, predictive modeling focused on automating the identification of the specimens' plaque, lipid, and media. The outputs of these three models were used to calculate statistics such as total plaque burden and the ratio of hard plaque (fibrous tissue) to lipid. Both logistic regression and an artificial neural network model (Relevant Input Processor Network, RIPNet) were used for predictive modeling. When compared against segmentation resulting from cluster analysis, the RIPNet models performed between 25 and 30% better in absolute terms. This translates to a 50% higher true positive rate over given levels of false positives. This work indicates that it is feasible to build an automated system of plaque detection using MRI and data mining.
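    One of the two predictive models named above, logistic regression on per-voxel multicontrast intensities, is simple enough to sketch from scratch (a generic gradient-descent version on toy data, not the study's fitted model; feature meanings are hypothetical):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression.

    X: (n_voxels, n_contrasts) feature matrix, e.g. T1, PD and T2
    intensities per voxel; y: 0/1 labels from registered histology.
    Returns weights w and bias b for p = sigmoid(X @ w + b).
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                       # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy voxels: class 1 bright on contrast 0, dark on contrast 1
rng = np.random.default_rng(2)
X1 = rng.normal([1.0, -1.0], 0.3, (100, 2))
X0 = rng.normal([-1.0, 1.0], 0.3, (100, 2))
X = np.vstack([X1, X0])
y = np.r_[np.ones(100), np.zeros(100)]
w, b = fit_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
```

Summing the per-voxel class predictions then yields the derived statistics the abstract mentions, such as total plaque burden.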

  10. NeuriteTracer: a novel ImageJ plugin for automated quantification of neurite outgrowth.

    Science.gov (United States)

    Pool, Madeline; Thiemann, Joachim; Bar-Or, Amit; Fournier, Alyson E

    2008-02-15

    In vitro assays to measure neuronal growth are a fundamental tool used by many neurobiologists studying neuronal development and regeneration. The quantification of these assays requires accurate measurements of neurite length and neuronal cell numbers in neuronal cultures. Generally, these measurements are obtained through labor-intensive manual or semi-manual tracing of images. To automate these measurements, we have written NeuriteTracer, a neurite tracing plugin for the freely available image-processing program ImageJ. The plugin analyzes fluorescence microscopy images of neurites and nuclei of dissociated cultured neurons. Given user-defined thresholds, the plugin counts neuronal nuclei, and traces and measures neurite length. We find that NeuriteTracer accurately measures neurite outgrowth from cerebellar, DRG and hippocampal neurons. Values obtained by NeuriteTracer correlate strongly with those obtained by semi-manual tracing with NeuronJ and by using a sophisticated analysis package, MetaXpress. We reveal the utility of NeuriteTracer by demonstrating its ability to detect the neurite outgrowth promoting capacity of the rho kinase inhibitor Y-27632. Our plugin is an attractive alternative to existing tracing tools because it is fully automated and ready for use within a freely accessible imaging program.
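    The nucleus-counting half of such a pipeline reduces to thresholding the nuclear channel and counting connected components. A dependency-free sketch (illustrative, not the NeuriteTracer plugin itself, which is an ImageJ macro/Java plugin):

```python
import numpy as np
from collections import deque

def count_nuclei(mask):
    """Count nuclei as 4-connected components in a binary mask,
    mimicking thresholded nuclear-channel counting."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new component found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                        # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Synthetic nuclear-stain image with three blobs
img = np.zeros((20, 20))
img[2:5, 2:5] = 1.0
img[10:14, 10:13] = 1.0
img[16:18, 3:6] = 1.0
n = count_nuclei(img > 0.5)       # user-defined threshold, then count
```

Neurite length measurement then works on the neurite channel, typically by skeletonizing the thresholded mask and summing skeleton pixel spacings.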

  11. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
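    The "overlap ratio" used above for quantitative comparison is the standard Jaccard index between binary tissue masks. A minimal sketch of that metric (illustrative, not the paper's evaluation code):

```python
import numpy as np

def overlap_ratio(a, b):
    """Jaccard overlap |A intersect B| / |A union B| of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two partially overlapping 6x6 square masks on a 10x10 grid
auto = np.zeros((10, 10))
auto[2:8, 2:8] = 1          # automated segmentation (36 px)
manual = np.zeros((10, 10))
manual[4:10, 4:10] = 1      # manual segmentation   (36 px)
r = overlap_ratio(auto, manual)   # 16 shared px / 56 union px
```

The per-tissue ratios are averaged over cases to give summary figures like the 74.54% reported.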

  12. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high resolution imagery for processing with MATLAB Image Processing toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.

  13. Automation and robotics for genetic analysis.

    Science.gov (United States)

    Smith, J H; Madan, D; Salhaney, J; Engelstein, M

    2001-05-01

    This guide to laboratory robotics covers a wide variety of methods amenable to automation including mapping, genotyping, barcoding and data handling, template preparation, reaction setup, colony and plaque picking, and more.

  14. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  15. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
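    The composite temperature-thresholding variant of the segmentation, plus the rangefinder calibration step, can be sketched as follows (a toy illustration under assumed names and units, not the HVO routine):

```python
import numpy as np

def lake_level_pixels(thermal, hot_threshold):
    """Relative lava-lake level: topmost image row containing any
    pixel hotter than the threshold (smaller row index = higher level)."""
    hot_rows = np.where((thermal > hot_threshold).any(axis=1))[0]
    return int(hot_rows.min()) if hot_rows.size else None

def calibrate(level_px, ref_px, ref_elev_m, m_per_px):
    """Convert a pixel level to elevation with one laser-rangefinder
    tie point (ref_px observed at ref_elev_m) and a scale in m/pixel."""
    return ref_elev_m + (ref_px - level_px) * m_per_px

# Synthetic thermal frame: cool crater walls, incandescent lake below row 40
frame = np.full((100, 60), 20.0)       # background temperature (deg C)
frame[40:, 10:50] = 900.0              # hot lake surface
px = lake_level_pixels(frame, 500.0)
elev = calibrate(px, ref_px=50, ref_elev_m=800.0, m_per_px=0.25)
```

Edge-detection and short-term-change segmentation differ only in how the hot region's margin is delineated; the pixel-to-elevation calibration is shared.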

  16. Statistical Analysis of Filament Features Based on the Hα Solar Images from 1988 to 2013 by Computer Automated Detection Method

    CERN Document Server

    Hao, Q; Cao, W; Chen, P F

    2015-01-01

    We improve our filament automated detection method which was proposed in our previous works. It is then applied to process the full-disk Hα data mainly obtained by Big Bear Solar Observatory (BBSO) from 1988 to 2013, spanning nearly 3 solar cycles. The butterfly diagrams of the filaments, showing the information of the filament area, spine length, tilt angle, and the barb number, are obtained. The variations of these features with the calendar year and the latitude band are analyzed. The drift velocities of the filaments in different latitude bands are calculated and studied. We also investigate the north-south (N-S) asymmetries of the filament numbers in total and in each subclass classified according to the filament area, spine length, and tilt angle. The latitudinal distribution of the filament number is found to be bimodal. About 80% of all the filaments have tilt angles within [0°, 60°]. For the filaments within latitudes lower (higher) than 50° the northeast (northwest) direction i...
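    The N-S asymmetry studied above is conventionally quantified with a normalized index; a one-line sketch (standard definition, not code from the paper):

```python
def ns_asymmetry(n_north, n_south):
    """Normalized north-south asymmetry index: (N - S) / (N + S).

    Positive values indicate northern-hemisphere dominance, negative
    values southern dominance; the guard avoids division by zero."""
    total = n_north + n_south
    return (n_north - n_south) / total if total else 0.0

# e.g. 120 northern vs 80 southern filaments counted in one year
idx = ns_asymmetry(120, 80)
```

The same index can be computed per subclass (by area, spine length, or tilt-angle bin) to reproduce the paper's subclass comparison.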

  17. Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis.

    Science.gov (United States)

    Garrison, Kathleen A; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J; Aziz-Zadeh, Lisa S

    2015-01-01

    Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant's structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant's non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design.
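    Once an ROI mask exists, whether hand-drawn or atlas-derived, the two quantities compared in the study reduce to simple masked statistics. A minimal numpy sketch (illustrative; array names are hypothetical):

```python
import numpy as np

def roi_stats(stat_map, roi_mask, threshold):
    """Mean task-related effect size and percent-activated voxels
    inside an ROI.

    stat_map: voxelwise effect-size (or t-statistic) map;
    roi_mask: boolean mask of the ROI, same shape as stat_map;
    threshold: activation cutoff for counting a voxel as active.
    """
    vals = stat_map[roi_mask]
    pct_active = 100.0 * (vals > threshold).mean()
    return vals.mean(), pct_active

# Deterministic toy map: effect sizes 0.0 .. 9.9, ROI = upper half
stat_map = np.arange(100).reshape(10, 10) / 10.0
roi = np.zeros((10, 10), dtype=bool)
roi[:5] = True
eff, pct = roi_stats(stat_map, roi, threshold=2.45)
```

The study's finding is that these numbers differ between manual and atlas-based masks of the same region, while task interactions stay consistent.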

  18. Automated Image Retrieval of Chest CT Images Based on Local Grey Scale Invariant Features.

    Science.gov (United States)

    Arrais Porto, Marcelo; Cordeiro d'Ornellas, Marcos

    2015-01-01

    Text-based tools are regularly employed to retrieve medical images for reading and interpretation using current Picture Archiving and Communication Systems (PACS), but these pose some drawbacks. All-purpose content-based image retrieval (CBIR) systems are limited when dealing with medical images and do not fit well into PACS workflow and clinical practice. This paper presents an automated image retrieval approach for chest CT images based on local grey scale invariant features from a local database. Performance was measured in terms of precision and recall, average retrieval precision (ARP), and average retrieval rate (ARR). Preliminary results have shown the effectiveness of the proposed approach. The prototype is also a useful tool for radiology research and education, providing valuable information to the medical and broader healthcare community.

  19. Color Medical Image Analysis

    CERN Document Server

    Schaefer, Gerald

    2013-01-01

    Since the early 20th century, medical imaging has been dominated by monochrome imaging modalities such as x-ray, computed tomography, ultrasound, and magnetic resonance imaging. As a result, color information has been overlooked in medical image analysis applications. Recently, various medical imaging modalities that involve color information have been introduced. These include cervicography, dermoscopy, fundus photography, gastrointestinal endoscopy, microscopy, and wound photography. However, in comparison to monochrome images, the analysis of color images is a relatively unexplored area. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for monochrome images are often not directly applicable to multichannel images. The goal of this volume is to summarize the state-of-the-art in the utilization of color information in medical image analysis.

  20. Automated detection of a prostate Ni-Ti stent in electronic portal images

    DEFF Research Database (Denmark)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane

    2006-01-01

    of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection...

  1. Automated Detection of Firearms and Knives in a CCTV Image.

    Science.gov (United States)

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  2. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

Full Text Available Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  3. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.
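The modal (volume-percentage) computation described above reduces to counting classified pixels per phase in the SEM/EDS map. A minimal NumPy sketch — not the authors' code; the phase map and labels are illustrative:

```python
import numpy as np

def modal_abundances(phase_map):
    """Return the area (volume) percentage of each phase label in a classified map."""
    labels, counts = np.unique(phase_map, return_counts=True)
    total = phase_map.size
    return {int(k): 100.0 * c / total for k, c in zip(labels, counts)}

# Example: a 4x4 classified map with three hypothetical phases
# (0 = orthopyroxene, 1 = spinel, 2 = matrix)
phase_map = np.array([
    [0, 0, 0, 1],
    [0, 0, 2, 2],
    [0, 0, 0, 0],
    [0, 2, 0, 0],
])
modes = modal_abundances(phase_map)
```

The same counting extends directly to per-particle statistics once the map is split into connected fragments.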

  4. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, H. De; Kawakatsu, T.

    2000-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  5. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, H; Kawakatsu, T; Landau, DP; Lewis, SP; Schuttler, HB

    2001-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  6. Automated generation of curved planar reformations from MR images of the spine

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia); Ourselin, Sebastien [CSIRO ICT Centre, Autonomous Systems Laboratory, BioMedIA Lab, Locked Bag 17, North Ryde, NSW 2113 (Australia); Gomes, Lavier [Department of Radiology, Westmead Hospital, University of Sydney, Hawkesbury Road, Westmead NSW 2145 (Australia); Likar, Bostjan [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia); Pernus, Franjo [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2007-05-21

A novel method for automated curved planar reformation (CPR) of magnetic resonance (MR) images of the spine is presented. The CPR images, generated by a transformation from image-based to spine-based coordinate system, follow the structural shape of the spine and allow the whole course of the curved anatomy to be viewed in individual cross-sections. The three-dimensional (3D) spine curve and the axial vertebral rotation, which determine the transformation, are described by polynomial functions. The 3D spine curve passes through the centres of vertebral bodies, while the axial vertebral rotation determines the rotation of vertebrae around the axis of the spinal column. The optimal polynomial parameters are obtained by a robust refinement of the initial estimates of the centres of vertebral bodies and axial vertebral rotation. The optimization framework is based on the automatic image analysis of MR spine images that exploits some basic anatomical properties of the spine. The method was evaluated on 21 MR images from 12 patients and the results provided a good description of spine anatomy, with mean errors of 2.5 mm and 1.7° for the position of the 3D spine curve and axial rotation of vertebrae, respectively. The generated CPR images are independent of the position of the patient in the scanner while comprising both anatomical and geometrical properties of the spine.

  7. Automated generation of curved planar reformations from MR images of the spine

    Science.gov (United States)

    Vrtovec, Tomaz; Ourselin, Sébastien; Gomes, Lavier; Likar, Boštjan; Pernuš, Franjo

    2007-05-01

    A novel method for automated curved planar reformation (CPR) of magnetic resonance (MR) images of the spine is presented. The CPR images, generated by a transformation from image-based to spine-based coordinate system, follow the structural shape of the spine and allow the whole course of the curved anatomy to be viewed in individual cross-sections. The three-dimensional (3D) spine curve and the axial vertebral rotation, which determine the transformation, are described by polynomial functions. The 3D spine curve passes through the centres of vertebral bodies, while the axial vertebral rotation determines the rotation of vertebrae around the axis of the spinal column. The optimal polynomial parameters are obtained by a robust refinement of the initial estimates of the centres of vertebral bodies and axial vertebral rotation. The optimization framework is based on the automatic image analysis of MR spine images that exploits some basic anatomical properties of the spine. The method was evaluated on 21 MR images from 12 patients and the results provided a good description of spine anatomy, with mean errors of 2.5 mm and 1.7° for the position of the 3D spine curve and axial rotation of vertebrae, respectively. The generated CPR images are independent of the position of the patient in the scanner while comprising both anatomical and geometrical properties of the spine.
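The abstract describes the 3D spine curve as a polynomial passing through the vertebral body centres. A minimal sketch of one such fit — not the authors' method; the chord-length parameterisation and the synthetic centres are assumptions:

```python
import numpy as np

def fit_spine_curve(centres, degree=3):
    """Fit one polynomial per coordinate to vertebral body centres,
    parameterised by normalised cumulative chord length t in [0, 1]."""
    centres = np.asarray(centres, dtype=float)
    seg = np.linalg.norm(np.diff(centres, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    coeffs = [np.polyfit(t, centres[:, k], degree) for k in range(centres.shape[1])]

    def curve(ts):
        ts = np.atleast_1d(ts)
        return np.stack([np.polyval(c, ts) for c in coeffs], axis=1)

    return curve

# Synthetic vertebral centres lying on a gently curved path
z = np.linspace(0, 100, 8)
centres = np.stack([5 * np.sin(z / 40), 3 * np.cos(z / 50), z], axis=1)
curve = fit_spine_curve(centres)
pts = curve(np.linspace(0, 1, 50))  # resampled curve for CPR cross-sections
```

Evaluating `curve` at dense parameter values yields the cross-section positions along which the CPR planes would be resampled.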

  8. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  9. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  10. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    Science.gov (United States)

    Wynn, L. K.

    1985-01-01

The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
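The map comparison above reduces to counting matching pixels, overall and per CCM region. A minimal sketch with toy class maps (not DMA data):

```python
import numpy as np

def regional_agreement(map_a, map_b):
    """Percent of pixels with the same class in both maps,
    overall and per class of the reference map (map_a)."""
    map_a, map_b = np.asarray(map_a), np.asarray(map_b)
    same = map_a == map_b
    overall = 100.0 * same.mean()
    per_class = {}
    for c in np.unique(map_a):
        mask = map_a == c
        per_class[int(c)] = 100.0 * same[mask].mean()
    return overall, per_class

# Toy 3x3 CCM class maps: manual reference vs. automated result
manual = np.array([[1, 1, 2], [2, 2, 3], [3, 3, 3]])
auto = np.array([[1, 1, 2], [2, 3, 3], [3, 3, 2]])
overall, per_class = regional_agreement(manual, auto)
```

Per-region percentages such as these are what the areally weighted mean and standard deviation in the study summarise.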

  11. Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding.

    Science.gov (United States)

    Cohn, J F; Zlochower, A J; Lien, J; Kanade, T

    1999-01-01

The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
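Hierarchical optical-flow trackers of the kind described build on the basic Lucas-Kanade least-squares step. A single-point, single-scale NumPy sketch — illustrative only, not the authors' hierarchical algorithm; the Gaussian test frames are synthetic:

```python
import numpy as np

def lucas_kanade_point(frame1, frame2, x, y, win=3):
    """Estimate optical flow (dx, dy) at one feature point by solving the
    Lucas-Kanade least-squares system over a small window."""
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Ix = np.gradient(f1, axis=1)   # horizontal intensity gradient
    Iy = np.gradient(f1, axis=0)   # vertical intensity gradient
    It = f2 - f1                   # temporal difference
    sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dx, dy)

# Synthetic pair: a Gaussian "feature" shifted one pixel to the right
yy, xx = np.mgrid[0:21, 0:21]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 3.0 ** 2))
f1 = blob(10, 10)
f2 = blob(11, 10)
dx, dy = lucas_kanade_point(f1, f2, x=12, y=10)
```

A hierarchical (coarse-to-fine) tracker repeats this step across image pyramid levels to handle displacements larger than one pixel.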

  12. Image analysis of insulation mineral fibres.

    Science.gov (United States)

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view.

  13. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    Science.gov (United States)

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation.
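As a minimal illustration of a textural feature of the kind such pipelines consume: local standard deviation separates textured structure from flat background. This is a stand-in for the paper's machine-learning approach; the window size, threshold and checkerboard test image are arbitrary:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std(img, win=3):
    """Local standard deviation, a simple textural feature (pure NumPy)."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    w = sliding_window_view(p, (win, win))
    return w.std(axis=(-1, -2))

def segment_by_texture(img, win=3, thresh=0.3):
    """Mark pixels whose local std exceeds a threshold as 'structure'."""
    return local_std(img, win) > thresh

# Flat background with a textured band standing in for neurite-rich regions
img = np.zeros((20, 20))
img[8:13, :] = (np.indices((5, 20)).sum(axis=0) % 2).astype(float)
mask = segment_by_texture(img)
```

A trained classifier replaces the fixed threshold by learning a decision boundary over many such features.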

  14. Comparative Analysis of Manual and Automated AFEES

    Science.gov (United States)

    1976-05-14

system and in-house technical studies of AFEES-related issues. The design of the system was performed by Computer Sciences Corporation (CSC..."two day" blood pressure and pulse and/or answer any questions that some liaison would have concerning a previously physicalled applicant. This...from the Consumption/Usage Listings as submitted by Computer Sciences Corporation and estimates for automated applicant forms from Central

  15. Automated Abnormal Mass Detection in the Mammogram Images Using Chebyshev Moments

    Directory of Open Access Journals (Sweden)

    Alireza Talebpour

    2013-01-01

Full Text Available Breast cancer is the second leading cause of cancer mortality among women after lung cancer. Early diagnosis of this disease has a major role in its treatment. Thus the use of computer systems as a detection tool could be viewed as essential to helping with this disease. In this study a new system for automated mass detection in mammography images is presented as being more accurate and valid. After optimization of the image, extraction of a better picture of the breast tissue from the image, and application of a log-polar transformation, Chebyshev moments are calculated over all areas of breast tissue. Then, after extracting features effective in the diagnosis of mammography images, abnormal masses, which are important for the physician and specialists, can be determined by applying an appropriate threshold. To check the system performance, images in the MIAS (Mammographic Image Analysis Society) mammogram database were used and the results allowed us to draw a FROC (Free Response Receiver Operating Characteristic) curve. Comparing this FROC curve with those of similar systems confirmed the high ability of our system. In this system, images were processed at three thresholds, specifically 445, 450 and 455, and then put through a sensitivity analysis. The process yielded sensitivities of 100, 92 and 84% and false positive rates per image of 2.56, 0.86 and 0.26, respectively. Compared with other automatic mass detection systems, the proposed method has an advantage over prior systems: the false positive rate and/or sensitivity of the system can be tuned according to the importance of the detection work being done. The proposed system achieves 100% sensitivity at 2.56 false positives per image.
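Discrete Chebyshev (Tchebichef) moments, as used above, project an image onto an orthonormal discrete polynomial basis. One way to construct such a basis is QR factorisation of the Vandermonde matrix — a sketch, not the paper's recurrence-based implementation; the ramp test image is illustrative:

```python
import numpy as np

def discrete_orthopoly_basis(N, order):
    """Orthonormal discrete polynomial basis on {0,...,N-1} via QR of the
    Vandermonde matrix; equal, up to sign, to the normalised discrete
    Chebyshev (Tchebichef) polynomials."""
    x = np.arange(N, dtype=float)
    V = np.vander(x, order + 1, increasing=True)  # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)
    return Q  # (N, order+1), orthonormal columns

def chebyshev_moments(img, order):
    """Moment matrix T[p, q] = sum_y sum_x t_p(y) t_q(x) img[y, x]."""
    H, W = img.shape
    Ty = discrete_orthopoly_basis(H, order)
    Tx = discrete_orthopoly_basis(W, order)
    return Ty.T @ img @ Tx

# A low-order image (linear ramp) is exactly captured by low-order moments
img = np.outer(np.arange(6, dtype=float), np.ones(7))
T = chebyshev_moments(img, order=2)
recon = discrete_orthopoly_basis(6, 2) @ T @ discrete_orthopoly_basis(7, 2).T
```

Because the basis is orthonormal, truncating the moment matrix gives a least-squares low-order approximation of the region, which is what makes such moments compact shape features.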

  16. Automated segmentation of murine lung tumors in x-ray micro-CT images

    Science.gov (United States)

    Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis

    2014-03-01

Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully-automated aids found to be almost non-existent. We present a new approach to automate the segmentation of murine lung tumors designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies that addresses the specific requirements of such studies, as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge enhancing and vessel enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, and consists of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice Similarity=0.88) when compared against manual specialist annotation.
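The two-step segmentation above starts with Otsu thresholding, which picks the cut that maximises between-class variance of the intensity histogram. A self-contained NumPy sketch on synthetic bimodal data (bin count and data are illustrative, not from the study):

```python
import numpy as np

def otsu_threshold(img, nbins=64):
    """Otsu's threshold: maximise between-class variance over histogram bins."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)            # class-0 probability up to each bin
    mu = np.cumsum(p * centers)     # cumulative first moment
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    k = sigma_b2.argmax()
    return 0.5 * (edges[k] + edges[k + 1])

# Bimodal synthetic intensities: background cluster and bright "tumor" cluster
img = np.concatenate([np.linspace(0.0, 50.0, 500), np.linspace(150.0, 200.0, 500)])
t = otsu_threshold(img)
```

In the full pipeline the resulting binary mask would then feed the morphology and watershed steps.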

  17. Prehospital digital photography and automated image transmission in an emergency medical service – an ancillary retrospective analysis of a prospective controlled trial

    Directory of Open Access Journals (Sweden)

    Bergrath Sebastian

    2013-01-01

Full Text Available Abstract Background Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial. In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. Methods A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (jpeg, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of different ratings, a third investigator had final decision competence. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Results Overall 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered “limited quality” and 29 (9.2%) pictures were deemed “not useful” due to not/hardly identifiable content. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age – 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome – 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture

  18. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

The assessment of rodent mammary gland morphology is largely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis, including image processing and quantification of both the whole ductal tree and the terminal end buds. It allows both growth parameters and fine morphological glandular structures to be measured accurately and objectively. Mammary gland elongation was characterized by 2 parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structures were characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be transposed to several software packages and thus be widely used by scientists studying rodent mammary gland morphology.
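Branch end-point density and branching density reduce to neighbour counts on the skeletonised ductal tree. A NumPy sketch under 8-connectivity — illustrative, not the authors' pipeline; the cross-shaped skeleton is a toy input:

```python
import numpy as np

def branch_and_end_points(skel):
    """Count skeleton end-points (exactly 1 neighbour) and branch-point
    pixels (>= 3 neighbours) under 8-connectivity. Note: with
    8-connectivity the pixels touching a junction are flagged as well,
    so adjacent branch pixels are normally clustered into one junction."""
    skel = skel.astype(bool)
    h, w = skel.shape
    p = np.pad(skel, 1)
    nb = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    ends = int((skel & (nb == 1)).sum())
    branches = int((skel & (nb >= 3)).sum())
    return ends, branches

# A cross-shaped toy "ductal tree": four arms meeting at the centre
skel = np.zeros((7, 7), dtype=bool)
skel[3, :] = True
skel[:, 3] = True
ends, branches = branch_and_end_points(skel)
```

Dividing such counts by the epithelial area of the ductal tree yields the density measures named in the abstract.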

  19. Automated identification of retained surgical items in radiological images

    Science.gov (United States)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

Retained surgical items (RSIs) in patients is a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  20. Analysis of Trinity Power Metrics for Automated Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Michalenko, Ashley Christine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

This is a presentation from Los Alamos National Laboratory (LANL) on the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for the analysis, tools used, the methodology, work performed during the summer, and future work planned.

  1. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
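The Ridler algorithm named above is the classic iterative (Ridler-Calvard, ISODATA) threshold: move the threshold to the midpoint of the two class means until it stops changing. A sketch on synthetic two-level data (tolerance and intensities are illustrative):

```python
import numpy as np

def ridler_threshold(img, tol=0.5, max_iter=100):
    """Ridler-Calvard (ISODATA) automatic threshold: iterate the threshold
    to the midpoint of the two class means until convergence."""
    t = img.mean()
    for _ in range(max_iter):
        lo = img[img <= t]
        hi = img[img > t]
        if lo.size == 0 or hi.size == 0:
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Synthetic "sphere in background" intensities: 900 background voxels at 10,
# 100 object voxels at 100
img = np.concatenate([np.full(900, 10.0), np.full(100, 100.0)])
t = ridler_threshold(img)
```

Unlike a fixed 42%-of-maximum cut, the converged threshold adapts to each image's own intensity statistics.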

  2. A High-Throughput Automated Microfluidic Platform for Calcium Imaging of Taste Sensing

    Directory of Open Access Journals (Sweden)

    Yi-Hsing Hsiao

    2016-07-01

    Full Text Available The human enteroendocrine L cell line NCI-H716, expressing taste receptors and taste signaling elements, constitutes a unique model for the studies of cellular responses to glucose, appetite regulation, gastrointestinal motility, and insulin secretion. Targeting these gut taste receptors may provide novel treatments for diabetes and obesity. However, NCI-H716 cells are cultured in suspension and tend to form multicellular aggregates, preventing high-throughput calcium imaging due to interferences caused by laborious immobilization and stimulus delivery procedures. Here, we have developed an automated microfluidic platform that is capable of trapping more than 500 single cells into microwells with a loading efficiency of 77% within two minutes, delivering multiple chemical stimuli and performing calcium imaging with enhanced spatial and temporal resolutions when compared to bath perfusion systems. Results revealed the presence of heterogeneity in cellular responses to the type, concentration, and order of applied sweet and bitter stimuli. Sucralose and denatonium benzoate elicited robust increases in the intracellular Ca2+ concentration. However, glucose evoked a rapid elevation of intracellular Ca2+ followed by reduced responses to subsequent glucose stimulation. Using Gymnema sylvestre as a blocking agent for the sweet taste receptor confirmed that different taste receptors were utilized for sweet and bitter tastes. This automated microfluidic platform is cost-effective, easy to fabricate and operate, and may be generally applicable for high-throughput and high-content single-cell analysis and drug screening.
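Calcium-imaging responses such as those described are conventionally reported as ΔF/F0, the fluorescence change relative to a pre-stimulus baseline. A minimal sketch (the baseline length and trace values are illustrative, not from the study):

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=10):
    """ΔF/F0 for a single-cell fluorescence trace: F0 is the mean of the
    pre-stimulus baseline frames."""
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# Toy trace: baseline of 100 a.u., then a transient peaking at 150 a.u.
trace = np.array([100.0] * 10 + [120.0, 150.0, 130.0, 110.0, 100.0])
dff = delta_f_over_f(trace)
```

Applying this per trapped cell turns the raw image stack into the per-cell response curves on which the heterogeneity analysis rests.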

  3. Image patch-based method for automated classification and detection of focal liver lesions on CT

    Science.gov (United States)

    Safdari, Mustafa; Pasari, Raghav; Rubin, Daniel; Greenspan, Hayit

    2013-03-01

We developed a method for automated classification and detection of liver lesions in CT images based on image patch representation and bag-of-visual-words (BoVW). BoVW analysis has been extensively used in the computer vision domain to analyze scenery images. In the current work we discuss how it can be used for liver lesion classification and detection. The methodology includes building a dictionary for a training set using local descriptors and representing a region in the image using a visual word histogram. Two tasks are described: a classification task, for lesion characterization, and a detection task in which a scan window moves across the image and is determined to be normal liver tissue or a lesion. Data: In the classification task 73 CT images of liver lesions were used, 25 images having cysts, 24 having metastasis and 24 having hemangiomas. A radiologist circumscribed the lesions, creating a region of interest (ROI), in each of the images. He then provided the diagnosis, which was established either by biopsy or clinical follow-up. Thus our data set comprises 73 images and 73 ROIs. In the detection task, a radiologist drew ROIs around each liver lesion and two regions of normal liver, for a total of 159 liver lesion ROIs and 146 normal liver ROIs. The radiologist also demarcated the liver boundary. Results: Classification results of more than 95% were obtained. In the detection task, the F1 score obtained was 0.76, with recall of 84% and precision of 73%. Results show the ability to detect lesions, regardless of shape.
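The BoVW representation assigns each local descriptor to its nearest codebook word and histograms the counts. A minimal sketch — the 2-D descriptors and 3-word codebook are toy values, and real pipelines learn the codebook by clustering training descriptors:

```python
import numpy as np

def assign_words(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word (Euclidean)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def bovw_histogram(descriptors, codebook):
    """Represent an ROI as a normalised histogram of visual-word counts."""
    words = assign_words(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy codebook of three 2-D "visual words" and five patch descriptors
codebook = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
descriptors = np.array([[1.0, 0.0], [9.0, 1.0], [0.0, 9.0], [0.0, 8.0], [1.0, 1.0]])
hist_roi = bovw_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what a downstream classifier consumes, for both the lesion-characterization and scan-window detection tasks.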

  4. Automated grading of renal cell carcinoma using whole slide imaging

    Directory of Open Access Journals (Sweden)

    Fang-Cheng Yeh

    2014-01-01

    Full Text Available Introduction: Recent technology developments have demonstrated the benefit of using whole slide imaging (WSI) in computer-aided diagnosis. In this paper, we explore the feasibility of using automatic WSI analysis to assist grading of clear cell renal cell carcinoma (RCC), which is a manual task traditionally performed by pathologists. Materials and Methods: Automatic WSI analysis was applied to 39 hematoxylin and eosin-stained digitized slides of clear cell RCC with varying grades. Kernel regression was used to estimate the spatial distribution of nuclear size across the entire slides. The analysis results were correlated with Fuhrman nuclear grades determined by pathologists. Results: The spatial distribution of nuclear size provided a panoramic view of the tissue sections. The distribution images facilitated locating regions of interest, such as high-grade regions and areas with necrosis. The statistical analysis showed that the maximum nuclear size was significantly different (P < 0.001) between low-grade (Grades I and II) and high-grade tumors (Grades III and IV). The receiver operating characteristic analysis showed that the maximum nuclear size distinguished high-grade and low-grade tumors with a false positive rate of 0.2 and a true positive rate of 1.0. The area under the curve is 0.97. Conclusion: The automatic WSI analysis allows pathologists to see the spatial distribution of nuclear size inside the tumors. The maximum nuclear size can also be used to differentiate low-grade and high-grade clear cell RCC with good sensitivity and specificity. These data suggest that automatic WSI analysis may facilitate pathologic grading of renal tumors and reduce variability encountered with manual grading.
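
The reported operating point (true positive rate 1.0 at false positive rate 0.2, AUC 0.97) comes from thresholding a single feature, the maximum nuclear size. A generic threshold-sweep ROC in NumPy, on invented numbers, looks like this:

```python
import numpy as np

def roc_curve_points(scores, labels):
    """Sweep every distinct score as a threshold ('high grade if score >= t')
    and return false/true positive rates plus the trapezoidal AUC."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = (labels == 1).sum(), (labels == 0).sum()
    fpr, tpr = [0.0], [0.0]
    for t in np.unique(scores)[::-1]:
        pred = scores >= t
        tpr.append((pred & (labels == 1)).sum() / pos)
        fpr.append((pred & (labels == 0)).sum() / neg)
    fpr.append(1.0)
    tpr.append(1.0)
    fpr, tpr = np.array(fpr), np.array(tpr)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoid rule
    return fpr, tpr, auc

# Hypothetical maximum-nuclear-size measurements (arbitrary units):
sizes  = [9.1, 10.4, 11.0, 12.2, 17.5, 18.3, 19.8, 21.0]
grades = [0,   0,    0,    0,    1,    1,    1,    1]  # 1 = high grade
fpr, tpr, auc = roc_curve_points(sizes, grades)
```

On this perfectly separable toy data the AUC is 1.0; the study's 0.97 reflects overlap in real measurements.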

  5. An automated image processing method to quantify collagen fibre organization within cutaneous scar tissue.

    Science.gov (United States)

    Quinn, Kyle P; Golberg, Alexander; Broelsch, G Felix; Khan, Saiqa; Villiger, Martin; Bouma, Brett; Austen, William G; Sheridan, Robert L; Mihm, Martin C; Yarmush, Martin L; Georgakoudi, Irene

    2015-01-01

    Standard approaches to evaluate scar formation within histological sections rely on qualitative evaluations and scoring, which limits our understanding of the remodelling process. We have recently developed an image analysis technique for the rapid quantification of fibre alignment at each pixel location. The goal of this study was to evaluate its application for quantitatively mapping scar formation in histological sections of cutaneous burns. To this end, we utilized directional statistics to define maps of fibre density and directional variance from Masson's trichrome-stained sections for quantifying changes in collagen organization during scar remodelling. Significant increases in collagen fibre density are detectable soon after burn injury in a rat model. Decreased fibre directional variance in the scar was also detectable between 3 weeks and 6 months after injury, indicating increasing fibre alignment. This automated analysis of fibre organization can provide objective surrogate endpoints for evaluating cutaneous wound repair and regeneration.
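
The directional-statistics step can be illustrated in a few lines of NumPy: for axial data such as fibre orientations, angles are doubled before computing the mean resultant length, and the directional variance is one minus that length. This is a generic sketch, not the authors' code.

```python
import numpy as np

def directional_variance(angles_deg):
    """Circular variance for axial (fibre) data. Angles are doubled because
    a fibre at theta and theta + 180 deg is the same orientation.
    Returns 0 for perfectly aligned fibres, 1 for uniformly spread ones."""
    a = np.deg2rad(np.asarray(angles_deg, float)) * 2.0
    r = np.hypot(np.cos(a).mean(), np.sin(a).mean())  # mean resultant length
    return 1.0 - r
```

Mapping this quantity per pixel neighbourhood over a trichrome-stained section gives the kind of directional-variance map the study uses to track scar remodelling.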

  6. Automated Analysis of Imaging Based Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — For many applications involving liquid injection, the ability to predict the details of the breakup process is often limited due to the complexity of the two-phase...

  7. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C

    2013-01-01

    In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subjected to investigator bias. Here we show that image cytometry can be used to accurately measure the sperm concentration of human semen samples with great ease and reproducibility. The impact of several factors (pipetting, mixing, round cell content, sperm concentration), which can influence the read-out, was examined as well [...] and easy measurement of human sperm concentration.

  8. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    Science.gov (United States)

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-01-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample. PMID:28252673

  9. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    Science.gov (United States)

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-03-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.
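
Once the electron-beam position is visualized in the fluorescence channel via the cathodoluminescence pointers, registration reduces to fitting a coordinate transform between corresponding point sets. The sketch below fits a 2-D affine transform by least squares; it is a generic illustration with invented function names, not the authors' registration code.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst.
    Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1]^T."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # n x 3 homogeneous coords
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return coef.T                                   # 2 x 3

def apply_affine(A, pts):
    """Apply the 2x3 affine matrix to an n x 2 array of points."""
    pts = np.asarray(pts, float)
    return pts @ A[:, :2].T + A[:, 2]
```

An affine model absorbs translation, rotation, scale, and shear, which is why it can also correct the image distortions the authors mention.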

  10. Evaluation of a software package for automated quality assessment of contrast detail images--comparison with subjective visual assessment.

    Science.gov (United States)

    Pascoal, A; Lawinski, C P; Honey, I; Blake, P

    2005-12-07

    Contrast detail analysis is commonly used to assess image quality (IQ) associated with diagnostic imaging systems. Applications include routine assessment of equipment performance and optimization studies. Most frequently, the evaluation of contrast detail images involves human observers visually detecting the threshold contrast detail combinations in the image. However, the subjective nature of human perception and the variations in the decision threshold pose limits to the minimum image quality variations detectable with reliability. Objective methods of assessment of image quality such as automated scoring have the potential to overcome the above limitations. A software package (CDRAD analyser) developed for automated scoring of images produced with the CDRAD test object was evaluated. Its performance to assess absolute and relative IQ was compared with that of an average observer. Results show that the software does not mimic the absolute performance of the average observer. The software proved more sensitive and was able to detect smaller low-contrast variations. The observer's performance was superior to the software's in the detection of smaller details. Both scoring methods showed frequent agreement in the detection of image quality variations resulting from changes in kVp and KERMA(detector), which indicates the potential to use the software CDRAD analyser for assessment of relative IQ.

  11. Evaluation of a software package for automated quality assessment of contrast detail images-comparison with subjective visual assessment

    Energy Technology Data Exchange (ETDEWEB)

    Pascoal, A [Medical Engineering and Physics, King's College London, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom); Lawinski, C P [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom); Honey, I [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom); Blake, P [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom)

    2005-12-07

    Contrast detail analysis is commonly used to assess image quality (IQ) associated with diagnostic imaging systems. Applications include routine assessment of equipment performance and optimization studies. Most frequently, the evaluation of contrast detail images involves human observers visually detecting the threshold contrast detail combinations in the image. However, the subjective nature of human perception and the variations in the decision threshold pose limits to the minimum image quality variations detectable with reliability. Objective methods of assessment of image quality such as automated scoring have the potential to overcome the above limitations. A software package (CDRAD analyser) developed for automated scoring of images produced with the CDRAD test object was evaluated. Its performance to assess absolute and relative IQ was compared with that of an average observer. Results show that the software does not mimic the absolute performance of the average observer. The software proved more sensitive and was able to detect smaller low-contrast variations. The observer's performance was superior to the software's in the detection of smaller details. Both scoring methods showed frequent agreement in the detection of image quality variations resulting from changes in kVp and KERMA(detector), which indicates the potential to use the software CDRAD analyser for assessment of relative IQ.

  12. AMIsurvey, chimenea and other tools: Automated imaging for transient surveys with existing radio-observatories

    CERN Document Server

    Staley, Tim D

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, making use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. These packages...

  13. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung cancer. [...] for localization of lesions in the PET images in the feature extraction process. Eight features from each examination were used as inputs to artificial neural networks trained to classify the images. Thereafter, the performance of the network was evaluated in the test set. RESULTS: The performance of the automated method, measured as the area under the receiver operating characteristic curve, was 0.97 in the test group, with an accuracy of 92%. The sensitivity was 86% at a specificity of 100%. CONCLUSIONS: A completely automated method using artificial neural networks can be used to detect lung cancer [...]

  14. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    2001-01-01

    [...] an initialization scheme is designed, thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate...

  15. Sample preparation and in situ hybridization techniques for automated molecular cytogenetic analysis of white blood cells

    Energy Technology Data Exchange (ETDEWEB)

    Rijke, F.M. van de; Vrolijk, H.; Sloos, W. [Leiden Univ. (Netherlands)] [and others

    1996-06-01

    With the advent of in situ hybridization techniques for the analysis of chromosome copy number or structure in interphase cells, the diagnostic and prognostic potential of cytogenetics has been augmented considerably. In theory, the strategies for detection of cytogenetically aberrant cells by in situ hybridization are simple and straightforward. In practice, however, they are fallible, because false classification of hybridization spot number or patterns occurs. When a decision has to be made on molecular cytogenetic normalcy or abnormalcy of a cell sample, the problem of false classification becomes particularly prominent if the fraction of aberrant cells is relatively small. In such mosaic situations, often > 200 cells have to be evaluated to reach a statistically sound figure. The manual enumeration of in situ hybridization spots in many cells in many patient samples is tedious. Assistance in the evaluation process by automation of microscope functions and image analysis techniques is, therefore, strongly indicated. Next to research and development of microscope hardware, camera technology, and image analysis, the optimization of the specimen for the (semi)automated microscopic analysis is essential, since factors such as cell density, thickness, and overlap have dramatic influences on the speed and complexity of the analysis process. Here we describe experiments that have led to a protocol for blood cell specimens that results in microscope preparations that are well suited for automated molecular cytogenetic analysis. 13 refs., 4 figs., 1 tab.

  16. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Thompson, Adam B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Joshua L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  17. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies in the last 10 years were scrutinized and included in the review. Furthermore, we will try to give an overview of the available commercial software for automatic retinal image analysis.

  18. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    Science.gov (United States)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  19. An Automated Data Analysis Tool for Livestock Market Data

    Science.gov (United States)

    Williams, Galen S.; Raper, Kellie Curry

    2011-01-01

    This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…

  20. Automated Analysis of Child Phonetic Production Using Naturalistic Recordings

    Science.gov (United States)

    Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill

    2014-01-01

    Purpose: Conventional resource-intensive methods for child phonetic development studies are often impractical for sampling and analyzing child vocalizations in sufficient quantity. The purpose of this study was to provide new information on early language development by an automated analysis of child phonetic production using naturalistic…

  1. Reproducibility of In Vivo Corneal Confocal Microscopy Using an Automated Analysis Program for Detection of Diabetic Sensorimotor Polyneuropathy.

    Directory of Open Access Journals (Sweden)

    Ilia Ostrovski

    Full Text Available In vivo Corneal Confocal Microscopy (IVCCM) is a validated, non-invasive test for diabetic sensorimotor polyneuropathy (DSP) detection, but its utility is limited by the image analysis time and expertise required. We aimed to determine the inter- and intra-observer reproducibility of a novel automated analysis program compared to manual analysis. In a cross-sectional diagnostic study, 20 non-diabetes controls (mean age 41.4±17.3y, HbA1c 5.5±0.4%) and 26 participants with type 1 diabetes (42.8±16.9y, 8.0±1.9%) underwent two separate IVCCM examinations by one observer and a third by an independent observer. Along with nerve density and branch density, corneal nerve fibre length (CNFL) was obtained by manual analysis (CNFLManual), a protocol in which images were manually selected for automated analysis (CNFLSemi-Automated), and one in which selection and analysis were performed electronically (CNFLFully-Automated). Reproducibility of each protocol was determined using intraclass correlation coefficients (ICC) and, as a secondary objective, the method of Bland and Altman was used to explore agreement between protocols. Mean CNFLManual was 16.7±4.0 and 13.9±4.2 mm/mm2 for non-diabetes controls and diabetes participants, while CNFLSemi-Automated was 10.2±3.3 and 8.6±3.0 mm/mm2 and CNFLFully-Automated was 12.5±2.8 and 10.9±2.9 mm/mm2. Inter-observer ICC and 95% confidence intervals (95%CI) were 0.73 (0.56, 0.84), 0.75 (0.59, 0.85), and 0.78 (0.63, 0.87), respectively (p = NS for all comparisons). Intra-observer ICC and 95%CI were 0.72 (0.55, 0.83), 0.74 (0.57, 0.85), and 0.84 (0.73, 0.91), respectively (p < 0.05 for CNFLFully-Automated compared to others). The other IVCCM parameters had substantially lower ICC compared to those for CNFL. CNFLSemi-Automated and CNFLFully-Automated underestimated CNFLManual by a mean (95%CI) of 35.1 (-4.5, 67.5)% and 21.0 (-21.6, 46.1)%, respectively. Despite an apparent measurement (underestimation) bias in comparison to the manual strategy of image [...]
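
The reproducibility statistic used here, the intraclass correlation coefficient, can be computed directly from a two-way ANOVA decomposition. Below is a minimal NumPy sketch of the two-way random-effects, absolute-agreement, single-measure form ICC(2,1) in the standard Shrout-Fleiss parameterization; the exact ICC variant used in the study is not stated, so this is illustrative only and the data are synthetic.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `ratings` is an (n subjects x k observers) array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((Y.mean(0) - grand) ** 2).sum() / (k - 1)  # observers
    resid = Y - Y.mean(1, keepdims=True) - Y.mean(0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

With perfectly agreeing observers the ICC is 1; observer noise pulls it toward 0.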

  2. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    Full Text Available While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.
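
The classification step of such a pipeline is a standard supervised-learning task. The sketch below is a minimal, hypothetical illustration using scikit-learn's RandomForestClassifier on synthetic per-cell features; the feature choices (area, perimeter, mean intensity) and the two cell-type categories are assumptions, not the study's actual descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-cell features: [area, perimeter, mean intensity].
# Two hypothetical cell-type categories with distinct size distributions.
rng = np.random.default_rng(0)
type_a = np.column_stack([rng.normal(50, 5, 100),
                          rng.normal(30, 3, 100),
                          rng.normal(0.2, 0.05, 100)])
type_b = np.column_stack([rng.normal(200, 20, 100),
                          rng.normal(60, 5, 100),
                          rng.normal(0.6, 0.05, 100)])
X = np.vstack([type_a, type_b])
y = np.array([0] * 100 + [1] * 100)  # 0 = type A, 1 = type B

# Fit a random forest and score it on this well-separated toy data.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
accuracy = clf.score(X, y)
```

In practice the features would come from the segmentation step, and accuracy would be estimated on held-out cells rather than the training set.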

  3. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features are then matched, and the Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
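
The Delaunay-isomorphism check can be sketched with SciPy: triangulate each matched point set and compare the triangles as index sets. Delaunay triangulations are preserved under similarity transforms, so correctly matched features in the two images should triangulate identically. Function names are assumptions; this is not the paper's MATLAB implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulation_signature(points):
    """Canonical set of triangles, each as a sorted tuple of vertex indices."""
    tri = Delaunay(np.asarray(points, float))
    return {tuple(sorted(s)) for s in tri.simplices}

def triangulations_match(pts_ref, pts_sensed):
    """Matched features in both images should triangulate identically;
    disagreement flags bad correspondences."""
    return triangulation_signature(pts_ref) == triangulation_signature(pts_sensed)
```

Because the signature is built from point indices, the two input arrays must list corresponding features in the same order.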

  4. TScratch: a novel and simple software tool for automated analysis of monolayer wound healing assays.

    Science.gov (United States)

    Gebäck, Tobias; Schulz, Martin Michael Peter; Koumoutsakos, Petros; Detmar, Michael

    2009-04-01

    Cell migration plays a major role in development, physiology, and disease, and is frequently evaluated in vitro by the monolayer wound healing assay. The assay analysis, however, is a time-consuming task that is often performed manually. In order to accelerate this analysis, we have developed TScratch, a new, freely available image analysis technique and associated software tool that uses the fast discrete curvelet transform to automate the measurement of the area occupied by cells in the images. This tool helps to significantly reduce the time needed for analysis and enables objective and reproducible quantification of assays. The software also offers a graphical user interface which allows easy inspection of analysis results and, if desired, manual modification of analysis parameters. The automated analysis was validated by comparing its results with manual-analysis results for a range of different cell lines. The comparisons demonstrate a close agreement for the vast majority of images that were examined and indicate that the present computational tool can reproduce statistically significant results in experiments with well-known cell migration inhibitors and enhancers.
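
TScratch itself uses the fast discrete curvelet transform; a far simpler baseline for the same measurement, the cell-free fraction of a monolayer image, is global Otsu thresholding, sketched below. The function names and the assumption that cells are darker than the open wound area are illustrative, not TScratch's actual approach.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 (below-threshold) weight
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (mu_total - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def open_area_fraction(image):
    """Fraction of the image not occupied by cells (cells assumed darker)."""
    t = otsu_threshold(image)
    return float((image > t).mean())
```

Tracking this fraction over time yields the wound-closure curve the assay is after.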

  5. Direct identification of pure penicillium species using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    2000-01-01

    This paper presents a method for direct identification of fungal species solely by means of digital image analysis of colonies as seen after growth on a standard medium. The method described is completely automated and hence objective once digital images of the reference fungi have been established...

  6. Fully automated apparatus for the proximate analysis of coals

    Energy Technology Data Exchange (ETDEWEB)

    Fukumoto, K.; Ishibashi, Y.; Ishii, T.; Maeda, K.; Ogawa, A.; Gotoh, K.

    1985-01-01

    The authors report the development of fully-automated equipment for the proximate analysis of coals, a development undertaken with the twin aims of labour-saving and developing robot applications technology. This system comprises a balance, electric furnaces, a sulfur analyzer, etc., arranged concentrically around a multi-jointed robot which automatically performs all the necessary operations, such as sampling and weighing the materials for analysis, and inserting and removing them from the furnaces. 2 references.

  7. A fully automated multicapillary electrophoresis device for DNA analysis.

    Science.gov (United States)

    Behr, S; Mätzig, M; Levin, A; Eickhoff, H; Heller, C

    1999-06-01

    We describe the construction and performance of a fully automated multicapillary electrophoresis system for the analysis of fluorescently labeled biomolecules. A special detection system allows the simultaneous spectral analysis of all 96 capillaries. The main features are true parallel detection without any moving parts, high robustness, and full compatibility to existing protocols. The device can process up to 40 microtiter plates (96 and 384 well) without human interference, which means up to 15,000 samples before it has to be reloaded.

  8. Automated detection of optic disk in retinal fundus images using intuitionistic fuzzy histon segmentation.

    Science.gov (United States)

    Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chua, Chua Kuang; Min, Lim Choo; Ng, E Y K; Mushrif, Milind M; Laude, Augustinus

    2013-01-01

    The human eye is one of the most sophisticated organs, with perfectly interrelated retina, pupil, iris, cornea, lens, and optic nerve. Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Uncontrolled diabetic retinopathy (DR) and glaucoma may lead to blindness. The identification of retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases. The manual examination of optic disk (OD) is a standard procedure used for detecting different stages of DR and glaucoma. In this article, a novel automated, reliable, and efficient OD localization and segmentation method using digital fundus images is proposed. General-purpose edge detection algorithms often fail to segment the OD due to fuzzy boundaries, inconsistent image contrast, or missing edge features. This article proposes a novel and probably the first method using the Atanassov intuitionistic fuzzy histon (A-IFSH)-based segmentation to detect OD in retinal fundus images. OD pixel intensity and column-wise neighborhood operation are employed to locate and isolate the OD. The method has been evaluated on 100 images comprising 30 normal, 39 glaucomatous, and 31 DR images. Our proposed method has yielded precision of 0.93, recall of 0.91, F-score of 0.92, and mean segmentation accuracy of 93.4%. We have also compared the performance of our proposed method with the Otsu and gradient vector flow (GVF) snake methods. Overall, our result shows the superiority of proposed fuzzy segmentation technique over other two segmentation methods.
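
A drastically simplified stand-in for the intensity and column-wise neighbourhood operations described (not the A-IFSH method itself) is to smooth the column-wise and row-wise mean-intensity profiles and take their peaks as the OD centre, since the OD is typically the brightest compact region in a fundus image:

```python
import numpy as np

def locate_bright_disk(image, win=15):
    """Crude optic-disk proxy: the peak of the smoothed column-wise mean
    intensity gives the x position, the row-wise profile gives y."""
    k = np.ones(win) / win  # moving-average kernel
    col_profile = np.convolve(image.mean(axis=0), k, mode='same')
    row_profile = np.convolve(image.mean(axis=1), k, mode='same')
    return int(row_profile.argmax()), int(col_profile.argmax())
```

Real fundus images need the fuzzy-histon machinery precisely because bright lesions and uneven illumination break this naive intensity assumption.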

  9. A fully automated method for quantifying and localizing white matter hyperintensities on MR images.

    Science.gov (United States)

    Wu, Minjie; Rosano, Caterina; Butters, Meryl; Whyte, Ellen; Nable, Megan; Crooks, Ryan; Meltzer, Carolyn C; Reynolds, Charles F; Aizenstein, Howard J

    2006-12-01

    White matter hyperintensities (WMH), commonly found on T2-weighted FLAIR brain MR images in the elderly, are associated with a number of neuropsychiatric disorders, including vascular dementia, Alzheimer's disease, and late-life depression. Previous MRI studies of WMHs have primarily relied on the subjective and global (i.e., full-brain) ratings of WMH grade. In the current study we implement and validate an automated method for quantifying and localizing WMHs. We adapt a fuzzy-connected algorithm to automate the segmentation of WMHs and use a demons-based image registration to automate the anatomic localization of the WMHs using the Johns Hopkins University White Matter Atlas. The method is validated using the brain MR images acquired from eleven elderly subjects with late-onset late-life depression (LLD) and eight elderly controls. This dataset was chosen because LLD subjects are known to have significant WMH burden. The volumes of WMH identified in our automated method are compared with the accepted gold standard (manual ratings), and a significant correlation between the two is found. Consistent with a previous study of WMHs in late-life depression [Progress in Neuro-Psychopharmacology and Biological Psychiatry, 27(3), 539-544], we found there was a significantly greater WMH burden in the LLD subjects versus the controls for both the manual and automated method. The effect size was greater for the automated method, suggesting that it is a more specific measure. Additionally, we describe the anatomic localization of the WMHs in LLD subjects as well as in the control subjects, and detect the regions of interest (ROIs) specific for the WMH burden of LLD patients. Given the emergence of large NeuroImage databases, techniques such as that described here will allow for a better understanding of the relationship between WMHs and neuropsychiatric disorders.
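
Fuzzy-connectedness segmentation assigns each voxel a strength of connectedness to a seed; a much simpler relative, seeded region growing with an intensity tolerance, conveys the basic idea in a few lines. This is a sketch under that simplification, not the paper's algorithm.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, tol):
    """Seeded region growing on a 2-D image: accept 4-connected neighbours
    whose intensity stays within `tol` of the seed value (a hard-threshold
    simplification of fuzzy-connectedness segmentation)."""
    h, w = image.shape
    ref = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(image[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Fuzzy connectedness replaces the hard tolerance with a graded affinity, which makes it far more robust to the intensity inhomogeneity typical of FLAIR images.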

  10. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations on solar-type stars has revealed a need for automated analysis tools. The reason for this is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open...... the possibility to do population studies on large samples of stars and such population studies demand a consistent analysis. By consistent analysis we understand an analysis that can be performed without the need to make any subjective choices on e.g. mode identification and an analysis where the uncertainties...

  11. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology for producing a panorama or larger image by combining several images with overlapped areas. In much biomedical research, image stitching is highly desirable for acquiring a panoramic image that represents large areas of certain structures or whole sections while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in less time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast so that more features could be extracted. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were paired by a matching algorithm and the transformation parameters were estimated; then the images were blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images, and that our method is able to stitch microscope images automatically with high precision and high speed. The method proposed in this paper is also applicable to registration and stitching of common images, as well as to stitching microscope images in the field of virtual microscopy for the purpose of observing, exchanging, saving, and establishing a database of microscope images.
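    The phase-correlation step used here to find rough overlapping zones can be sketched with plain NumPy (a minimal illustration, not the authors' implementation; the synthetic images and shift below are hypothetical).

```python
import numpy as np

def phase_correlate(ref, sensed):
    """Estimate the integer (row, col) shift of `sensed` relative to
    `ref` from the peak of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(sensed)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
sensed = np.roll(ref, shift=(5, -3), axis=(0, 1))  # known circular shift
print(phase_correlate(ref, sensed))  # → (5, -3)
```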

  12. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    Science.gov (United States)

    Karagiannis, Georgios; Antón Castro, Francesc; Mioc, Darka

    2016-06-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detected are invariant to image rotations, translations, scaling and also to changes in illumination, brightness and 3-dimensional viewpoint. Afterwards, each feature of the reference image is matched with one in the sensed image if, and only if, the distance between them multiplied by a threshold is shorter than the distances between the point and all the other points in the sensed image. Then, the matched features are used to compute the parameters of the homography that transforms the coordinate system of the sensed image to the coordinate system of the reference image. The Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
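    The nearest-to-second-nearest distance criterion described above is Lowe's ratio test; a minimal NumPy sketch (with made-up toy descriptors, not the actual SIFT implementation) might look like:

```python
import numpy as np

def ratio_test_match(desc_ref, desc_sensed, ratio=0.8):
    """Match each reference descriptor to its nearest sensed descriptor,
    keeping the pair only if the nearest distance is below `ratio` times
    the second-nearest distance (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_sensed - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(2)
desc_ref = rng.random((5, 8))                         # toy 8-D descriptors
noisy = desc_ref + 0.01 * rng.standard_normal((5, 8))
desc_sensed = np.vstack([noisy, rng.random((3, 8))])  # plus 3 distractors
matches = ratio_test_match(desc_ref, desc_sensed)
print(matches)
```

The accepted pairs would then feed the homography estimation described in the abstract.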

  13. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    Science.gov (United States)

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
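    The core idea, repeating chained analysis commands over each row of a data table while saving all parameters, can be sketched in plain Python (a hypothetical miniature of the concept, not the actual ImageJ plugin API; the file names and commands are invented).

```python
def run_pipeline(table, commands):
    """Apply a chain of analysis commands to every row of a data table,
    recording the command parameters alongside each result so the
    analysis stays transparent and reproducible."""
    results = []
    for row in table:
        value = row
        for name, cmd, params in commands:
            value = cmd(value, **params)
        results.append({"input": row, "output": value,
                        "parameters": [(n, p) for n, _, p in commands]})
    return results

# Hypothetical two-step chain: read an ROI measurement, then calibrate it.
table = [{"image": "cell_01.tif", "roi_area": 120},
         {"image": "cell_02.tif", "roi_area": 80}]
commands = [("measure", lambda row, **p: row["roi_area"], {}),
            ("calibrate", lambda area, factor: area * factor, {"factor": 0.5})]
print([r["output"] for r in run_pipeline(table, commands)])  # → [60.0, 40.0]
```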

  14. Microscopic images dataset for automation of RBCs counting.

    Science.gov (United States)

    Abbas, Sherif

    2015-12-01

    A method for Red Blood Corpuscles (RBCs) counting has been developed using RBC light microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscles (RBCs) images and their segmented RBC images. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.
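    Counting RBCs from a binary mask amounts to labelling connected components; a minimal pure-NumPy sketch (the toy mask below is invented, not from the dataset) might look like:

```python
import numpy as np
from collections import deque

def count_objects(mask):
    """Count 4-connected foreground components in a binary mask,
    e.g. segmented RBCs in a blood-smear image."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        count += 1                      # new component found
        queue = deque([start])
        seen[start] = True
        while queue:                    # breadth-first flood fill
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    queue.append((nr, nc))
    return count

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0]])
print(count_objects(mask))  # → 3
```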

  15. Automated quadrilateral mesh generation for digital image structures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an arbitrary digital image as input for automatic quadrilateral mesh generation; this includes removing the noise, extracting and smoothing the boundary geometries between different colours, and automatic all-quad mesh generation with the above boundaries as constraints. An application example is...

  16. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscles (RBCs) counting has been developed using RBC light microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscles (RBCs) images and their segmented RBC images. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.

  17. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.
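    The kind of static analysis described, catching workflow design problems before execution, can be illustrated by checking a workflow's dependency graph for cycles with the Python standard library (a generic sketch, not the Kepler analysis engine; the actor names are invented).

```python
from graphlib import TopologicalSorter, CycleError

def check_workflow(deps):
    """Static check of a curation workflow given as a mapping
    {actor: set of upstream actors}: return a valid execution order,
    or a message describing the dependency cycle."""
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError as err:
        return f"cycle detected: {err.args[1]}"

good = {"clean": {"load"}, "validate": {"clean"}, "publish": {"validate"}}
print(check_workflow(good))  # → ['load', 'clean', 'validate', 'publish']

bad = {"a": {"b"}, "b": {"a"}}
print(check_workflow(bad))   # reports the cycle instead of an order
```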

  18. Methodology for fully automated segmentation and plaque characterization in intracoronary optical coherence tomography images.

    Science.gov (United States)

    Athanasiou, Lambros S; Bourantas, Christos V; Rigas, George; Sakellarios, Antonis I; Exarchos, Themis P; Siogkas, Panagiotis K; Ricciardi, Andrea; Naka, Katerina K; Papafaklis, Michail I; Michalis, Lampros K; Prati, Francesco; Fotiadis, Dimitrios I

    2014-02-01

    Optical coherence tomography (OCT) is a light-based intracoronary imaging modality that provides high-resolution cross-sectional images of the luminal and plaque morphology. Currently, the segmentation of OCT images and identification of the composition of plaque are mainly performed manually by expert observers. However, this process is laborious and time consuming, and its accuracy relies on the expertise of the observer. To address these limitations, we present a methodology that is able to process the OCT data in a fully automated fashion. The proposed methodology is able to detect the lumen borders in the OCT frames, identify the plaque region, and detect four tissue types: calcium (CA), lipid tissue (LT), fibrous tissue (FT), and mixed tissue (MT). The efficiency of the developed methodology was evaluated using annotations from 27 OCT pullbacks acquired from 22 patients. High Pearson's correlation coefficients were obtained between the output of the developed methodology and the manual annotations (from 0.96 to 0.99), while the Bland-Altman analysis showed no significant bias and good limits of agreement. The overlapping-areas ratio between the experts' annotations and the methodology in detecting CA, LT, FT, and MT was 0.81, 0.71, 0.87, and 0.81, respectively.
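    The validation statistics used here, Pearson's correlation and Bland-Altman bias with limits of agreement, are easy to compute directly (a generic NumPy sketch with toy numbers, not the study's data).

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias, s = d.mean(), d.std(ddof=1)
    return bias, bias - 1.96 * s, bias + 1.96 * s

auto = [1.0, 2.0, 3.0, 4.0]     # toy measurements, automated method
manual = [2.0, 4.0, 6.0, 8.0]   # toy measurements, manual method
print(pearson_r(auto, manual))  # → 1.0
bias, lo_loa, hi_loa = bland_altman(auto, manual)
print(bias)                     # → -2.5
```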

  19. Foreign object detection and removal to improve automated analysis of chest radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime; Maduskar, Pragnya; Ginneken, Bram van [Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, Nijmegen 6525 GA (Netherlands); Story, Alistair; Hayward, Andrew [University College London, Centre for Infectious Disease Epidemiology, London NW3 2PF (United Kingdom)

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassiere clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel of belonging to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields, and an A_z value of 0.949 is achieved. Free-response receiver operating characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to that of images with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
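    The per-pixel kNN probability estimate described in the Methods can be sketched as follows (a generic NumPy illustration with synthetic 2-D features, not the authors' classifier or its feature set).

```python
import numpy as np

def knn_probability(train_feats, train_labels, query_feats, k=3):
    """For each query pixel's feature vector, the fraction of its k
    nearest training neighbours labelled 1 (foreign object) is the
    probability estimate for that pixel."""
    probs = []
    for q in query_feats:
        dists = np.linalg.norm(train_feats - q, axis=1)
        nearest = np.argsort(dists)[:k]
        probs.append(float(train_labels[nearest].mean()))
    return probs

train_feats = np.array([[0., 0.], [0., 1.], [1., 0.],        # background pixels
                        [10., 10.], [10., 11.], [11., 10.]])  # object pixels
train_labels = np.array([0, 0, 0, 1, 1, 1])
query_feats = np.array([[0.5, 0.5], [10.5, 10.5]])
print(knn_probability(train_feats, train_labels, query_feats))  # → [0.0, 1.0]
```

Pixels whose probability exceeds a threshold would then be grouped into candidate objects, as the abstract describes.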

  20. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    , it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data...... of the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis......, the application of Gabor expansions to image representation is considered in Sect. 6....

  1. An automated detection for axonal boutons in vivo two-photon imaging of mouse

    Science.gov (United States)

    Li, Weifu; Zhang, Dandan; Xie, Qiwei; Chen, Xi; Han, Hua

    2017-02-01

    Activity-dependent changes in the synaptic connections of the brain are tightly related to learning and memory. Previous studies have shown that essentially all new synaptic contacts were made by adding new partners to existing synaptic elements. To further explore synaptic dynamics in specific pathways, concurrent imaging of pre and postsynaptic structures in identified connections is required. Consequently, considerable attention has been paid for the automated detection of axonal boutons. Different from most previous methods proposed in vitro data, this paper considers a more practical case in vivo neuron images which can provide real time information and direct observation of the dynamics of a disease process in mouse. Additionally, we present an automated approach for detecting axonal boutons by starting with deconvolving the original images, then thresholding the enhanced images, and reserving the regions fulfilling a series of criteria. Experimental result in vivo two-photon imaging of mouse demonstrates the effectiveness of our proposed method.

  2. A review of automated image understanding within 3D baggage computed tomography security screening.

    Science.gov (United States)

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter and high levels of noise and artefacts. We discuss the most pertinent recent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  3. Automated scoring of lymphocyte micronuclei by the MetaSystems Metafer image cytometry system and its application in studies of human mutagen sensitivity and biodosimetry of genotoxin exposure.

    Science.gov (United States)

    Rossnerova, Andrea; Spatova, Milada; Schunck, Christian; Sram, Radim J

    2011-01-01

    Automated image analysis scoring of micronuclei (MN) in cells can facilitate the objective and rapid measurement of genetic damage in mammalian and human cells. This approach was repeatedly developed and tested over the past two decades but none of the systems were sufficiently robust for routine analysis of MN until recently. New methodological, hardware and software developments have now allowed more advanced systems to become available. This mini-review presents the current stage of development and validation of the Metasystems Metafer MNScore system for automated image analysis scoring of MN in cytokinesis-blocked binucleated lymphocytes, which is the best-established method for studying MN formation in humans. The results and experience of users of this system from 2004 until today are reviewed in this paper. Significant achievements in the application of this method in research related to mutagen sensitivity phenotype in cancer risk, radiation biodosimetry and biomonitoring studies of air pollution (enriched by new data) are described. Advantages as well as limitations of automated image analysis in comparison with traditional visual analysis are discussed. The current increased use of the Metasystems Metafer MNScore system in various studies and the growing number of publications based on automated image analysis scoring of MN is promising for the ongoing and future application of this approach.

  4. Advanced automated gain adjustments for in-vivo ultrasound imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Automatic gain adjustments are necessary on the state-of-the-art ultrasound scanners to obtain optimal scan quality, while reducing the unnecessary user interactions with the scanner. However, when large anechoic regions exist in the scan plane, the sudden and drastic variation of attenuations...... in the scanned media complicates the gain compensation. This paper presents an advanced and automated gain adjustment method that precisely compensate for the gains on scans and dynamically adapts to the drastic attenuation variations between different media. The proposed algorithm makes use of several...

  5. Automated high-throughput assessment of prostate biopsy tissue using infrared spectroscopic chemical imaging

    Science.gov (United States)

    Bassan, Paul; Sachdeva, Ashwin; Shanks, Jonathan H.; Brown, Mick D.; Clarke, Noel W.; Gardner, Peter

    2014-03-01

    Fourier transform infrared (FT-IR) chemical imaging has been demonstrated as a promising technique to complement histopathological assessment of biomedical tissue samples. Current histopathology practice involves preparing thin tissue sections and staining them using hematoxylin and eosin (H&E), after which a histopathologist manually assesses the tissue architecture under a visible microscope. Studies have shown that there is disagreement between operators viewing the same tissue, suggesting that a complementary technique for verification could improve the robustness of the evaluation and improve patient care. FT-IR chemical imaging allows the spatial distribution of chemistry to be rapidly imaged at a high (diffraction-limited) spatial resolution, where each pixel represents an area of 5.5 × 5.5 μm2 and contains a full infrared spectrum providing a chemical fingerprint which studies have shown contains the diagnostic potential to discriminate between different cell-types, and even the benign or malignant state of prostatic epithelial cells. We report a label-free (i.e. no chemical de-waxing, or staining) method of imaging large pieces of prostate tissue (typically 1 cm × 2 cm) in tens of minutes (at a rate of 0.704 × 0.704 mm2 every 14.5 s) yielding images containing millions of spectra. Due to refractive index matching between sample and surrounding paraffin, minimal signal processing is required to recover spectra with their natural profile as opposed to harsh baseline correction methods, paving the way for future quantitative analysis of biochemical signatures. The quality of the spectral information is demonstrated by building and testing an automated cell-type classifier based upon spectral features.

  6. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab, enabling fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, applied to the skin of a human foot and face. The full source code of the developed application is provided as an attachment. The graphical abstract shows the main window of the program during dynamic analysis of the foot thermal image.

  7. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells

    Science.gov (United States)

    Park, Han Sang; Rinehart, Matthew T.; Walzer, Katelyn A.; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor alone does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for the late trophozoite or schizont stage, and the late trophozoite and schizont stages most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection.
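    A two-class Fisher linear discriminant of the kind LDC denotes can be fitted in a few lines (a generic NumPy sketch on synthetic 23-dimensional descriptors, not the study's phase data; the class separation below is artificial).

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fit a two-class Fisher linear discriminant; classify a sample x
    as class 1 when x @ w > t."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    Sw += 1e-6 * np.eye(Sw.shape[0])                          # regularize
    w = np.linalg.solve(Sw, m1 - m0)
    t = 0.5 * (m0 + m1) @ w                                   # midpoint threshold
    return w, t

rng = np.random.default_rng(1)
uninfected = rng.normal(0.0, 1.0, size=(100, 23))  # synthetic descriptor sets
infected = rng.normal(2.0, 1.0, size=(100, 23))    # artificially shifted class
w, t = fisher_lda(uninfected, infected)
acc = (np.sum(uninfected @ w <= t) + np.sum(infected @ w > t)) / 200
print(f"training accuracy: {acc:.2f}")
```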

  8. Novel automated motion compensation technique for producing cumulative maximum intensity subharmonic images.

    Science.gov (United States)

    Dave, Jaydev K; Forsberg, Flemming

    2009-09-01

    The aim of this study was to develop a novel automated motion compensation algorithm for producing cumulative maximum intensity (CMI) images from subharmonic imaging (SHI) of breast lesions. SHI is a nonlinear contrast-specific ultrasound imaging technique in which pulses are received at half the frequency of the transmitted pulses. A Logiq 9 scanner (GE Healthcare, Milwaukee, WI, USA) was modified to operate in grayscale SHI mode (transmitting/receiving at 4.4/2.2 MHz) and used to scan 14 women with 16 breast lesions. Manual CMI images were reconstructed by temporal maximum-intensity projection of pixels traced from the first frame to the last. In the new automated technique, the user selects a kernel in the first frame and the algorithm then uses the sum of absolute difference (SAD) technique to identify motion-induced displacements in the remaining frames. A reliability parameter was used to estimate the accuracy of the motion tracking based on the ratio of the minimum SAD to the average SAD. Two thresholds (the mean and 85% of the mean reliability parameter) were used to eliminate images plagued by excessive motion and/or noise. The automated algorithm was compared with the manual technique for computational time, correction of motion artifacts, removal of noisy frames and quality of the final image. The automated algorithm compensated for motion artifacts and noisy frames. The computational time was 2 minutes, compared with 60-90 minutes for the manual method. The quality of the motion-compensated CMI-SHI images generated by the automated technique was comparable to that of the manual method and provided a snapshot of the microvasculature showing interconnections between vessels, which was less evident in the original data. In conclusion, an automated algorithm for producing CMI-SHI images has been developed. It eliminates the need for manual processing and yields reproducible images, thereby increasing the throughput and efficiency of reconstructing CMI-SHI images.
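    The SAD tracking with a reliability ratio described above can be sketched in NumPy (a simplified single-kernel illustration with synthetic frames; not the authors' algorithm, and the frame sizes and search window are invented).

```python
import numpy as np

def sad_track(prev_frame, next_frame, top_left, size, search=5):
    """Displacement of a kernel taken from prev_frame within next_frame,
    found by minimizing the sum of absolute differences (SAD) over a
    +/-search window; also returns the reliability ratio
    min(SAD) / mean(SAD) used to flag unreliable tracking."""
    r0, c0 = top_left
    kernel = prev_frame[r0:r0 + size, c0:c0 + size]
    best, best_d, sads = np.inf, (0, 0), []
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if (r < 0 or c < 0 or r + size > next_frame.shape[0]
                    or c + size > next_frame.shape[1]):
                continue  # candidate window falls outside the frame
            sad = float(np.abs(next_frame[r:r + size, c:c + size] - kernel).sum())
            sads.append(sad)
            if sad < best:
                best, best_d = sad, (dr, dc)
    return best_d, best / (sum(sads) / len(sads))

rng = np.random.default_rng(3)
prev = rng.random((40, 40))
nxt = np.roll(prev, shift=(2, 1), axis=(0, 1))  # frame content moved by (2, 1)
disp, rel = sad_track(prev, nxt, top_left=(10, 10), size=8)
print(disp, rel)  # → (2, 1) 0.0
```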

  9. A new automated assessment method for contrast-detail images by applying support vector machine and its robustness to nonlinear image processing.

    Science.gov (United States)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-09-01

    The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C-D analysis method by applying a support vector machine (SVM), and tested it for its robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A partial-diffusion-equation-based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (i.e., plots of the mean smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with the nonlinear noise reduction method as well as for the original radiographic images.

  10. Estimating grass-clover ratio variations caused by traffic intensities using image analysis

    DEFF Research Database (Denmark)

    Jørgensen, Rasmus Nyholm; Sørensen, Claus Grøn; Green, Ole

    ensuring that the whole parcel was imaged. Each image was geo-positioned. The image analysis comprised two steps: Extraction of green material and discrimination of grass and clover using the morphological opening approach. This paper shows the initial results using the automated imaging analysis algorithm...

  11. Comparison of the automated evaluation of phantom mama in digital and digitalized images; Comparacao da avaliacao automatizada do phantom mama em imagens digitais e digitalizadas

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo, E-mail: pcs@cdtn.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear. Programa de Pos-Graduacao em Ciencias e Tecnicas Nucleares; Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Medicina. Dept. de Propedeutica Complementar; Gomes, Danielle Soares; Oliveira, Marcio Alves; Nogueira, Maria do Socorro, E-mail: mnogue@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    Mammography is an essential tool for diagnosis and early detection of breast cancer, provided it is delivered as a very good quality service. The process of evaluating the quality of radiographic images in general, and of mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. This work compares an automated methodology for the evaluation of digital and digitized images of the phantom mama. By applying digital image processing (DIP) techniques, it was possible to determine geometrical and radiometric parameters of the evaluated images. The evaluated parameters include circular details of low contrast, contrast ratio, spatial resolution, tumor masses, optical density and background in scanned and digitized phantom mama images. The results for both types of images were evaluated. Through this comparison it was possible to demonstrate that this automated methodology is a promising alternative for reducing or eliminating subjectivity in both types of images, although the phantom mama presents insufficient parameters for spatial resolution evaluation. (author)

  12. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  13. Automated detection of cardiac phase from intracoronary ultrasound image sequences.

    Science.gov (United States)

    Sun, Zheng; Dong, Yi; Li, Mengchan

    2015-01-01

    Intracoronary ultrasound (ICUS) is a widely used interventional imaging modality in the clinical diagnosis and treatment of cardiac vessel diseases. Due to cyclic cardiac motion and pulsatile blood flow within the lumen, the coronary arterial dimensions change and the imaging catheter moves relative to the lumen during continuous pullback of the catheter. This motion subsequently causes cyclic changes in the image intensity of the acquired image sequence, so information on cardiac phases is implicit in a non-gated ICUS image sequence. A 1-D phase signal reflecting cardiac cycles was extracted according to cyclical changes in local gray-levels in ICUS images. The local extrema of the signal were then detected to retrieve cardiac phases and to retrospectively gate the image sequence. Results on clinically acquired in vivo image data showed that an average inter-frame dissimilarity lower than 0.1 was achievable with our technique. In terms of computational efficiency and complexity, the proposed method was shown to be competitive when compared with current methods: the average frame processing time was lower than 30 ms. We effectively reduced the effect of image noise, useless textures, and non-vessel regions on the phase signal detection by discarding signal components caused by non-cardiac factors.
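    Extracting a 1-D phase signal from frame intensities and locating its local extrema can be sketched as follows (a minimal illustration using the global mean gray-level and synthetic frames; the actual method uses local gray-level changes and removes non-cardiac signal components).

```python
import numpy as np

def cardiac_phase_peaks(frames):
    """Build a 1-D phase signal as the mean gray-level of each frame and
    return the indices of its strict local maxima (one per cycle)."""
    signal = np.array([f.mean() for f in frames])
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    return signal, peaks

# Synthetic frames whose brightness oscillates with an 8-frame "cycle".
frames = [np.full((4, 4), np.sin(2 * np.pi * i / 8)) for i in range(30)]
signal, peaks = cardiac_phase_peaks(frames)
print(peaks)  # → [2, 10, 18, 26]
```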

  14. Flux-P: Automating Metabolic Flux Analysis

    OpenAIRE

    Ebert, Birgitta E.; Anna-Lena Lamprecht; Bernhard Steffen; Blank, Lars M.

    2012-01-01

    Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates can not often be calculated directly but have to be estimated; for instance, via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in ...

  15. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    Dolly, S [Washington University School of Medicine, Saint Louis, MO (United States); University of Missouri, Columbia, MO (United States); Cai, B; Chen, H; Anastasio, M; Sun, B; Yaddanapudi, S; Noel, C; Goddu, S; Mutic, S; Li, H [Washington University School of Medicine, Saint Louis, MO (United States); Tan, J [UTSouthwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Traditionally, the assessment of X-ray tube output and detector positioning accuracy of on-board imagers (OBI) has been performed manually and subjectively with rulers and dosimeters, and typically takes hours to complete. In this study, we have designed a compact modular computational platform to automatically analyze OBI images acquired with in-house designed phantoms as an efficient and robust surrogate. Methods: The platform was developed as an integrated and automated image analysis-based platform using MATLAB for easy modification and maintenance. Given a set of images acquired with the in-house designed phantoms, the X-ray output accuracy was examined via cross-validation of the uniqueness and integration minimization of important image quality assessment metrics, while machine geometric and positioning accuracy were validated by utilizing pattern-recognition based image analysis techniques. Results: The platform input was a set of images of an in-house designed phantom. The total processing time is about 1–2 minutes. Based on the data acquired from three Varian Truebeam machines over the course of 3 months, the designed test validation strategy achieved higher accuracy than traditional methods. The kVp output accuracy can be verified within +/−2 kVp, the exposure accuracy within 2%, and exposure linearity with a coefficient of variation (CV) of 0.1. Sub-millimeter position accuracy was achieved for the lateral and longitudinal positioning tests, while vertical positioning accuracy within +/−2 mm was achieved. Conclusion: This new platform delivers to the radiotherapy field an automated, efficient, and stable image analysis-based procedure, for the first time, acting as a surrogate for traditional tests for LINAC OBI systems. It has great potential to facilitate OBI quality assurance (QA) with the assistance of advanced image processing techniques. In addition, it provides flexible integration of additional tests for expediting other OBI

  16. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  17. A Mixed Approach Of Automated ECG Analysis

    Science.gov (United States)

    De, A. K.; Das, J.; Majumder, D. Dutta

    1982-11-01

ECG is a non-invasive and risk-free technique for collecting data about the functional state of the heart. The existing data-processing techniques can be classified into two basically different approaches: the first- and second-generation ECG computer programs. Not opposition, but symbiosis of these two approaches will lead to systems with the highest accuracy. In this paper we describe a mixed approach that achieves higher accuracy with less computational work. Key words: primary features, patients' parameter matrix, screening, logical comparison technique, multivariate statistical analysis, mixed approach.

  18. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    Netten, van Jaap J.; Baal, van Jeff G.; Liu, Chanjuan; Heijden, van der Ferdi; Bus, Sicco A.

    2013-01-01

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the ap

  19. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    NARCIS (Netherlands)

    Lee, K.; Buitendijk, G.H.; Bogunovic, H.; Springelkamp, H.; Hofman, A.; Wahle, A.; Sonka, M.; Vingerling, J.R.; Klaver, C.C.W.; Abramoff, M.D.

    2016-01-01

    PURPOSE: To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. METHODS: Six hundred ninety macular SD-OCT image volumes (6.0 x 6.0 x 2.3 mm3) we

  20. Automated Selection of Uniform Regions for CT Image Quality Detection

    CERN Document Server

    Naeemi, Maitham D; Roychodhury, Sohini

    2016-01-01

    CT images are widely used in pathology detection and follow-up treatment procedures. Accurate identification of pathological features requires diagnostic quality CT images with minimal noise and artifact variation. In this work, a novel Fourier-transform based metric for image quality (IQ) estimation is presented that correlates to additive CT image noise. In the proposed method, two windowed CT image subset regions are analyzed together to identify the extent of variation in the corresponding Fourier-domain spectrum. The two square windows are chosen such that their center pixels coincide and one window is a subset of the other. The Fourier-domain spectral difference between these two sub-sampled windows is then used to isolate spatial regions-of-interest (ROI) with low signal variation (ROI-LV) and high signal variation (ROI-HV), respectively. Finally, the spatial variance ($var$), standard deviation ($std$), coefficient of variance ($cov$) and the fraction of abdominal ROI pixels in ROI-LV ($\
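The Fourier-domain idea behind this record's metric can be illustrated with a simplified noise estimator: for additive white noise, high-frequency FFT bins are noise-dominated, and their mean power recovers the noise level. This is a stand-in for the paper's nested-window formulation, and the frequency cutoff is an illustrative choice:

```python
import numpy as np

def estimate_noise_sigma(image, cutoff=0.5):
    """Estimate additive white-noise sigma from high-frequency FFT energy.

    Assumes image content is concentrated at low frequencies, so bins
    beyond `cutoff` * Nyquist are dominated by noise. Simplified
    stand-in for the paper's nested-window spectral-difference metric.
    """
    img = np.asarray(image, dtype=float)
    spec_power = np.abs(np.fft.fft2(img)) ** 2
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]   # cycles/pixel, in [-0.5, 0.5)
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)
    high = radius > cutoff * 0.5      # beyond `cutoff` fraction of Nyquist
    # For white noise, E|F|^2 = (number of pixels) * sigma^2 per bin.
    return float(np.sqrt(spec_power[high].mean() / (h * w)))
```

On a flat image corrupted by Gaussian noise of known sigma, the estimate lands close to the true value.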

  1. Hyper-Cam automated calibration method for continuous hyperspectral imaging measurements

    Science.gov (United States)

    Gagnon, Jean-Philippe; Habte, Zewdu; George, Jacks; Farley, Vincent; Tremblay, Pierre; Chamberland, Martin; Romano, Joao; Rosario, Dalton

    2010-04-01

The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be captured by hyperspectral sensors, thus enabling enhanced detection of targets of interest. A continuous hyperspectral imaging measurement capability, operated 24/7 over varying seasons and weather conditions, permits the evaluation of hyperspectral imaging for detection of different types of targets in real-world environments. Such a measurement site was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have been operated autonomously since fall 2009. The Hyper-Cam sensors are currently collecting a complete hyperspectral database that contains MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy, rainy, and snowy conditions. The Telops Hyper-Cam sensor is an imaging spectrometer that enables spatial and spectral analysis with a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range. This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection. The Reveal Automation Control Software (RACS), developed collaboratively by Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets; calibration events are initiated at user-defined time intervals and by internal beamsplitter temperature monitoring. The RACS software is the first software developed for
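The internal-blackbody calibration that this record describes rests on the standard two-point radiometric scheme: two reference measurements at known blackbody radiances fix a gain and offset for the sensor response. A minimal sketch follows (scalar values for brevity; a real calibration is per-pixel and per-wavenumber, and the variable names are illustrative):

```python
def two_point_calibration(s_cold, s_hot, b_cold, b_hot):
    """Gain/offset from two blackbody reference measurements.

    s_cold, s_hot: raw detector signals at the two references;
    b_cold, b_hot: known blackbody radiances at those references.
    Returns (gain, offset) such that radiance = gain * signal + offset.
    """
    gain = (b_hot - b_cold) / (s_hot - s_cold)
    offset = b_cold - gain * s_cold
    return gain, offset

def calibrate(signal, gain, offset):
    """Convert a raw signal to calibrated radiance."""
    return gain * signal + offset
```

For example, references measured at signals 1000 and 3000 with radiances 10 and 50 give a gain of 0.02 and an offset of -10, so a scene signal of 2000 maps to a radiance of 30.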

  2. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

Full Text Available Introduction: The most common causes of diagnostic error are related to errors in laboratory tests and in the interpretation of results. To reduce them, laboratories currently have modern equipment which provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides that are concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 hematology slides were analyzed. Automated analysis was performed on last-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood across 22 parameters. Microscopy was performed by two expert microscopists simultaneously. Results: The data showed that only 42.70% of the slides were concordant, versus 57.30% discordant. The main discordant findings were changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet count, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of the individual and cannot be explained because they were not investigated, which may compromise the final diagnosis. Conclusion: Qualitative microscopic analysis must be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  3. Automated Analysis of Security in Networking Systems

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2004-01-01

It has long been a challenge to build secure networking systems. One way to counter this problem is to provide developers of software applications for networking systems with easy-to-use tools that can check security properties before the applications ever reach the market. These tools...... will both help raise the general level of awareness of the problems and prevent the most basic flaws from occurring. This thesis contributes to the development of such tools. Networking systems typically try to attain secure communication by applying standard cryptographic techniques. In this thesis...... attacks, and attacks launched by insiders. Finally, the perspectives for the application of the analysis techniques are discussed, thereby coming a small step closer to providing developers with easy-to-use tools for validating the security of networking applications....

  4. Automated analysis for lifecycle assembly processes

    Energy Technology Data Exchange (ETDEWEB)

    Calton, T.L.; Brown, R.G.; Peters, R.R.

    1998-05-01

Many manufacturing companies today expend more effort on upgrade and disposal projects than on clean-slate design, and this trend is expected to become more prevalent in coming years. However, commercial CAD tools are better suited to initial product design than to the product's full life cycle. Computer-aided analysis, optimization, and visualization of life cycle assembly processes based on the product CAD data can help ensure accuracy and reduce effort expended in planning these processes for existing products, as well as provide design-for-lifecycle analysis for new designs. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their companies and products as well as to the life cycles of their products. Designing products for easy assembly and disassembly during their entire life cycles, for purposes including service, field repair, upgrade, and disposal, is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and its intended life cycle. Different goals and constraints (compared to initial assembly) require one to revisit the significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited to either academic studies of issues in assembly planning or applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes.

  5. Efficient parallel Levenberg-Marquardt model fitting towards real-time automated parametric imaging microscopy.

    Science.gov (United States)

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, implemented on a graphics processing unit (GPU) for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy.
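The core of any Levenberg-Marquardt fitter, including (in spirit) GPU-LMFit's per-pixel kernels, is the damped Gauss-Newton update. A minimal single-fit CPU sketch follows; the damping schedule and iteration count are illustrative, not GPU-LMFit's actual implementation:

```python
import numpy as np

def levenberg_marquardt(residual_fn, jac_fn, p0, n_iter=50, lam=1e-3):
    """Minimal (CPU, single-fit) Levenberg-Marquardt loop.

    GPU-LMFit runs one such fit per pixel in parallel; this sketch
    shows the core damped update for a single fit.
    """
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual_fn(p) ** 2)
    for _ in range(n_iter):
        r = residual_fn(p)
        J = jac_fn(p)
        JtJ = J.T @ J
        A = JtJ + lam * np.diag(np.diag(JtJ))   # Marquardt-style damping
        step = np.linalg.solve(A, -J.T @ r)
        new_cost = np.sum(residual_fn(p + step) ** 2)
        if new_cost < cost:          # accept step, relax damping
            p, cost, lam = p + step, new_cost, lam * 0.3
        else:                        # reject step, increase damping
            lam *= 10.0
    return p
```

Fitting a clean single-exponential decay y = a*exp(-b*t) from a poor starting guess recovers the true parameters.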

  6. Fully Automated Prostate Magnetic Resonance Imaging and Transrectal Ultrasound Fusion via a Probabilistic Registration Metric

    OpenAIRE

    Sparks, Rachel; Bloch, B. Nicolas; Feleppa, Ernest; Barratt, Dean; Madabhushi, Anant

    2013-01-01

    In this work, we present a novel, automated, registration method to fuse magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) images of the prostate. Our methodology consists of: (1) delineating the prostate on MRI, (2) building a probabilistic model of prostate location on TRUS, and (3) aligning the MRI prostate segmentation to the TRUS probabilistic model. TRUS-guided needle biopsy is the current gold standard for prostate cancer (CaP) diagnosis. Up to 40% of CaP lesions appea...

  7. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van ' t; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and

  8. An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    Directory of Open Access Journals (Sweden)

    Demir Sumeyra U

    2012-12-01

Full Text Available Abstract Background: Imaging of the human microcirculation in real time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated tool that can extract microvasculature information and quantitatively monitor changes in tissue perfusion might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods: The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple-level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results: Sublingual microcirculatory videos were recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings were analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm were compared for both healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease of FCD values. Similar, but more variable FCD
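The two-stage idea in this record — segment dark vessels by thresholding, then keep only those whose gray-levels fluctuate over time (i.e., with flow) — can be sketched as follows. The thresholds are illustrative, not the trained values from the paper, and the FCD proxy here is a pixel fraction rather than a true vessel-length-per-area density:

```python
import numpy as np

def segment_vessels(frame, threshold):
    """Vessels appear dark in SDF imaging: keep pixels below threshold."""
    return frame < threshold

def perfused_mask(frames, vessel_threshold, flow_threshold):
    """Vessel pixels whose gray-level varies over time (moving cells)."""
    stack = np.asarray(frames, dtype=float)
    vessels = segment_vessels(stack.mean(axis=0), vessel_threshold)
    temporal_std = stack.std(axis=0)
    return vessels & (temporal_std > flow_threshold)

def fcd_proxy(frames, vessel_threshold=100.0, flow_threshold=2.0):
    """Fraction of the field occupied by perfused vessel pixels."""
    return float(perfused_mask(frames, vessel_threshold, flow_threshold).mean())
```

On a synthetic sequence with one flickering (perfused) vessel column and one constant (stalled) vessel column, only the flickering column survives the flow test.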

  9. Automated grading of left ventricular segmental wall motion by an artificial neural network using color kinesis images

    Directory of Open Access Journals (Sweden)

    L.O. Murta Jr.

    2006-01-01

Full Text Available The present study describes an auxiliary tool in the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested in the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with measured area under the curve of 0.975. An excellent correlation was also obtained for global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.

  10. Automated grading of left ventricular segmental wall motion by an artificial neural network using color kinesis images.

    Science.gov (United States)

    Murta, L O; Ruiz, E E S; Pazin-Filho, A; Schmidt, A; Almeida-Filho, O C; Simões, M V; Marin-Neto, J A; Maciel, B C

    2006-01-01

    The present study describes an auxiliary tool in the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested in the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with measured area under the curve of 0.975. An excellent correlation was also obtained for global LV segmental WM index by expert and ANN analysis (R2 = 0.99). In conclusion, ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.

  11. An Innovative Requirements Solution: Combining Six Sigma KJ Language Data Analysis with Automated Content Analysis

    Science.gov (United States)

    2009-03-01

2008 Carnegie Mellon University. An Innovative Requirements Solution: Combining Six Sigma KJ Language Data Analysis with Automated Content Analysis.

  12. Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction

    Science.gov (United States)

    Holan, Scott H.; Viator, John A.

    2007-02-01

Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves, induced by irradiating a sample with laser light, are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used the level-independent universal threshold developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software; reconstruction from denoised signals improved image quality by 21%, as measured by a relative 2-norm difference scheme.
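The universal-threshold scheme from this record is compact enough to sketch: estimate the noise level from the median absolute detail coefficient, threshold at sigma*sqrt(2 ln n), soft-threshold, and invert. A single-level Haar transform stands in here for the wavelet family and decomposition depth actually used in the paper:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_universal(signal):
    """Soft-threshold detail coefficients at the Donoho-Johnstone
    universal level sigma*sqrt(2 ln n), with sigma from the detail MAD."""
    a, d = haar_dwt(signal)
    sigma = np.median(np.abs(d)) / 0.6745        # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_idwt(a, d)
```

On a noisy sinusoid, the denoised signal has lower mean squared error against the clean signal than the noisy input does.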

  13. Automation of Large-scale Computer Cluster Monitoring Information Analysis

    Science.gov (United States)

    Magradze, Erekle; Nadal, Jordi; Quadt, Arnulf; Kawamura, Gen; Musheghyan, Haykuhi

    2015-12-01

High-throughput computing platforms consist of a complex infrastructure and provide a number of services prone to failures. To mitigate the impact of failures on the quality of the provided services, constant monitoring and timely reaction are required, which is impossible without automation of the system administration processes. This paper introduces a way to automate the analysis of monitoring information, providing long- and short-term predictions of the service response time (SRT) for mass storage and batch systems and identifying the status of a service at a given time. The approach for the SRT predictions is based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). An evaluation of the approaches is performed on real monitoring data from the WLCG Tier 2 center GoeGrid. Tenfold cross-validation results demonstrate the high efficiency of both approaches in comparison to known methods.

  14. VirtualShave: automated hair removal from digital dermatoscopic images.

    Science.gov (United States)

    Fiorese, M; Peserico, E; Silletti, A

    2011-01-01

    VirtualShave is a novel tool to remove hair from digital dermatoscopic images. First, individual hairs are identified using a top-hat filter followed by morphological postprocessing. Then, they are replaced through PDE-based inpainting with an estimate of the underlying occluded skin. VirtualShave's performance is comparable to that of a human operator removing hair manually, and the resulting images are almost indistinguishable from those of hair-free skin.
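The hair-identification step of this record — a top-hat filter that responds to thin dark structures — is easy to sketch with pure-numpy morphology. The structuring-element size and threshold below are illustrative; VirtualShave's morphological postprocessing and PDE-based inpainting are not shown:

```python
import numpy as np

def dilate(img, k):
    """Grayscale dilation (local max) with a (2k+1)x(2k+1) square."""
    padded = np.pad(img, k, mode='edge')
    h, w = img.shape
    stack = [padded[dy:dy + h, dx:dx + w]
             for dy in range(2 * k + 1) for dx in range(2 * k + 1)]
    return np.max(stack, axis=0)

def erode(img, k):
    """Grayscale erosion (local min) with a (2k+1)x(2k+1) square."""
    padded = np.pad(img, k, mode='edge')
    h, w = img.shape
    stack = [padded[dy:dy + h, dx:dx + w]
             for dy in range(2 * k + 1) for dx in range(2 * k + 1)]
    return np.min(stack, axis=0)

def black_tophat(img, k=2):
    """closing(img) - img: bright response on thin dark structures."""
    closing = erode(dilate(img, k), k)
    return closing - img

def hair_mask(img, k=2, threshold=20.0):
    """Binary mask of candidate hair pixels."""
    return black_tophat(np.asarray(img, dtype=float), k) > threshold
```

A one-pixel-wide dark line on a bright background (a crude hair) is picked up in full, while the background stays clean.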

  15. Computer Vision-Based Image Analysis of Bacteria.

    Science.gov (United States)

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

Microscopy is an essential tool for studying bacteria, but today it is mostly used in a qualitative or, at best, semi-quantitative manner, often involving time-consuming manual analysis. This makes it difficult to assess the importance of individual bacterial phenotypes, especially when differences in features such as shape, size, or signal intensity are too subtle for the human eye to discern. With computer vision-based image analysis - where computer algorithms interpret image data - it is possible to achieve objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation, and analysis that can be implemented relatively easily for use in bacterial research.
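As a minimal example of "algorithms interpreting image data", here is thresholding followed by 4-connected component labeling — the segmentation and counting core of many bacterial analysis pipelines. Real pipelines add preprocessing, object splitting (e.g., watershed), and per-object feature extraction; the threshold below is an illustrative fixed value:

```python
import numpy as np

def count_objects(image, threshold):
    """Count 4-connected foreground components above `threshold`.

    Returns (count, label_image); background pixels get label 0.
    """
    mask = np.asarray(image) > threshold
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue                       # already part of a component
        current += 1
        stack = [(y, x)]
        while stack:                       # iterative flood fill
            cy, cx = stack.pop()
            if (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]
                    and mask[cy, cx] and not labels[cy, cx]):
                labels[cy, cx] = current
                stack.extend([(cy + 1, cx), (cy - 1, cx),
                              (cy, cx + 1), (cy, cx - 1)])
    return current, labels
```

Two separate bright blobs on a dark background are counted as two distinct objects.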

  16. Basic research planning in mathematical pattern recognition and image analysis

    Science.gov (United States)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object/scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization, computer architectures and parallel processing, and the applicability of "expert systems" to interactive analysis.

  17. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    Science.gov (United States)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantages of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  18. Automated Contour Detection for Intravascular Ultrasound Image Sequences Based on Fast Active Contour Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; WANG Hui-nan

    2006-01-01

Intravascular ultrasound can provide high-resolution real-time cross-sectional images of lumen, plaque and tissue. Traditionally, the luminal border and medial-adventitial border are traced manually. This process is extremely time-consuming and subject to large inter-observer differences. In this paper, a new automated contour detection method is introduced based on a fast active contour model. Experimental results showed that lumen and vessel area measurements after automated detection were in good agreement with manual tracings, with high correlation coefficients (0.94 and 0.95, respectively) and small systematic differences (-0.32 and 0.56, respectively). The method can therefore serve as a reliable and accurate diagnostic tool.
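Agreement figures like those quoted (a correlation coefficient plus a systematic difference between automated and manual tracings) can be reproduced with a few lines of NumPy; the area values below are invented for illustration, not the study's data.

```python
import numpy as np

def agreement(auto, manual):
    """Pearson correlation and mean (systematic) difference between paired measurements."""
    auto = np.asarray(auto, dtype=float)
    manual = np.asarray(manual, dtype=float)
    r = np.corrcoef(auto, manual)[0, 1]    # Pearson correlation coefficient
    bias = float(np.mean(auto - manual))   # systematic difference (bias)
    return r, bias

# Hypothetical lumen-area measurements (mm^2): automated vs. manual tracing
auto = [10.2, 12.1, 9.8, 14.5, 11.0]
manual = [10.0, 12.4, 9.5, 14.8, 11.2]
r, bias = agreement(auto, manual)
```

A Bland-Altman plot of `auto - manual` against the pairwise means is the usual companion to these two numbers.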

  19. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results through both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  20. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

Full Text Available Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) for de novo feature prediction was obtained. These results suggest that high-dimensional profiling may advance the

  1. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source: NDRI). We first segmented B-scans using a graph-searching method, estimating the boundary of each region by minimizing a cost function consisting of intensity, gradient, and contour-smoothness terms. Features including texture measures, optical properties, and higher-order moment statistics were then extracted. We used a statistical model, the relevance vector machine, trained on these features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The validation datasets were manually segmented and classified by two investigators who were blind to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from those of other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.
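The attenuation-coefficient feature mentioned above is commonly estimated from an OCT A-line by a log-linear fit under a single-scattering assumption; the sketch below uses a synthetic, noise-free A-line and is illustrative, not the authors' implementation.

```python
import numpy as np

def attenuation_coefficient(depth_mm, intensity):
    """Estimate an attenuation coefficient (1/mm) by a log-linear fit,
    assuming single-scattering decay I(z) ~ I0 * exp(-2*mu*z)."""
    slope, _ = np.polyfit(depth_mm, np.log(intensity), 1)
    return -slope / 2.0   # two-way path: slope = -2*mu

# Synthetic A-line generated with mu = 2.0 mm^-1
z = np.linspace(0.1, 1.0, 50)
intensity = 3.0 * np.exp(-2.0 * 2.0 * z)
mu = attenuation_coefficient(z, intensity)   # recovers ~2.0 mm^-1
```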

  2. Automated Classification of Glaucoma Images by Wavelet Energy Features

    Directory of Open Access Journals (Sweden)

    N.Annu

    2013-04-01

Full Text Available Glaucoma is the second leading cause of blindness worldwide. As glaucoma progresses, more optic nerve tissue is lost and the optic cup grows, which leads to vision loss. This paper describes a system that could be used by non-experts to filter out cases of patients not affected by the disease. This work proposes glaucomatous image classification using texture features within images and efficient glaucoma classification based on a Probabilistic Neural Network (PNN). Energy distribution over wavelet sub-bands is used to compute these texture features. Wavelet features were obtained from the Daubechies (db3), Symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. The technique extracts energy signatures using the 2-D discrete wavelet transform, and the energy obtained from the detail coefficients can be used to distinguish between normal and glaucomatous images. We observed an accuracy of around 95%, which demonstrates the effectiveness of these methods.
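Wavelet energy features of the kind described can be sketched with a single-level 2-D Haar transform; the record's filters are db3/sym3/biorthogonal, but Haar is chosen here only to keep the example dependency-free.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def subband_energies(img):
    """Average energy of each detail sub-band, used as texture features."""
    _, LH, HL, HH = haar_dwt2(np.asarray(img, dtype=float))
    return {k: float(np.mean(v ** 2)) for k, v in (("LH", LH), ("HL", HL), ("HH", HH))}

# A vertically striped texture concentrates energy in the horizontal-detail band
stripes = np.tile([0.0, 1.0], (8, 4))   # 8x8 image, columns alternate 0/1
feats = subband_energies(stripes)
```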

  3. System and method for automated object detection in an image

    Energy Technology Data Exchange (ETDEWEB)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  4. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm

    of time making it difficult to resolve dynamic processes or unstable structures. Tools that assist to get the maximum of information out of recorded images are therefore greatly appreciated. In order to get the most accurate results out of the structure detection, we have optimized the imaging conditions...... used for the FEI Titan ETEM with a monochromator and an objective-lens Cs-corrector. To reduce the knock-on damage of the carbon atoms in the graphene structure, the microscope was operated at 80kV. As this strongly increases the influence of the chromatic aberration of the lenses, the energy spread...

  5. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics, focusing on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  6. An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments

    Science.gov (United States)

    Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.

    2015-01-01

The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through a joint effort of NASA and the USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike calipers, the imaging system obtains non-contact measurements, avoiding damage to delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of a debris fragment: it can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured, and a custom-developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
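One simplified reading of "characteristic length" is the average of a fragment's extents along its three principal axes. The sketch below applies that definition to a hypothetical box-shaped point cloud; it is a stand-in for the standard-breakup-model definition, not the DebriSat pipeline itself.

```python
import numpy as np

def characteristic_length(points):
    """Average of the extents along the three principal axes of a 3D point cloud."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                      # centre the cloud
    _, _, vt = np.linalg.svd(pts, full_matrices=False)  # principal axes via SVD
    proj = pts @ vt.T                                 # coordinates in that frame
    extents = proj.max(axis=0) - proj.min(axis=0)     # size along each axis
    return float(extents.mean())

# A hypothetical box-shaped fragment, 4 x 2 x 1 units: expected (4+2+1)/3
corners = np.array([[x, y, z] for x in (0, 4) for y in (0, 2) for z in (0, 1)], float)
L_c = characteristic_length(corners)
```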

  7. Computer-assisted tree taxonomy by automated image recognition

    NARCIS (Netherlands)

    Pauwels, E.J.; Zeeuw, P.M.de; Ranguelova, E.B.

    2009-01-01

    We present an algorithm that performs image-based queries within the domain of tree taxonomy. As such, it serves as an example relevant to many other potential applications within the field of biodiversity and photo-identification. Unsupervised matching results are produced through a chain of comput

  8. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information for the identification process. Results from both synthesized and solar images will be presented.

  9. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio by between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). One or three Calypso® beacons were embedded in the tumour to achieve better contrast during MV irradiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.
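Template matching of the kind used for marker detection can be sketched as brute-force normalized cross-correlation; the beacon shape, frame size, and noise level below are invented for illustration and bear no relation to the authors' detector.

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation; returns ((row, col), score)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Synthetic noisy frame with a bright blob ('beacon') embedded at (12, 7)
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.1, (32, 32))
beacon = np.outer([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]).astype(float)
frame[12:17, 7:12] += beacon
pos, score = match_template(frame, beacon)
```

Production trackers use FFT-based correlation rather than this O(n^2 m^2) loop, but the score is the same.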

  10. Automated Detection of Contaminated Radar Image Pixels in Mountain Areas

    Institute of Scientific and Technical Information of China (English)

    LIU Liping; Qin XU; Pengfei ZHANG; Shun LIU

    2008-01-01

In mountain areas, radar observations are often contaminated (1) by echoes from high-speed moving vehicles and (2) by point-wise ground clutter under either normal propagation (NP) or anomalous propagation (AP) conditions. Level II data are collected from the KMTX (Salt Lake City, Utah) radar to analyze these two types of contamination in the mountain area around the Great Salt Lake. Human experts provide the "ground truth" for possible contamination of either type on each individual pixel. Common features are then extracted for contaminated pixels of each type. For example, pixels contaminated by echoes from high-speed moving vehicles are characterized by large radial velocity and spectrum width. Echoes from a moving train tend to have larger velocity and reflectivity but smaller spectrum width than those from moving vehicles on highways. These contaminated pixels are only seen in areas of large terrain gradient (in the radial direction along the radar beam). The same is true for the second type of contamination, point-wise ground clutter. Six quality control (QC) parameters are selected to quantify the extracted features. Histograms are computed for each QC parameter and grouped for contaminated pixels of each type and also for non-contaminated pixels. Based on the computed histograms, a fuzzy logic algorithm is developed for automated detection of contaminated pixels. The algorithm is tested with KMTX radar data under different (clear and rainy) weather conditions.
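A fuzzy-logic detector like the one described combines per-parameter membership values into a single interest score. The membership breakpoints below are illustrative placeholders, not values derived from the KMTX histograms.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def vehicle_echo_interest(radial_velocity, spectrum_width, terrain_gradient):
    """Aggregate membership for the 'moving-vehicle echo' class.
    Breakpoints are hypothetical, chosen only for illustration."""
    memberships = [
        trapezoid(abs(radial_velocity), 15, 25, 60, 80),    # fast movers (m/s)
        trapezoid(spectrum_width, 2, 4, 10, 15),            # broad spectrum (m/s)
        trapezoid(terrain_gradient, 0.05, 0.15, 1.0, 1.5),  # steep terrain
    ]
    return sum(memberships) / len(memberships)  # simple average aggregation

flag = vehicle_echo_interest(radial_velocity=30, spectrum_width=5, terrain_gradient=0.3)
```

A pixel is flagged when the aggregated score exceeds a chosen threshold (e.g. 0.5).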

  11. Automated segmentation of regions of interest in whole slide skin histopathological images.

    Science.gov (United States)

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2015-01-01

    In the diagnosis of skin melanoma by analyzing histopathological images, the epidermis and epidermis-dermis junctional areas are regions of interest as they provide the most important histologic diagnosis features. This paper presents an automated technique for segmenting epidermis and dermis regions from whole slide skin histopathological images. The proposed technique first performs epidermis segmentation using a thresholding and thickness measurement based method. The dermis area is then segmented based on a predefined depth of segmentation from the epidermis outer boundary. Experimental results on 66 different skin images show that the proposed technique can robustly segment regions of interest as desired.

  12. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    Science.gov (United States)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
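The overlap ratio used for the quantitative tissue-volume comparison is typically a Jaccard-style measure between binary masks; a minimal sketch on two synthetic 3D volumes:

```python
import numpy as np

def overlap_ratio(a, b):
    """Jaccard overlap (intersection over union) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

# Two 4x4x4 cubes in a 10x10x10 volume, offset by one voxel along one axis
vol1 = np.zeros((10, 10, 10), dtype=bool); vol1[2:6, 2:6, 2:6] = True
vol2 = np.zeros((10, 10, 10), dtype=bool); vol2[3:7, 2:6, 2:6] = True
j = overlap_ratio(vol1, vol2)   # intersection 48, union 80 -> 0.6
```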

  13. Automated Detection and Removal of Cloud Shadows on HICO Images

    Science.gov (United States)

    2011-01-01


  14. Automated quantification and integrative analysis of 2D and 3D mitochondrial shape and network properties.

    Directory of Open Access Journals (Sweden)

    Julie Nikolaisen

Full Text Available Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions and in (adaptation to) endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. "mitochondrial dynamics") are linked to cellular (patho)physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology, they are not applicable to thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescent protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate

  15. Comparison of Automated Image-Based Grain Sizing to Standard Pebble Count Methods

    Science.gov (United States)

    Strom, K. B.

    2009-12-01

This study explores the use of an automated, image-based method for characterizing grain-size distributions (GSDs) of exposed, open-framework gravel beds. This was done by comparing the GSDs measured with an image-based method to distributions obtained with two pebble-count methods. Selection of grains for the two pebble-count methods was carried out using a gridded sampling frame and the heel-to-toe Wolman walk method at six field sites. At each site, 500-particle pebble-count samples were collected with each of the two pebble-count methods and digital images were systematically collected over the same sampling area. For the methods used, the pebble counts collected with the gridded sampling frame were assumed to be the most accurate representations of the true grain-size population, and results from the image-based method were compared to the grid-derived GSDs for accuracy estimates; comparisons between the grid and Wolman walk methods were conducted to give an indication of possible variation between commonly used methods at each particular field site. Comparisons of grain size were made at two spatial scales. At the larger scale, results from the image-based method were integrated over the sampling area required to collect the 500-particle pebble-count samples. At the smaller sampling scale, the image-derived GSDs were compared to those from 100-particle, pebble-count samples obtained with the gridded sampling frame. The comparisons show that the image-based method performed reasonably well at five of the six study sites. For those five sites, the image-based method slightly underestimated all grain-size percentiles relative to the pebble counts collected with the gridded sampling frame. The average bias for Ψ5, Ψ50, and Ψ95 between the image and grid count methods at the larger sampling scale was 0.07Ψ, 0.04Ψ, and 0.19Ψ respectively; at the smaller sampling scale the average bias was 0.004Ψ, 0.03Ψ, and 0.18Ψ respectively. The average bias between the
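The Ψ (psi) scale used for the bias figures is log2 of the grain diameter in millimetres, so percentile bias between two methods can be computed directly; the sample diameters below are invented for illustration.

```python
import numpy as np

def psi_percentiles(diams_mm, percentiles=(5, 50, 95)):
    """Grain-size percentiles on the psi scale, psi = log2(diameter in mm)."""
    psi = np.log2(np.asarray(diams_mm, dtype=float))
    return np.percentile(psi, percentiles)

# Hypothetical samples: an image-based method reading uniformly 3% finer
grid = np.array([8, 11, 16, 22, 32, 45, 64, 90, 128], dtype=float)
image = grid * 0.97
bias = psi_percentiles(image) - psi_percentiles(grid)   # negative => underestimate
```

A uniform multiplicative size error shows up as a constant additive offset on the psi scale, which is why bias is conveniently reported in Ψ units.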

  16. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing ... to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  17. IHC Profiler: An Open Source Plugin for the Quantitative Evaluation and Automated Scoring of Immunohistochemistry Images of Human Tissue Samples

    Science.gov (United States)

    Malhotra, Renu; De, Abhijit

    2014-01-01

In anatomic pathology, immunohistochemistry (IHC) serves as a diagnostic and prognostic method for identification of disease markers in tissue samples that directly influences classification and grading of the disease, thereby influencing patient management. However, even today in most of the world, pathological analysis of tissue samples remains a time-consuming and subjective procedure, wherein the intensity of antibody staining is manually judged and the scoring decision is thus directly influenced by visual bias. This motivated us to design a simple automated digital IHC image analysis algorithm for an unbiased, quantitative assessment of antibody staining intensity in tissue sections. As a first step, we adopted the spectral deconvolution method of DAB/hematoxylin color spectra, using optimized optical density vectors of the color deconvolution plugin for proper separation of the DAB color spectra. The DAB-stained image is then displayed in a new window, where it undergoes pixel-by-pixel analysis and the full profile is displayed along with its scoring decision. Based on the mathematical formula conceptualized, the algorithm was thoroughly tested by analyzing scores assigned to thousands (n = 1703) of DAB-stained IHC images, including sample images taken from the Human Protein Atlas web resource. The IHC Profiler plugin developed is compatible with the open-source digital image analysis software ImageJ; it creates a pixel-by-pixel analysis profile of a digital IHC image and assigns a score in a four-tier system. A comparison study between manual pathological analysis and IHC Profiler showed a match of 88.6% (P<0.0001, CI = 95%). This new tool for clinical histopathological sample analysis can be adopted globally for scoring most protein targets where the marker protein expression is of cytoplasmic and/or nuclear type. We foresee that this method will minimize the problem of inter-observer variations across labs and further help in
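The DAB/hematoxylin separation step follows the Ruifrok-Johnston color-deconvolution idea: convert RGB to optical density and unmix with a stain matrix. The stain vectors below are illustrative H-DAB-like values, not necessarily those used by the plugin.

```python
import numpy as np

# Optical-density stain vectors (rows: hematoxylin, DAB, residual),
# illustrative values in the spirit of Ruifrok & Johnston's method
M = np.array([[0.650, 0.704, 0.286],
              [0.269, 0.568, 0.778],
              [0.764, 0.001, 0.645]])
M = M / np.linalg.norm(M, axis=1, keepdims=True)  # normalise each stain vector
Minv = np.linalg.inv(M)

def stain_concentrations(rgb):
    """Per-pixel stain densities from RGB values in [0, 255] (rows = pixels)."""
    od = -np.log10((np.asarray(rgb, dtype=float) + 1.0) / 256.0)  # optical density
    return od @ Minv   # columns: hematoxylin, DAB, residual densities

# A pixel made of pure 'DAB' at unit density should unmix to (0, 1, 0)
pure_dab_od = M[1]
rgb = 256.0 * 10.0 ** (-pure_dab_od) - 1.0
c = stain_concentrations(rgb[None, :])[0]
```

The DAB-density channel is then binned (e.g. into a four-tier score) per pixel.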

  18. IHC Profiler: an open source plugin for the quantitative evaluation and automated scoring of immunohistochemistry images of human tissue samples.

    Directory of Open Access Journals (Sweden)

    Frency Varghese

Full Text Available In anatomic pathology, immunohistochemistry (IHC) serves as a diagnostic and prognostic method for identification of disease markers in tissue samples that directly influences classification and grading of the disease, thereby influencing patient management. However, even today in most of the world, pathological analysis of tissue samples remains a time-consuming and subjective procedure, wherein the intensity of antibody staining is manually judged and the scoring decision is thus directly influenced by visual bias. This motivated us to design a simple automated digital IHC image analysis algorithm for an unbiased, quantitative assessment of antibody staining intensity in tissue sections. As a first step, we adopted the spectral deconvolution method of DAB/hematoxylin color spectra, using optimized optical density vectors of the color deconvolution plugin for proper separation of the DAB color spectra. The DAB-stained image is then displayed in a new window, where it undergoes pixel-by-pixel analysis and the full profile is displayed along with its scoring decision. Based on the mathematical formula conceptualized, the algorithm was thoroughly tested by analyzing scores assigned to thousands (n = 1703) of DAB-stained IHC images, including sample images taken from the Human Protein Atlas web resource. The IHC Profiler plugin developed is compatible with the open-source digital image analysis software ImageJ; it creates a pixel-by-pixel analysis profile of a digital IHC image and assigns a score in a four-tier system. A comparison study between manual pathological analysis and IHC Profiler showed a match of 88.6% (P<0.0001, CI = 95%). This new tool for clinical histopathological sample analysis can be adopted globally for scoring most protein targets where the marker protein expression is of cytoplasmic and/or nuclear type. We foresee that this method will minimize the problem of inter-observer variations across labs and

  19. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

Full Text Available Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of its functional integrity. pIONM® is currently gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of large amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domains, as well as a quantitative evaluation by means of features calculated from the time and frequency domains. The produced plots are summarized automatically in a PowerPoint presentation. The calculated features are filled into a standardized Excel sheet, ready for statistical analysis.
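Time- and frequency-domain feature extraction of the kind described can be sketched with NumPy's FFT; the feature set and the 12 Hz test tone below are illustrative, not the pIONM® routine.

```python
import numpy as np

def signal_features(x, fs):
    """A few time- and frequency-domain features of one recorded channel."""
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))      # magnitude spectrum (DC removed)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)       # frequency axis in Hz
    return {
        "rms": float(np.sqrt(np.mean(x ** 2))),
        "peak_to_peak": float(x.max() - x.min()),
        "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),
    }

fs = 1000.0                                # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
sig = 0.5 * np.sin(2 * np.pi * 12 * t)     # a 12 Hz test tone
feats = signal_features(sig, fs)
```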

  20. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Time gain compensation (TGC) is essential to ensure optimal image quality in clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC compensation becomes challenging. This paper presents...... tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences, each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists...
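    A naive depth-gain estimate in the spirit of TGC can be sketched as follows. The real AHTGC algorithm additionally segments fluid collections and accounts for tissue class, which this toy version omits.

```python
import numpy as np

def estimate_tgc(env):
    """Estimate a depth-dependent gain curve (in dB) from an envelope image
    with rows = depth and columns = lateral position."""
    mean_db = 20 * np.log10(np.mean(env, axis=1) + 1e-12)
    # Gain that flattens the mean depth profile to its shallowest value.
    return mean_db[0] - mean_db

def apply_tgc(env, gain_db):
    """Apply a per-depth gain curve to the envelope image."""
    return env * (10 ** (gain_db / 20))[:, None]
```

Applying the estimated gain to a uniformly attenuating phantom restores a flat mean brightness over depth.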

  1. Automated Image-Based Procedures for Adaptive Radiotherapy

    DEFF Research Database (Denmark)

    Bjerre, Troels

    -tissue complication probability (NTCP), margins used to account for interfraction and intrafraction anatomical changes and motion need to be reduced. This can only be achieved through proper treatment plan adaptations and intrafraction motion management. This thesis describes methods in support of image...... to encourage bone rigidity and local tissue volume change only in the gross tumour volume and the lungs. This is highly relevant in adaptive radiotherapy when modelling significant tumour volume changes. - It is described how cone beam CT reconstruction can be modelled as a deformation of a planning CT scan...

  2. Automated Hierarchical Time Gain Compensation for In Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo;

    2015-01-01

    in terms of image quality. Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive ( p-value: 2.34 × 10−13) and estimated to be 1.01 (95% CI: 0.85; 1...... tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC were visualized side by side and evaluated by two radiologists...

  3. Use of solid film highlighter in automation of D sight image interpretation

    Science.gov (United States)

    Forsyth, David S.; Komorowski, Jerzy P.; Gould, Ronald W.

    1998-03-01

    Many studies have shown inspector variability to be a crucial parameter in nondestructive evaluation (NDE) reliability. It is therefore desirable to automate the decision-making process in NDE as much as possible. Automation of inspection data handling and interpretation will also enable the use of the data fusion algorithms currently being researched at IAR, which aim to increase inspection reliability by combining different NDE modes. Enhanced visual inspection techniques such as D Sight make it possible to rapidly inspect lap splice joints. IAR's NDI analysis software has been used to perform analysis and feature extraction on D Sight inspections. Different metrics suitable for automated interpretation have been developed and tested on inspections of actual service-retired aircraft specimens using D Sight with solid film highlighter.

  4. Medical Image Analysis Facility

    Science.gov (United States)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to the diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and Pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on the experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  5. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    Directory of Open Access Journals (Sweden)

    Phlypo Ronald

    2010-01-01

    Full Text Available We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
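    The visual-attention stage can be approximated by a simple center-surround saliency map. This NumPy sketch is a crude stand-in for the attention model actually used in the paper, applied to one color channel.

```python
import numpy as np

def box_blur(img, r):
    """Box filter of radius r via a summed-area table (no SciPy needed)."""
    p = np.pad(img, r, mode="edge")              # replicate edges
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(0).cumsum(1)            # integral image
    k = 2 * r + 1
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / k ** 2

def saliency(channel, r_small=2, r_large=8):
    """Center-surround saliency: fine-scale vs. coarse-scale difference."""
    return np.abs(box_blur(channel, r_small) - box_blur(channel, r_large))
```

Thresholding the saliency map gives candidate object locations that could seed the active contour initialization described above.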

  6. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and Otsu thresholding; automated recognition was performed with PCA-SVM using the texture features of the prostatic calculi. The SVM classifier showed an average processing time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visualized features. This method is therefore effective for the automated recognition of prostatic calculi.
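    The dimensionality-reduction half of the PCA-SVM scheme can be sketched with an SVD-based PCA; an SVM (for example scikit-learn's `SVC`, not shown here) would then be trained on the reduced texture-feature vectors.

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project feature vectors (rows of X) onto their first principal
    components, computed from the SVD of the centered data."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]          # rows = principal directions
    return Xc @ components.T, components, mu
```

Centering with the training-set mean and reusing `components` on test data keeps train and test features in the same reduced space.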

  7. Analysis of automated highway system risks and uncertainties. Volume 5

    Energy Technology Data Exchange (ETDEWEB)

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.

  8. Automated Digital Analysis Of Holographic Interferograms Of Pure Translations

    Science.gov (United States)

    Choudry, A.; Frankena, H. J.; van Beek, J. W.

    1983-10-01

    Holographic interferometry is a versatile technique for non-tactile measurement of changes in a wide variety of physical variables such as temperature, strain, and position. It has great potential to become an important metrologic technique in industrial applications. For holographic interferometry to become more attractive in industrial practice, the problem of quantitatively analyzing the patterns to extract reliable values of the relevant parameters has to be addressed. In an attempt to calibrate the technique of holographic interferometry and ascertain the reliability of the subsequent digital analysis, we have chosen precisely known translations as a basis. Holographic interferograms taken from these are analysed manually and by digital techniques specially developed for such patterns. The results are promising enough to indicate the feasibility of automated digital analysis for determining translations within an acceptable accuracy. Some details of the evaluation techniques, along with a brief discussion of the preliminary results, are presented.

  9. Automated semantic indexing of imaging reports to support retrieval of medical images in the multimedia electronic medical record.

    Science.gov (United States)

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A; Mailhot, M

    1999-12-01

    This paper describes preliminary work evaluating automated semantic indexing of radiology imaging reports to represent images stored in the Image Engine multimedia medical record system at the University of Pittsburgh Medical Center. The authors used the SAPHIRE indexing system to automatically identify important biomedical concepts within radiology reports and represent these concepts with terms from the 1998 edition of the U.S. National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. This automated UMLS indexing was then compared with manual UMLS indexing of the same reports. Human indexing identified appropriate UMLS Metathesaurus descriptors for 81% of the important biomedical concepts contained in the report set. SAPHIRE automatically identified UMLS Metathesaurus descriptors for 64% of the important biomedical concepts contained in the report set. The overall conclusions of this pilot study were that the UMLS metathesaurus provided adequate coverage of the majority of the important concepts contained within the radiology report test set and that SAPHIRE could automatically identify and translate almost two thirds of these concepts into appropriate UMLS descriptors. Further work is required to improve both the recall and precision of this automated concept extraction process.

  10. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A., E-mail: willi.kalender@imp.uni-erlangen.de [Institute of Medical Physics, University of Erlangen-Nürnberg, Henkestraße 91, 91052 Erlangen, Germany and CT Imaging GmbH, 91052 Erlangen (Germany)

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. 
Quantitative evaluation of system performance over time by comparison to previous examinations was also
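    The MTF-from-edge-profile computation mentioned above follows a standard recipe: differentiate the edge-spread function into a line-spread function, window it, and take the normalized Fourier magnitude. A sketch:

```python
import numpy as np

def mtf_from_edge(esf, pixel_mm=1.0):
    """Modulation transfer function from an oversampled edge-spread
    function sampled at pixel_mm spacing."""
    lsf = np.gradient(np.asarray(esf, float))      # line-spread function
    lsf *= np.hanning(lsf.size)                    # suppress tail noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                  # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # cycles per mm
    return freqs, mtf
```

For a phantom, the ESF would be extracted from profiles across the aluminum sphere edge; the 10% MTF point is a common resolution summary.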

  11. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    OpenAIRE

    Xiang Zhu; Dianwen Zhang

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetim...
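    The core damped Gauss-Newton update behind any Levenberg-Marquardt fitter can be illustrated serially for a single-pixel exponential-decay model; GPU-LMFit's contribution is running many such fits in parallel on the GPU, which this sketch does not attempt.

```python
import numpy as np

def lm_fit(t, y, p0, n_iter=50, lam0=1e-3):
    """Minimal Levenberg-Marquardt fit of y ~ A * exp(-k * t)."""
    p = np.asarray(p0, float)
    lam = lam0

    def resid(p):
        return y - p[0] * np.exp(-p[1] * t)

    for _ in range(n_iter):
        A, k = p
        e = np.exp(-k * t)
        J = np.column_stack([-e, A * t * e])    # d(resid)/d(A, k)
        r = resid(p)
        H = J.T @ J
        g = J.T @ r
        # Damped normal equations (Marquardt diagonal scaling).
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        if np.sum(resid(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5        # accept, relax damping
        else:
            lam *= 2.0                          # reject, increase damping
    return p
```

In a parametric-imaging setting, the same update would be applied independently per pixel, which is what makes the problem embarrassingly parallel.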

  12. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  13. Automated Formosat Image Processing System for Rapid Response to International Disasters

    Science.gov (United States)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

    FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at an altitude of 891 kilometers. With its daily revisit feature, the 2-m panchromatic and 8-m multi-spectral images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted by different institutions in Taiwan in responding to international disasters. The institutes involved include its space agency, the National Space Organization (NSPO); the Center for Satellite Remote Sensing Research of National Central University; the GIS Center of Feng-Chia University; and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking of satellite operation, and downlink to ground stations, through image processing including data injection and ortho-rectification, to delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and image download. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, the recent response supporting Sentinel Asia during the Nepal earthquake, is presented herein.

  14. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    Science.gov (United States)

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system that uses complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of phase angles found in each one-dimensional signal at each level of CWT decomposition are relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
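    The feature construction described here (the norm of CWT phase angles per scale, for row- and column-concatenated signals) can be sketched with a NumPy-only complex Morlet transform; in practice a wavelet package such as PyWavelets would likely be used instead.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Complex Morlet CWT computed by multiplication in the FFT domain."""
    n = signal.size
    w = 2 * np.pi * np.fft.fftfreq(n)
    sig_hat = np.fft.fft(signal)
    coefs = np.empty((len(scales), n), complex)
    for i, s in enumerate(scales):
        # Analytic Morlet wavelet spectrum, supported on positive freqs.
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * w - w0) ** 2) * (w > 0)
        coefs[i] = np.fft.ifft(sig_hat * np.conj(psi_hat)) * np.sqrt(s)
    return coefs

def phase_norm_features(image, scales=(2, 4, 8)):
    """Norm of CWT phase angles per scale, for the row-concatenated and
    column-concatenated one-dimensional signals of an image."""
    rows = image.ravel().astype(float)
    cols = image.T.ravel().astype(float)
    feats = []
    for sig in (rows, cols):
        angles = np.angle(morlet_cwt(sig, scales))
        feats.extend(np.linalg.norm(angles, axis=1))
    return np.array(feats)
```

The resulting fixed-length feature vector is what an SVM would consume for normal-versus-abnormal classification.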

  15. Trends in biomedical informatics: automated topic analysis of JAMIA articles.

    Science.gov (United States)

    Han, Dong; Wang, Shuang; Jiang, Chao; Jiang, Xiaoqian; Kim, Hyeon-Eui; Sun, Jimeng; Ohno-Machado, Lucila

    2015-11-01

    Biomedical Informatics is a growing interdisciplinary field in which research topics and citation trends have been evolving rapidly in recent years. To analyze these data in a fast, reproducible manner, automation of certain processes is needed. JAMIA is a "generalist" journal for biomedical informatics, and its articles reflect the wide range of topics in the field. In this study, we retrieved Medical Subject Headings (MeSH) terms and citations of JAMIA articles published between 2009 and 2014. We used tensors (i.e., multidimensional arrays) to represent the interaction among topics, time, and citations, and applied tensor decomposition to automate the analysis. The trends represented by the tensors were then carefully interpreted, and the results were compared with previous findings based on manual topic analysis. A list of the most cited JAMIA articles, their topics, and publication trends over recent years is presented. The analyses confirmed previous studies and showed that, from 2012 to 2014, the number of articles related to the MeSH terms Methods, Organization & Administration, and Algorithms increased significantly in both publications and citations. Citation trends varied widely by topic, with Natural Language Processing receiving a large number of citations in particular years, and Medical Record Systems, Computerized remaining a very popular topic in all years.
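    A rank-1 CP decomposition by alternating least squares illustrates the tensor machinery on a toy topic × time × citation array; the study's actual analysis would use higher-rank decompositions from a dedicated tensor library.

```python
import numpy as np

def rank1_cp(T, n_iter=30):
    """Rank-1 CP approximation T ~ a (outer) b (outer) c of a 3-way tensor,
    fitted by alternating least-squares updates of each factor."""
    a = np.ones(T.shape[0])
    b = np.ones(T.shape[1])
    c = np.ones(T.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c
```

Each factor vector is interpretable: loadings over topics, over years, and over citation counts, which is what makes the trend analysis above readable.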

  16. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles.

    Science.gov (United States)

    Barker, Jocelyn; Hoogi, Assaf; Depeursinge, Adrien; Rubin, Daniel L

    2016-05-01

    Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p < 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically differentiate between the two cancer subtypes.
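    The coarse-to-fine pipeline (tile the slide, cluster tile features, classify one representative per cluster, then vote) can be sketched as below. The mean-intensity tile feature and the weighted vote are deliberately simplified stand-ins for the paper's shape/color/texture features and Elastic Net decision values.

```python
import numpy as np

def tile_means(img, tile):
    """Mean intensity of non-overlapping tiles of a grayscale image,
    a stand-in for richer per-tile features."""
    h = img.shape[0] // tile * tile
    w = img.shape[1] // tile * tile
    feats = [img[i:i + tile, j:j + tile].mean()
             for i in range(0, h, tile) for j in range(0, w, tile)]
    return np.array(feats, float)[:, None]

def kmeans(X, k, n_iter=20):
    """Tiny deterministic k-means (first-k initialization) grouping
    tiles into representative clusters."""
    centers = X[:k].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def slide_vote(decision_values, weights):
    """Weighted vote over per-representative decision values; the
    classifier producing those values is omitted here."""
    return float(np.dot(weights, decision_values) / np.sum(weights))
```

Only one representative tile per cluster is classified, which is what keeps whole-slide analysis tractable.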

  17. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  18. Automated measurement of parameters related to the deformities of lower limbs based on x-rays images.

    Science.gov (United States)

    Wojciechowski, Wadim; Molka, Adrian; Tabor, Zbisław

    2016-03-01

    Measurement of the deformation of the lower limbs in current standard full-limb X-ray images presents significant challenges to radiologists and orthopedists. The precision of these measurements is degraded by inexact positioning of the leg during image acquisition, difficulty in selecting reliable anatomical landmarks in projective X-ray images, and the inevitable errors of manual measurement. The influence of the random errors arising from the last two factors can be reduced if an automated measurement method is used instead of a manual one. The paper describes a framework for the automated measurement of various metric and angular quantities used to describe lower-extremity deformation in full-limb frontal X-ray images. The results of automated measurements are compared with manual measurements. These results demonstrate that an automated method can be a valuable alternative to manual measurement.

  19. Automated construction of arterial and venous trees in retinal images.

    Science.gov (United States)

    Hu, Qiao; Abràmoff, Michael D; Garvin, Mona K

    2015-10-01

    While many approaches exist to segment retinal vessels in fundus photographs, only a limited number focus on the construction and disambiguation of arterial and venous trees. Previous approaches are local and/or greedy in nature, making them susceptible to errors or limiting their applicability to large vessels. We propose a more global framework to generate arteriovenous trees in retinal images, given a vessel segmentation. In particular, our approach consists of three stages. The first stage is to generate an overconnected vessel network, named the vessel potential connectivity map (VPCM), consisting of vessel segments and the potential connectivity between them. The second stage is to disambiguate the VPCM into multiple anatomical trees, using a graph-based metaheuristic algorithm. The third stage is to classify these trees into arterial or venous (A/V) trees. We evaluated our approach with a ground truth built based on a public database, showing a pixel-wise classification accuracy of 88.15% using a manual vessel segmentation as input, and 86.11% using an automatic vessel segmentation as input.

  20. Automated analysis of NF-κB nuclear translocation kinetics in high-throughput screening.

    Directory of Open Access Journals (Sweden)

    Zi Di

    Full Text Available Nuclear entry and exit of the NF-κB family of dimeric transcription factors plays an essential role in regulating cellular responses to inflammatory stress. The dynamics of this nuclear translocation can vary significantly within a cell population and may change dramatically, e.g. upon drug exposure. Furthermore, there is significant heterogeneity in individual cell responses to stress signaling. In order to systematically determine factors that define NF-κB translocation dynamics, high-throughput screens that enable the analysis of dynamic NF-κB responses in individual cells in real time are essential. Thus far, only NF-κB downstream signaling responses of whole cell populations at the transcriptional level have been measured in high-throughput mode. In this study, we developed a fully automated image analysis method to determine the time course of NF-κB translocation in individual cells, suitable for high-throughput screening in the context of compound screening and functional genomics. Two novel segmentation methods were used to define the individual nuclear and cytoplasmic regions: watershed masked clustering (WMC) and best-fit ellipse of the Voronoi cell (BEVC). The dynamic NF-κB oscillatory response at the single-cell and population level was coupled to automated extraction of 26 analogue translocation parameters, including the number of peaks, the time to reach each peak, and the amplitude of each peak. Our automated image analysis method was validated through a series of statistical tests demonstrating the computational efficiency and accuracy of the algorithm. Both pharmacological inhibition of NF-κB and short interfering RNAs targeting the inhibitor of NF-κB, IκBα, demonstrated the ability of our method to identify compounds and genetic players that interfere with the nuclear translocation of NF-κB.
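    Extracting peak-based translocation parameters from a nuclear/cytoplasmic intensity-ratio trace might look like this sketch, which covers three of the 26 parameters mentioned above:

```python
import numpy as np

def translocation_params(ratio, t):
    """Number of peaks, time of each peak, and amplitude of each peak
    in a nuclear/cytoplasmic ratio trace (strict local maxima only)."""
    r = np.asarray(ratio, float)
    t = np.asarray(t, float)
    interior = (r[1:-1] > r[:-2]) & (r[1:-1] > r[2:])
    idx = np.where(interior)[0] + 1
    return {
        "n_peaks": int(idx.size),
        "peak_times": t[idx],
        "peak_amplitudes": r[idx],
    }
```

Real traces are noisy, so a production version would smooth the ratio and impose minimum prominence before counting peaks.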

  1. A scanning electron microscope method for automated, quantitative analysis of mineral matter in coal

    Energy Technology Data Exchange (ETDEWEB)

    Creelman, R.A.; Ward, C.R. [R.A. Creelman and Associates, Epping, NSW (Australia)

    1996-07-01

    Quantitative mineralogical analysis has been carried out on a series of nine coal samples from Australia, South Africa and China using a newly-developed automated image analysis system coupled to a scanning electron microscope. The image analysis system (QEM*SEM) gathers X-ray spectra and backscattered electron data from a number of points on a conventional grain-mount polished section under the SEM, and interprets the data from each point in mineralogical terms. The cumulative data in each case were integrated to provide a volumetric modal analysis of the species present in the coal samples, expressed as percentages of the respective coals' mineral matter. The QEM*SEM results were compared to data obtained from the same samples using other methods of quantitative mineralogical analysis, namely X-ray diffraction of the low-temperature oxygen-plasma ash and normative calculation from the (high-temperature) ash analysis and carbonate CO2 data. Good agreement was obtained among all three methods for quartz in the coals, and also for most of the iron-bearing minerals. The correlation between results from the different methods was less strong, however, for individual clay minerals, or for minerals such as calcite, dolomite and phosphate species that made up only relatively small proportions of the mineral matter. The image analysis approach, using the electron microscope for mineralogical studies, has significant potential as a supplement to optical microscopy in quantitative coal characterisation. 36 refs., 3 figs., 4 tabs.

  2. Applying Hyperspectral Imaging to Heart Rate Estimation for Adaptive Automation

    Science.gov (United States)

    2013-03-01

    previous MAT-B analysis method used baud rates to quantify each task in similar terms; however, the research only developed this method for 3 of the...

  3. Automated segmentation refinement of small lung nodules in CT scans by local shape analysis.

    Science.gov (United States)

    Diciotti, Stefano; Lombardo, Simone; Falchini, Massimo; Picozzi, Giulia; Mascalchi, Mario

    2011-12-01

    One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation obtained by fixed image thresholding. Validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the Lung Image Database Consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and excellent reproducibility was also observed. By using an additional interactive mode, based on controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were correctly segmented overall. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm to improve the segmentation quality of juxta-vascular nodules.

  4. Detection of DNA Aneuploidy in Exfoliated Airway Epithelia Cells of Sputum Specimens by the Automated Image Cytometry and Its Clinical Value in the Identification of Lung Cancer

    Institute of Scientific and Technical Information of China (English)

    杨健; 周宜开

    2004-01-01

    To evaluate the value of detection of DNA aneuploidy in exfoliated airway epithelial cells of sputum specimens by automated image cytometry for the identification of lung cancer, 100 patients were divided into a patient group (50 patients with lung cancer) and a control group (30 patients with tuberculosis and 20 healthy people). Sputum was obtained for the quantitative analysis of the DNA content of exfoliated airway epithelial cells with the automated image cytometry, together with examinations by brush cytology and conventional sputum cytology. Our results showed that DNA aneuploidy (DI>2.5 or 5c) was found in 20 out of 50 sputum samples of lung cancer, 1 out of 30 sputum samples from tuberculosis patients, and none of 20 sputum samples from healthy people. The positive rates of conventional sputum cytology and brush cytology were 16% and 32%, which were lower than that of DNA aneuploidy detection by the automated image cytometry (P<0.01, P>0.05). Our study showed that automated image cytometry, which uses DNA aneuploidy as a marker for tumor, can detect the malignant cells in sputum samples of lung cancer and is a sensitive and specific method serving as a complement for the diagnosis of lung cancer.

  5. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  6. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available In production processes the use of image processing systems is widespread. Hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions to realise ambitious quality control for production processes. This article characterises the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element in the automation of a manually operated production process.

  7. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    OpenAIRE

    2014-01-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite would have covered a distance of about 14 km on the ground and the e...

  8. Automated drawing of network plots in network meta-analysis.

    Science.gov (United States)

    Rücker, Gerta; Schwarzer, Guido

    2016-03-01

    In systematic reviews based on network meta-analysis, the network structure should be visualized. Network plots have often been drawn by hand using generic graphical software. A typical way of drawing networks, also implemented in statistical software for network meta-analysis, is a circular representation, often with many crossing lines. We use methods from graph theory to generate network plots in an automated way. We give a number of requirements for graph drawing and present an algorithm that fits prespecified ideal distances between the nodes representing the treatments. The method was implemented in the function netgraph of the R package netmeta and applied to a number of networks from the literature. We show that graph representations with a small number of crossing lines are often preferable to circular representations.
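    The netgraph function itself is implemented in R; purely to illustrate the underlying idea of fitting prespecified ideal distances between treatment nodes, here is a hypothetical numpy sketch (names and learning rate are ours) that minimises the stress function by gradient descent:

    ```python
    import numpy as np


    def stress_layout(dist, n_iter=2000, lr=0.01, seed=0):
        """Fit 2-D node positions so that Euclidean distances approximate
        the prespecified ideal distances in the symmetric matrix `dist`,
        by gradient descent on the stress sum_{i,j} (||x_i - x_j|| - dist[i,j])**2."""
        rng = np.random.default_rng(seed)
        n = dist.shape[0]
        pos = rng.normal(size=(n, 2))
        for _ in range(n_iter):
            diff = pos[:, None, :] - pos[None, :, :]   # (n, n, 2) pairwise offsets
            d = np.linalg.norm(diff, axis=-1)
            np.fill_diagonal(d, 1.0)                   # avoid divide-by-zero
            coeff = (d - dist) / d
            np.fill_diagonal(coeff, 0.0)
            grad = 2.0 * (coeff[:, :, None] * diff).sum(axis=1)
            pos -= lr * grad
        return pos
    ```

    For a network in which every pair of treatments should sit at distance 1, the fitted layout converges to an (approximately) equilateral arrangement, which is the behaviour one would expect from a stress-based graph drawing.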

  9. Automated eigensystem realisation algorithm for operational modal analysis

    Science.gov (United States)

    Zhang, Guowen; Ma, Jinghua; Chen, Zhuo; Wang, Ruirong

    2014-07-01

    The eigensystem realisation algorithm (ERA) is one of the most popular methods in civil engineering applications for estimating modal parameters. Three issues have been addressed in the paper: spurious mode elimination, estimating the energy relationship between different modes, and automatic analysis of the stabilisation diagram. On spurious mode elimination, a new criterion, modal similarity index (MSI) is proposed to measure the reliability of the modes obtained by ERA. On estimating the energy relationship between different modes, the mode energy level (MEL) was introduced to measure the energy contribution of each mode, which can be used to indicate the dominant mode. On automatic analysis of the stabilisation diagram, an automation of the mode selection process based on a hierarchical clustering algorithm was developed. An experimental example of the parameter estimation for the Chaotianmen bridge model in Chongqing, China, is presented to demonstrate the efficacy of the proposed method.
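    The stabilisation-diagram step can be illustrated with a generic hierarchical clustering of pole estimates. This is a sketch of the approach, not the authors' code; the feature scaling and tolerance below are assumptions. Physical modes re-appear across model orders with nearly identical frequency and damping, so they form large clusters, while spurious modes stay isolated:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage


    def group_stable_modes(freqs, dampings, tol=0.02):
        """Cluster pole estimates from successive model orders in a
        normalised (relative frequency, damping ratio) feature space;
        clusters closer than `tol` are merged (average linkage)."""
        feats = np.column_stack([freqs / freqs.mean(), dampings])
        z = linkage(feats, method="average")
        return fcluster(z, t=tol, criterion="distance")
    ```

    Modes that land in the largest clusters would then be kept as the identified physical modes; singletons would be rejected as spurious.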

  10. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    Science.gov (United States)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins by pre-processing the input images: removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity-based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method relies primarily on the anatomical content of the images to generate the panoramas, as opposed to external markers employed to aid the alignment process. Currently, results show robust edge detection prior to registration, and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images, using 26 patient datasets.
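    As a toy sketch of the register-then-map step (restricted to a pure vertical translation with a sum-of-squared-differences metric, which is a strong simplification of the rigid-body intensity-based registration described above; all names are ours):

    ```python
    import numpy as np


    def find_overlap(top, bottom, max_overlap=50):
        """Return the overlap (in rows) that minimises the mean squared
        difference between the bottom strip of `top` and the top strip
        of `bottom` -- a 1-D translation-only intensity registration."""
        best_err, best_ov = np.inf, 1
        for ov in range(1, min(max_overlap, top.shape[0], bottom.shape[0]) + 1):
            a = top[-ov:].astype(float)
            b = bottom[:ov].astype(float)
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best_err, best_ov = err, ov
        return best_ov


    def stitch(top, bottom, overlap):
        """Map both sectors into one panorama, averaging the shared rows."""
        blend = (top[-overlap:].astype(float) + bottom[:overlap].astype(float)) / 2
        return np.vstack([top[:-overlap].astype(float), blend,
                          bottom[overlap:].astype(float)])
    ```

    On two sectors cut from the same image with a known overlap, the search recovers that overlap exactly; real sector pairs additionally require rotation and intensity normalisation, which the full rigid-body registration handles.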

  11. Adiposoft: automated software for the analysis of white adipose tissue cellularity in histological sections.

    Science.gov (United States)

    Galarraga, Miguel; Campión, Javier; Muñoz-Barrutia, Arrate; Boqué, Noemí; Moreno, Haritz; Martínez, José Alfredo; Milagro, Fermín; Ortiz-de-Solórzano, Carlos

    2012-12-01

    The accurate estimation of the number and size of cells provides relevant information on the kinetics of growth and the physiological status of a given tissue or organ. Here, we present Adiposoft, a fully automated open-source software for the analysis of white adipose tissue cellularity in histological sections. First, we describe the sequence of image analysis routines implemented by the program. Then, we evaluate our software by comparing it with other adipose tissue quantification methods, namely, with the manual analysis of cells in histological sections (used as gold standard) and with the automated analysis of cells in suspension, the most commonly used method. Our results show significant concordance between Adiposoft and the other two methods. We also demonstrate the ability of the proposed method to distinguish the cellular composition of three different rat fat depots. Moreover, we found high correlation and low disagreement between Adiposoft and the manual delineation of cells. We conclude that Adiposoft provides accurate results while considerably reducing the amount of time and effort required for the analysis.

  12. Sfm_georef: Automating image measurement of ground control points for SfM-based projects

    Science.gov (United States)

    James, Mike R.

    2016-04-01

    Deriving accurate DEM and orthomosaic image products from UAV surveys generally involves the use of multiple ground control points (GCPs). Here, we demonstrate the automated collection of GCP image measurements for SfM-MVS processed projects, using sfm_georef software (James & Robson, 2012; http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm). Sfm_georef was originally written to provide geo-referencing procedures for SfM-MVS projects. It has now been upgraded with a 3-D patch-based matching routine suitable for automating GCP image measurement in both aerial and ground-based (oblique) projects, with the aim of reducing the time required for accurate geo-referencing. Sfm_georef is compatible with a range of SfM-MVS software and imports the relevant files that describe the image network, including camera models and tie points. 3-D survey measurements of ground control are then provided, either for natural features or artificial targets distributed over the project area. Automated GCP image measurement is manually initiated through identifying a GCP position in an image by mouse click; the GCP is then represented by a square planar patch in 3-D, textured from the image and oriented parallel to the local topographic surface (as defined by the 3-D positions of nearby tie points). Other images are then automatically examined by projecting the patch into the images (to account for differences in viewing geometry) and carrying out a sub-pixel normalised cross-correlation search in the local area. With two or more observations of a GCP, its 3-D co-ordinates are then derived by ray intersection. With the 3-D positions of three or more GCPs identified, an initial geo-referencing transform can be derived to relate the SfM-MVS co-ordinate system to that of the GCPs. Then, if GCPs are symmetric and identical, image texture from one representative GCP can be used to search automatically for all others throughout the image set. Finally, the GCP observations can be

  13. AutoGate: automating analysis of flow cytometry data.

    Science.gov (United States)

    Meehan, Stephen; Walther, Guenther; Moore, Wayne; Orlova, Darya; Meehan, Connor; Parks, David; Ghosn, Eliver; Philips, Megan; Mitsunaga, Erin; Waters, Jeffrey; Kantor, Aaron; Okamura, Ross; Owumi, Solomon; Yang, Yang; Herzenberg, Leonard A; Herzenberg, Leonore A

    2014-05-01

    Nowadays, one can hardly imagine biology and medicine without flow cytometry to measure CD4 T cell counts in HIV, follow bone marrow transplant patients, characterize leukemias, etc. Similarly, without flow cytometry, there would be a bleak future for stem cell deployment, HIV drug development and full characterization of the cells and cell interactions in the immune system. But while flow instruments have improved markedly, the development of automated tools for processing and analyzing flow data has lagged sorely behind. To address this deficit, we have developed automated flow analysis software technology, provisionally named AutoComp and AutoGate. AutoComp acquires sample and reagent labels from users or flow data files, and uses this information to complete the flow data compensation task. AutoGate replaces the manual subsetting capabilities provided by current analysis packages with newly defined statistical algorithms that automatically and accurately detect, display and delineate subsets in well-labeled and well-recognized formats (histograms, contour and dot plots). Users guide analyses by successively specifying axes (flow parameters) for data subset displays and selecting statistically defined subsets to be used for the next analysis round. Ultimately, this process generates analysis "trees" that can be applied to automatically guide analyses for similar samples. The first AutoComp/AutoGate version is currently in the hands of a small group of users at Stanford, Emory and NIH. When this "early adopter" phase is complete, the authors expect to distribute the software free of charge to .edu, .org and .gov users.

  14. Automated High-Dimensional Flow Cytometric Data Analysis

    Science.gov (United States)

    Pyne, Saumyadipta; Hu, Xinli; Wang, Kui; Rossin, Elizabeth; Lin, Tsung-I.; Maier, Lisa; Baecher-Allan, Clare; McLachlan, Geoffrey; Tamayo, Pablo; Hafler, David; de Jager, Philip; Mesirov, Jill

    Flow cytometry is widely used for single cell interrogation of surface and intracellular protein expression by measuring fluorescence intensity of fluorophore-conjugated reagents. We focus on the recently developed procedure of Pyne et al. (2009, Proceedings of the National Academy of Sciences USA 106, 8519-8524) for automated high-dimensional flow cytometric analysis called FLAME (FLow analysis with Automated Multivariate Estimation). It introduced novel finite mixture models of heavy-tailed and asymmetric distributions to identify and model cell populations in a flow cytometric sample. This approach robustly addresses the complexities of flow data without the need for transformation or projection to lower dimensions. It also addresses the critical task of matching cell populations across samples that enables downstream analysis. It thus facilitates application of flow cytometry to new biological and clinical problems. To facilitate pipelining with standard bioinformatic applications such as high-dimensional visualization, subject classification or outcome prediction, FLAME has been incorporated with the GenePattern package of the Broad Institute. Thereby analysis of flow data can be approached similarly as other genomic platforms. We also consider some new work that proposes a rigorous and robust solution to the registration problem by a multi-level approach that allows us to model and register cell populations simultaneously across a cohort of high-dimensional flow samples. This new approach is called JCM (Joint Clustering and Matching). It enables direct and rigorous comparisons across different time points or phenotypes in a complex biological study as well as for classification of new patient samples in a more clinical setting.

  15. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.

    Science.gov (United States)

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  16. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets

    Directory of Open Access Journals (Sweden)

    Charita Bhikha

    2015-01-01

    Full Text Available An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  17. Semi-automated Acanthamoeba polyphaga detection and computation of Salmonella typhimurium concentration in spatio-temporal images.

    Science.gov (United States)

    Tsibidis, George D; Burroughs, Nigel J; Gaze, William; Wellington, Elizabeth M H

    2011-12-01

    Interaction between bacteria and protozoa is an area of increasing interest; however, few systems allow extensive observation of the interactions. A semi-automated approach is proposed to analyse a large amount of experimental data and avoid time-demanding manual object classification. We examined a surface system consisting of non-nutrient agar with a uniform bacterial lawn that extended over the agar surface, and a spatially localised central population of amoebae. Location and identification of protozoa and quantification of the bacterial population are performed by employing image analysis techniques on a series of spatial images. The quantitative tools are based on intensity thresholding, or on probabilistic models. To accelerate organism identification, correct classification errors and attain quantitative details of all objects, a custom-written Graphical User Interface has also been developed.
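    A minimal sketch of the intensity-thresholding route (using scipy for the connected-component labelling; the size filter and names are illustrative, not the authors' implementation):

    ```python
    import numpy as np
    from scipy import ndimage


    def count_objects(image, threshold, min_size=5):
        """Intensity thresholding followed by connected-component
        labelling: a basic version of the object-identification step
        used to locate organisms in each frame of a spatio-temporal
        image series. Returns the label image and the labels of
        components at least `min_size` pixels large."""
        mask = image > threshold
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
        return labels, keep
    ```

    Applying this per frame, and summing the above-threshold area, gives a crude proxy for bacterial population over time; the size filter separates amoeba-scale objects from noise.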

  18. Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound

    Science.gov (United States)

    Galperin, Michael

    2003-05-01

    A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0-5 based on a qualitative lexicon describing the ultrasound appearance of breast lesions. The purpose of this research is to assess and select one of the automated LOS scoring quantitative methods developed during preliminary studies on benign biopsy reduction. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies on masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers (which will require biopsy anyway). On complex cyst and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods best serves CAIS accuracy. This paper presents current results of applying statistical analysis to automated LOS scoring quantification for breast masses with known biopsy results. It was found that the First Order Ranking method yielded the most accurate results. The CAIS system (Image Companion, Data Companion software) is developed by Almen Laboratories and was used to achieve these results.

  19. Development of a fully automated online mixing system for SAXS protein structure analysis

    DEFF Research Database (Denmark)

    Nielsen, Søren Skou; Arleth, Lise

    2010-01-01

    This thesis presents the development of an automated high-throughput mixing and exposure system for Small-Angle Scattering analysis on a synchrotron using polymer microfluidics. Software and hardware for automated mixing, exposure control on a beamline, and automated data reduction and preliminary analysis are presented. Three mixing systems that have been the cornerstones of the development process are presented, including a fully functioning high-throughput microfluidic system that is able to produce and expose 36 mixed samples per hour using 30 μL of sample volume. The system is tested

  20. Pattern recognition software and techniques for biological image analysis.

    Directory of Open Access Journals (Sweden)

    Lior Shamir

    Full Text Available The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  1. Pattern recognition software and techniques for biological image analysis.

    Science.gov (United States)

    Shamir, Lior; Delaney, John D; Orlov, Nikita; Eckley, D Mark; Goldberg, Ilya G

    2010-11-24

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  2. Automated quantitative gait analysis in animal models of movement disorders

    Directory of Open Access Journals (Sweden)

    Vandeputte Caroline

    2010-08-01

    Full Text Available Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.

  3. An automated method for comparing motion artifacts in cine four-dimensional computed tomography images.

    Science.gov (United States)

    Cui, Guoqiang; Jew, Brian; Hong, Julian C; Johnston, Eric W; Loo, Billy W; Maxim, Peter G

    2012-11-08

    The aim of this study is to develop an automated method to objectively compare motion artifacts in two four-dimensional computed tomography (4D CT) image sets, and identify the one that would appear to human observers with fewer or smaller artifacts. Our proposed method is based on the difference of the normalized correlation coefficients between edge slices at couch transitions, which we hypothesize may be a suitable metric to identify motion artifacts. We evaluated our method using ten pairs of 4D CT image sets that showed subtle differences in artifacts between images in a pair, which were identifiable by human observers. One set of 4D CT images was sorted using breathing traces in which our clinically implemented 4D CT sorting software miscalculated the respiratory phase, which expectedly led to artifacts in the images. The other set of images consisted of the same images; however, these were sorted using the same breathing traces but with corrected phases. Next we calculated the normalized correlation coefficients between edge slices at all couch transitions for all respiratory phases in both image sets to evaluate for motion artifacts. For nine image set pairs, our method identified the 4D CT sets sorted using the breathing traces with the corrected respiratory phase to result in images with fewer or smaller artifacts, whereas for one image pair, no difference was noted. Two observers independently assessed the accuracy of our method. Both observers identified 9 image sets that were sorted using the breathing traces with corrected respiratory phase as having fewer or smaller artifacts. In summary, using the 4D CT data of ten pairs of 4D CT image sets, we have demonstrated proof of principle that our method is able to replicate the results of two human observers in identifying the image set with fewer or smaller artifacts.
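    The core metric, the normalised correlation coefficient between edge slices at a couch transition, can be sketched as follows (an illustrative stand-in, not the study's code; comparing per-transition coefficients between two sortings of the same 4D CT data then indicates which sorting is smoother):

    ```python
    import numpy as np


    def slice_correlation(slice_a, slice_b):
        """Normalised correlation coefficient between two edge slices.
        Values near 1 indicate a smooth couch transition; a drop in the
        coefficient suggests a motion artifact at that transition."""
        a = slice_a.astype(float).ravel()
        b = slice_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0
    ```

    Summing the per-transition differences of this coefficient between the two candidate image sets gives a single score whose sign identifies the set with fewer or smaller artifacts.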

  4. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    Science.gov (United States)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
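    For reference, the Dice similarity coefficient used in the quantitative evaluation is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|), which for two binary masks is simply:

    ```python
    import numpy as np


    def dice_coefficient(seg_a, seg_b):
        """Dice similarity coefficient between two binary masks:
        1.0 for a perfect match, 0.0 for disjoint masks."""
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```

    An average DSC of 0.93, as reported above, therefore means the automated fat masks overlap the manual markings almost completely.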

  5. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    Science.gov (United States)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicon tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
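    The paper applies universal thresholding via a MODWT; as a much-simplified illustration (a single-level ordinary Haar DWT on a radix-2 signal, with the usual median-based noise estimate; all names are ours), the same shrinkage rule looks like:

    ```python
    import numpy as np


    def haar_dwt(x):
        """One level of the Haar discrete wavelet transform (len(x) even)."""
        e, o = x[0::2], x[1::2]
        return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)


    def haar_idwt(approx, detail):
        """Inverse of haar_dwt."""
        x = np.empty(2 * approx.size)
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        return x


    def denoise(signal):
        """Universal soft thresholding: detail coefficients are shrunk
        by sigma * sqrt(2 * log N), with the noise level sigma estimated
        from their median absolute deviation."""
        approx, detail = haar_dwt(signal)
        sigma = np.median(np.abs(detail)) / 0.6745
        t = sigma * np.sqrt(2 * np.log(signal.size))
        detail = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)
        return haar_idwt(approx, detail)
    ```

    The level-independent variant applies the same threshold to the detail coefficients at every decomposition level, and the MODWT removes the radix-2 length restriction assumed here.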

  6. Image based performance analysis of thermal imagers

    Science.gov (United States)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras. This is done to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially those from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability represents such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path are discussed.

  7. Estimation of urinary stone composition by automated processing of CT images

    CERN Document Server

    Chevreau, Grégoire; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre; 10.1007/s00240-009-0195-3

    2009-01-01

    The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. Standard, low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of voxel density in homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid, 65%; struvite, 19%; cystine, 78%; carbapatite, 33.5%; calcium oxalate dihydrate, 57%; calcium oxalate monohydrate, 66.5%; brushite, 75%. Low-dose acquisition did not lower performance (P < 0.05). This entirely automated approach eliminat...
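
    The dissimilarity index can be pictured as a local density-variation measure: voxels whose neighbourhood density varies little belong to homogeneous zones and are retained for composition analysis. The sketch below uses local standard deviation as a stand-in for the paper's index; the window size and threshold are illustrative assumptions, not the published values.

```python
import numpy as np
from scipy import ndimage

def homogeneous_mask(hu, size=3, max_dissimilarity=50.0):
    """Flag voxels whose neighbourhood density (in Hounsfield units)
    varies little, approximating the 'homogeneous zones' used for
    composition estimation; edge/partial-volume voxels are excluded."""
    hu = np.asarray(hu, dtype=float)
    mean = ndimage.uniform_filter(hu, size)
    sq_mean = ndimage.uniform_filter(hu * hu, size)
    local_std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return local_std <= max_dissimilarity
```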

  8. Multimodal microscopy for automated histologic analysis of prostate cancer

    Directory of Open Access Journals (Sweden)

    Sinha Saurabh

    2011-02-01

    Full Text Available Abstract Background Prostate cancer is the single most prevalent cancer in US men, and its gold standard of diagnosis is histologic assessment of biopsies. Manual assessment of stained tissue of all biopsies limits speed and accuracy in clinical practice and research of prostate cancer diagnosis. We sought to develop a fully automated multimodal microscopy method to distinguish cancerous from non-cancerous tissue samples. Methods We recorded chemical data from an unstained tissue microarray (TMA) using Fourier transform infrared (FT-IR) spectroscopic imaging. Using pattern recognition, we identified epithelial cells without user input. We fused the cell type information with the corresponding stained images commonly used in clinical practice. Extracted morphological features, optimized by a two-stage feature selection method using a minimum-redundancy-maximal-relevance (mRMR) criterion and sequential floating forward selection (SFFS), were applied to classify tissue samples as cancer or non-cancer. Results We achieved high accuracy (area under the ROC curve (AUC) >0.97) in cross-validations on each of two data sets that were stained under different conditions. When the classifier was trained on one data set and tested on the other data set, an AUC value of ~0.95 was observed. In the absence of IR data, the performance of the same classification system dropped for both data sets and between data sets. Conclusions We were able to achieve very effective fusion of the information from two different images that provide very different types of data with different characteristics. The method is entirely transparent to a user and does not involve any adjustment or decision-making based on spectral data. By combining the IR and optical data, we achieved highly accurate classification.

  9. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

    Cardiovascular diseases are becoming a leading cause of death all over the world. Cardiac function can be evaluated by global and regional parameters of the left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of the LV in short-axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at the end-diastolic phase, and propagation of the LV segmentation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of the LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, the end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of the LV in each slice at the ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, taking advantage of the continuity of the LV boundaries across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of the same dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of the LV based on subjective evaluation.
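
    The first step, locating the LV from the maximum intensity projection (MIP) along the time phases, can be sketched as follows. The percentile cut used to pick the bright blood pool and the centroid heuristic are illustrative assumptions, not the paper's actual localization rule.

```python
import numpy as np

def localize_lv(cine, percentile=99):
    """Locate a candidate LV region from a cine stack shaped (t, y, x):
    take the maximum intensity projection over the time axis, then the
    centroid of the brightest pixels as a rough LV centre."""
    mip = cine.max(axis=0)                      # project along time phases
    bright = mip >= np.percentile(mip, percentile)
    ys, xs = np.nonzero(bright)
    return mip, (ys.mean(), xs.mean())          # rough LV centre (row, col)
```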

  10. 14 CFR 1261.413 - Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults. 1261.413 Section 1261.413 Aeronautics and Space NATIONAL...) § 1261.413 Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults....

  11. galaxieEST: addressing EST identity through automated phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Larsson Karl-Henrik

    2004-07-01

    Full Text Available Abstract Background Research involving expressed sequence tags (ESTs) is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactorily annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology. In these cases, a phylogenetic study of the query sequence together with the most similar sequences in the database may be of great value to the identification process. In order to facilitate this laborious procedure, a project to employ automated phylogenetic analysis in the identification of ESTs was initiated. Results galaxieEST is an open source Perl-CGI script package designed to complement traditional similarity-based identification of EST sequences through employment of automated phylogenetic analysis. It uses a series of BLAST runs as a sieve to retrieve nucleotide and protein sequences for inclusion in neighbour joining and parsimony analyses; the output includes the BLAST output, the results of the phylogenetic analyses, and the corresponding multiple alignments. galaxieEST is available as an on-line web service for identification of fungal ESTs and for download / local installation for use with any organism group at http://galaxie.cgb.ki.se/galaxieEST.html. Conclusions By addressing sequence relatedness in addition to similarity, galaxieEST provides an integrative view on EST origin and identity, which may prove particularly useful in cases where similarity searches

  12. Automated analysis of damages for radiation in plastics surfaces; Analisis automatizado de danos por radiacion en superficies plasticas

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, C.; Camacho M, E.; Tavera, L.; Balcazar, M. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)

    1990-02-15

    This work analyzes radiation-induced damage in a polymer characterized, like acrylic, by the optical properties of its polished surfaces, its uniformity and its chemical resistance; it withstands temperatures of up to 150 degrees centigrade and weighs approximately half as much as glass. The objective of this work is the development of a method that analyzes, in automated form, the surface damage induced by radiation in plastic materials by means of an image analyzer. (Author)

  13. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  14. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Ani Eloyan

    2012-08-01

    Full Text Available Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions, CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  15. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    Science.gov (United States)

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
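
    A CUR decomposition, as used in both records above, approximates a matrix by actual columns (C) and rows (R) of the data plus a small core matrix (U), which keeps the factors interpretable (e.g. as specific rs-fc features or subjects). A minimal randomized sketch based on leverage-score sampling follows; the parameter names (k, c, r) and sampling scheme are illustrative, not the paper's exact procedure.

```python
import numpy as np

def cur_decomposition(A, k, c, r, seed=0):
    """Randomized CUR sketch: sample c columns and r rows of A with
    probabilities proportional to rank-k leverage scores, then solve
    for the core matrix U so that A is approximately C @ U @ R."""
    rng = np.random.default_rng(seed)

    def leverage(M):
        # Row leverage scores from the top-k left singular vectors.
        u, _, _ = np.linalg.svd(M, full_matrices=False)
        scores = np.sum(u[:, :k] ** 2, axis=1)
        return scores / scores.sum()

    cols = rng.choice(A.shape[1], size=c, replace=False, p=leverage(A.T))
    rows = rng.choice(A.shape[0], size=r, replace=False, p=leverage(A))
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # core matrix
    return C, U, R
```

    For an exactly rank-k matrix, any column/row sample that spans the column and row spaces recovers A exactly; on noisy data the factorization is approximate.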

  16. Automated detection of retinal cell nuclei in 3D micro-CT images of zebrafish using support vector machine classification

    Science.gov (United States)

    Ding, Yifu; Tavolara, Thomas; Cheng, Keith

    2016-03-01

    Our group is developing a method to examine biological specimens in cellular detail using synchrotron micro-CT. The method can acquire 3D images of tissue at micrometer-scale resolutions, allowing individual cell types to be visualized in the context of the entire specimen. For model organism research, this tool will enable the rapid characterization of tissue architecture and cellular morphology for every organ system. This characterization is critical for proposed and ongoing "phenome" projects that aim to phenotype whole-organism mutants and diseased tissues from different organisms, including humans. With the envisioned collection of hundreds to thousands of images for a phenome project, it is important to develop quantitative image analysis tools for the automated scoring of organism phenotypes across organ systems. Here we present a first step towards that goal, demonstrating the use of support vector machines (SVMs) in detecting retinal cell nuclei in 3D images of wild-type zebrafish. In addition, we apply the SVM classifier to a mutant zebrafish to examine whether SVMs can be used to capture phenotypic differences in these images. The long-term goal of this work is to allow cellular and tissue morphology to be characterized quantitatively for many organ systems, at the level of the whole organism.
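
    As a toy illustration of the SVM classification step, the sketch below trains a linear SVM by stochastic sub-gradient descent on the hinge loss (Pegasos-style) on flattened intensity patches. The paper's actual features, kernel and training procedure are not specified here, so every detail of this sketch is an assumption; a production system would use an established library and engineered 3D texture features rather than raw intensities.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM by stochastic sub-gradient descent on the
    hinge loss (Pegasos-style). Labels y must be +1 / -1."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)             # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)              # shrink from the regularizer
            if margin < 1:                    # hinge-loss sub-gradient
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def predict(w, b, X):
    """Classify samples by the sign of the decision function."""
    return np.sign(np.asarray(X, dtype=float) @ w + b)
```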

  17. Application of Reflectance Transformation Imaging Technique to Improve Automated Edge Detection in a Fossilized Oyster Reef

    Science.gov (United States)

    Djuricic, Ana; Puttonen, Eetu; Harzhauser, Mathias; Dorninger, Peter; Székely, Balázs; Mandic, Oleg; Nothegger, Clemens; Molnár, Gábor; Pfeifer, Norbert

    2016-04-01

    The world's largest fossilized oyster reef is located in Stetten, Lower Austria; it was excavated during field campaigns of the Natural History Museum Vienna between 2005 and 2008. It is studied in paleontology to learn about climate change from past events. To support this study, a laser scanning and photogrammetric campaign was organized in 2014 for 3D documentation of the large and complex site. The 3D point clouds and high-resolution images from this field campaign are visualized by photogrammetric methods in the form of digital surface models (DSM, 1 mm resolution) and an orthophoto (0.5 mm resolution) to aid paleontological interpretation of the data. Due to the size of the reef, automated analysis techniques are needed to interpret all digital data obtained from the field. One of the key components in successful automation is the detection of oyster shell edges. We have tested Reflectance Transformation Imaging (RTI) to visualize the reef data sets for end users through a cultural heritage viewing interface (RTIViewer). The implementation includes a Lambert shading method to visualize DSMs derived from terrestrial laser scanning using the scientific software OPALS. In contrast to conventional RTI, no hardware system of LED lights, or rig to rotate the light source around the object, is needed. The gray value of a given shaded pixel is related to the angle between the light source and the surface normal at that position: brighter values correspond to slope surfaces facing the light source, and increasing the zenith angle results in internal shading all over the reef surface. In total, the oyster reef surface comprises 81 DSMs of 3 m x 2 m each. Each surface was illuminated by moving the virtual sun every 30 degrees in azimuth (12 azimuth angles from 20-350) and every 20 degrees in zenith (4 zenith angles from 20-80). This technique provides paleontologists an interactive approach to virtually inspect the oyster reef, and to interpret the shell surface by changing the light source direction
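
    The Lambert shading described above reduces to a dot product between the surface normal of the DSM and the virtual sun direction. A minimal sketch follows; the angle conventions mirror the abstract (azimuth 20-350 degrees, zenith 20-80 degrees), while the grid spacing and array layout are assumptions.

```python
import numpy as np

def lambert_shade(dsm, azimuth_deg, zenith_deg, cellsize=1.0):
    """Shade a DSM with a Lambertian model: the gray value is the cosine
    of the angle between the surface normal and the light direction, so
    slopes facing the virtual sun appear brighter."""
    az = np.radians(azimuth_deg)
    zen = np.radians(zenith_deg)
    # Unit vector pointing towards the light source.
    light = np.array([np.sin(zen) * np.sin(az),
                      np.sin(zen) * np.cos(az),
                      np.cos(zen)])
    # Surface normals from finite-difference gradients of the DSM.
    dz_dy, dz_dx = np.gradient(dsm, cellsize)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dsm)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ light, 0.0, 1.0)  # 0 = full shadow, 1 = facing sun
```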

  18. Automated integer programming based separation of arteries and veins from thoracic CT images.

    Science.gov (United States)

    Payer, Christian; Pienn, Michael; Bálint, Zoltán; Shekhovtsov, Alexander; Talakic, Emina; Nagy, Eszter; Olschewski, Andrea; Olschewski, Horst; Urschler, Martin

    2016-12-01

    Automated computer-aided analysis of lung vessels has been shown to yield promising results for non-invasive diagnosis of lung diseases. To detect vascular changes which affect pulmonary arteries and veins differently, both compartments need to be identified. We present a novel, fully automatic method that separates arteries and veins in thoracic computed tomography images, by combining local as well as global properties of pulmonary vessels. We split the problem into two parts: the extraction of multiple distinct vessel subtrees, and their subsequent labeling into arteries and veins. Subtree extraction is performed with an integer program (IP), based on local vessel geometry. As naively solving this IP is time-consuming, we show how to drastically reduce computational effort by reformulating it as a Markov Random Field. Afterwards, each subtree is labeled as either arterial or venous by a second IP, using two anatomical properties of pulmonary vessels: the uniform distribution of arteries and veins, and the parallel configuration and close proximity of arteries and bronchi. We evaluate algorithm performance by comparing the results with 25 voxel-based manual reference segmentations. On this dataset, we show good performance of the subtree extraction, consisting of very few non-vascular structures (median value: 0.9%) and merged subtrees (median value: 0.6%). The resulting separation of arteries and veins achieves a median voxel-based overlap of 96.3% with the manual reference segmentations, outperforming a state-of-the-art interactive method. In conclusion, our novel approach provides an opportunity to become an integral part of computer-aided pulmonary diagnosis, where artery/vein separation is important.

  19. Automated choroid segmentation based on gradual intensity distance in HD-OCT images.

    Science.gov (United States)

    Chen, Qiang; Fan, Wen; Niu, Sijie; Shi, Jiajia; Shen, Honglie; Yuan, Songtao

    2015-04-06

    The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch's membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment BM by considering the structural characteristics of the retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. Then an improved 2-D graph search method with curve smoothness constraints is used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes in 66 patients demonstrate that the proposed method can achieve high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.
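
    The 2-D graph search for a layer boundary can be reduced to a column-by-column dynamic program over a cost image, with transitions limited to neighbouring rows as a crude smoothness constraint. This sketch omits the gradual intensity distance image and the curve-smoothness terms of the actual method; it only illustrates the minimum-cost-path idea.

```python
import numpy as np

def trace_boundary(cost):
    """Trace one boundary through a 2-D cost image by dynamic programming:
    one boundary row per column, transitions limited to +/-1 row, and the
    accumulated cost minimised over all such paths."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost table
    back = np.zeros((rows, cols), dtype=int) # backpointers for path recovery
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 2, rows)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            back[r, c] = prev
            acc[r, c] += acc[prev, c - 1]
    # Backtrack from the cheapest node in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]  # row index of the boundary in each column
```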

  20. Technique for Automated Recognition of Sunspots on Full-Disk Solar Images

    Directory of Open Access Journals (Sweden)

    Zharkov S

    2005-01-01

    Full Text Available A new robust technique is presented for automated identification of sunspots on full-disk white-light (WL) solar images obtained from the SOHO/MDI instrument and Ca II K1 line images from the Meudon Observatory. Edge-detection methods are applied to find sunspot candidates, followed by local thresholding using statistical properties of the region around the sunspots. Possible initial oversegmentation of images is remedied with a median filter. The features are smoothed by using morphological closing operations and filled by applying a watershed transform, followed by a dilation operator to define regions of interest containing sunspots. A number of physical and geometrical parameters of detected sunspot features are extracted and stored in a relational database, along with umbra-penumbra information in the form of pixel run-length data within a bounding rectangle. The detection results reveal very good agreement with the manual synoptic maps and a very high correlation with those produced manually by the NOAA Observatory, USA.
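
    The morphological chain described in the abstract (edge detection, thresholding, speckle removal, closing, filling, dilation, labelling) can be sketched with SciPy. The relative threshold is illustrative, and the paper's statistical local thresholding and watershed step are simplified away here, so this is only a rough outline of the pipeline.

```python
import numpy as np
from scipy import ndimage

def detect_sunspots(image, edge_thresh=0.2):
    """Sketch of the detection chain: Sobel edge magnitude, relative
    thresholding, median filtering to remove speckle, morphological
    closing, hole filling and dilation, then connected-component
    labelling of the candidate sunspot regions."""
    img = image.astype(float)
    edges = ndimage.sobel(img, axis=0) ** 2 + ndimage.sobel(img, axis=1) ** 2
    mask = edges > edge_thresh * edges.max()
    mask = ndimage.median_filter(mask.astype(np.uint8), size=3).astype(bool)
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    mask = ndimage.binary_dilation(mask)
    labels, n = ndimage.label(mask)
    return labels, n  # labelled candidate regions and their count
```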

  1. Automated system for acquisition and image processing for the control and monitoring boned nopal

    Science.gov (United States)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of a system for image acquisition and processing to control the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas on the bark of the nopal where thorns grow, are located by applying segmentation algorithms to the images obtained by a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to interact with each areola and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs the tasks of acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.

  2. Reflections on ultrasound image analysis.

    Science.gov (United States)

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably over the past twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, due to the real-time acquisition capability of ultrasound, and this has remained true over these two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps changed more fundamentally. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis - both of which have richer anatomical definition and were thus better suited to the earlier eras of medical image analysis, dominated by model-based methods - ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis, which may have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.

  3. Automated characterization of blood vessels as arteries and veins in retinal images.

    Science.gov (United States)

    Mirsharif, Qazaleh; Tajeripour, Farshad; Pourreza, Hamidreza

    2013-01-01

    In recent years researchers have found that alterations in the arterial or venular tree of the retinal vasculature are associated with several public health problems, such as diabetic retinopathy, which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins is to accurately separate those vessels from each other. This is a difficult task due to the high similarity between arteries and veins, in addition to the variation of color and non-uniform illumination within and across retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. In particular, vessels are divided into smaller segments, and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post-processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In this last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular subtrees. Ultimately, vessel labels are revised by propagating the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images, including the DRIVE database, demonstrates the good performance and robustness of the method. The proposed method may be used for determination of the arteriolar-to-venular diameter ratio in retinal images. It also potentially allows for further investigation of the labels of thinner arteries and veins, which might be found by tracing them back to the major vessels.

  4. The use of the Kalman filter in the automated segmentation of EIT lung images.

    Science.gov (United States)

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time images of the impedance inside a body at low spatial but high temporal resolution. Recovering the impedance itself constitutes a nonlinear, ill-posed inverse problem; therefore the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Then we augment the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to track the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated by using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
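
    The tracking core can be illustrated with a constant-velocity Kalman filter on a single boundary coordinate, filtered over the respiratory cycle. The noise parameters q and r are illustrative, and the paper's adaptive foreground detection and global lung-shape constraint are not modelled in this sketch.

```python
import numpy as np

class Kalman1D:
    """Constant-velocity Kalman filter for one boundary coordinate."""

    def __init__(self, x0, q=1e-3, r=1e-1):
        self.x = np.array([x0, 0.0])                 # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.Q = q * np.eye(2)                       # process noise
        self.R = np.array([[r]])                     # measurement noise

    def step(self, z):
        # Predict the next state and covariance.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured boundary position z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                             # filtered position
```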

  5. Automating quality assurance of digital linear accelerators using a radioluminescent phosphor coated phantom and optical imaging

    Science.gov (United States)

    Jenkins, Cesare H.; Naczynski, Dominik J.; Yu, Shu-Jung S.; Yang, Yong; Xing, Lei

    2016-09-01

    Performing mechanical and geometric quality assurance (QA) tests for medical linear accelerators (LINACs) is a predominantly manual process that consumes significant time and resources. In order to alleviate this burden, this study proposes a novel strategy to automate the process of performing these tests. The autonomous QA system consists of three parts: (1) a customized phantom coated with radioluminescent material; (2) an optical imaging system capable of visualizing the incidence of the radiation beam, light field or lasers on the phantom; and (3) software to process the captured signals. The radioluminescent phantom, which enables visualization of the radiation beam on the same surface as the light field and lasers, is placed on the couch and imaged while a predefined treatment plan is delivered from the LINAC. The captured images are then processed to self-calibrate the system and to perform measurements for evaluating light-field/radiation coincidence, jaw position indicators, cross-hair centering, treatment couch position indicators and localizing laser alignment. System accuracy is probed by intentionally introducing errors and by comparing with current clinical methods. The accuracy of self-calibration is evaluated by examining measurement repeatability under fixed and variable phantom setups. The integrated system was able to automatically collect, analyze and report the results for the mechanical alignment tests specified by TG-142. The average difference between introduced and measured errors was 0.13 mm. The system was shown to be consistent with current techniques. Measurement variability increased slightly from 0.1 mm to 0.2 mm when the phantom setup was varied, but no significant difference in the mean measurement value was detected. Total measurement time was less than 10 minutes for all tests as a result of automation. The system’s unique features of a phosphor-coated phantom and fully automated, operator-independent self-calibration offer the

  6. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    Science.gov (United States)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is reducing the large number of false positives, many of which originate from acoustic shadowing caused by ribs. Determining the location of the chest wall in ABUS is therefore necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extend our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that incorporates a region-cost term computed from a thoracic-volume classifier to improve segmentation accuracy. We examined performance on a dataset of 52 images on which our previously developed method failed. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.
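
The cost-minimization idea can be sketched in miniature: fit a simple geometric model to detected surface points while adding a region-cost term from a classifier. Everything below is synthetic and two-dimensional (a circle standing in for the cylinder, a toy lambda standing in for the thoracic-volume classifier); the actual model, classifier and optimizer are those of the paper:

```python
# Sketch of a model fit that combines a surface-distance cost with a
# region cost, in the spirit of the method described. All data synthetic.
import math

def total_cost(radius, points, region_prob, weight=1.0):
    """Squared distance of surface points to a circle of given radius centred
    at the origin, plus a weighted region-cost term (e.g. derived from
    classifier probabilities for voxels inside the model)."""
    dist_cost = sum((math.hypot(x, y) - radius) ** 2 for x, y in points)
    return dist_cost + weight * region_prob(radius)

points = [(10.1, 0.0), (0.0, 9.9), (-10.0, 0.2), (7.0, 7.1)]  # noisy circle, r ~ 10
region_prob = lambda r: (r - 10.0) ** 2 * 0.1   # toy region term, minimal at r = 10

# Grid search over the single free parameter for simplicity.
best_r = min((r / 10 for r in range(80, 121)),
             key=lambda r: total_cost(r, points, region_prob))
print(best_r)  # 10.0
```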

  7. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    Science.gov (United States)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying automated breast ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early-stage breast cancers that are occult in the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from automated breast ultrasound systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.

  8. Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)

    Science.gov (United States)

    Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre

    2016-03-01

    Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection such as the presence or absence of basement membrane and epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 x 2.5mm2) imaging in the oral cavity. Due to the size and complexity of these OCT data sets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bed-side clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness, and detection of the basement membrane, over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins, or for biopsy site guidance.

  9. A Semi-Automated Functional Test Data Analysis Tool

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Peng; Haves, Philip; Kim, Moosung

    2005-05-01

    The growing interest in commissioning is creating a demand that will increasingly be met by mechanical contractors and less experienced commissioning agents. They will need tools to help them perform commissioning effectively and efficiently. The widespread availability of standardized procedures, accessible in the field, will allow commissioning to be specified with greater certainty as to what will be delivered, enhancing the acceptance and credibility of commissioning. In response, a functional test data analysis tool is being developed to analyze the data collected during functional tests for air-handling units. The functional test data analysis tool is designed to analyze test data, assess performance of the unit under test and identify the likely causes of the failure. The tool has a convenient user interface to facilitate manual entry of measurements made during a test. A graphical display shows the measured performance versus the expected performance, highlighting significant differences that indicate the unit is not able to pass the test. The tool is described as semiautomated because the measured data need to be entered manually, instead of being passed from the building control system automatically. However, the data analysis and visualization are fully automated. The tool is designed to be used by commissioning providers conducting functional tests as part of either new building commissioning or retro-commissioning, as well as building owners and operators interested in conducting routine tests periodically to check the performance of their HVAC systems.
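
The core of the analysis is a comparison of measured against expected performance within tolerances. A minimal sketch, with hypothetical quantity names and thresholds (not taken from the actual tool):

```python
# Sketch of the pass/fail comparison a functional test analysis tool
# performs: measured values are checked against expected values within a
# tolerance, and significant differences are flagged. Names and numbers
# below are illustrative only.

def evaluate_test(expected, measured, tolerance):
    """Return {quantity: passed?} plus an overall pass/fail verdict."""
    results = {k: abs(measured[k] - expected[k]) <= tolerance[k] for k in expected}
    return results, all(results.values())

expected  = {"supply_air_temp_C": 13.0, "fan_power_kW": 4.0}
measured  = {"supply_air_temp_C": 13.4, "fan_power_kW": 5.1}
tolerance = {"supply_air_temp_C": 0.5,  "fan_power_kW": 0.5}

per_quantity, overall = evaluate_test(expected, measured, tolerance)
print(per_quantity, overall)  # fan power exceeds its tolerance, so overall is False
```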

  10. Intelligent Control in Automation Based on Wireless Traffic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2007-08-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both in maintaining the integrity of computer systems and in increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for the future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.

  12. Development of a software for INAA analysis automation

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Genezini, Frederico A.; Figueiredo, Ana Maria G.; Ticianelli, Regina B., E-mail: gzahn@ipen [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this work, software to automate the post-counting tasks in comparative INAA has been developed that aims to be more flexible than the available options, integrating with some of the routines currently in use in the IPEN Activation Analysis Laboratory and allowing the user to choose between a fully automatic analysis and an Excel-oriented one. The software makes use of the Genie 2000 data importing and analysis routines and stores each 'energy-counts-uncertainty' table as a separate ASCII file that can be used later if required by the analyst. Moreover, it generates an Excel-compatible CSV (comma-separated values) file with only the relevant results from the analyses for each sample or comparator, as well as the results of the concentration calculations and the results obtained with four different statistical tools (unweighted average, weighted average, normalized residuals and Rajeval technique), allowing the analyst to double-check the results. Finally, a 'summary' CSV file is also produced, with the final concentration results obtained for each element in each sample. (author)
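
Two of the four statistical tools mentioned, the unweighted average and the inverse-variance weighted average, can be sketched directly from their standard formulas (the concentration values below are illustrative, not analytical results):

```python
# Sketch of the unweighted and uncertainty-weighted averages applied to a
# set of replicate concentration results. Standard formulas; toy data.
import math

def unweighted_average(values):
    return sum(values) / len(values)

def weighted_average(values, uncertainties):
    """Inverse-variance weighted mean and its combined uncertainty."""
    weights = [1.0 / u ** 2 for u in uncertainties]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

conc = [10.2, 9.8, 10.5]   # e.g. mg/kg from replicate measurements (illustrative)
unc  = [0.2, 0.4, 0.3]     # one-sigma uncertainties

print(unweighted_average(conc))
print(weighted_average(conc, unc))
```

The weighted mean pulls toward the most precise measurement, which is why the tool reports both so the analyst can cross-check.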

  13. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. II: metallogenetic discriminating criteria; Reconocimiento automatizado de menas metalicas mediante analisis digital de imagen: un apoyo al proceso mineralurgico. II: criterios metalogeneticos discriminantes

    Energy Technology Data Exchange (ETDEWEB)

    Castroviejo, R.; Berrezueta, E.

    2009-07-01

    Ore microscopy can furnish very important information for geometallurgists, but today's needs for automation are difficult to meet with the optical microscope unless an adequate methodology is developed. Some limitations of the routine procedure, related to the risk of misidentification caused by the spectral similarity of some ores, call for complementary criteria. Defining ore-deposit typologies and the corresponding assemblages guides the choice of species and limits their number. Comparison of the reflectance values of the ores in each mineral association defined shows that their automated identification is possible in most common occurrences. Since the number of species actually to be considered is greatly limited, performance is increased. The system is not intended to substitute for the mineralogist, but to enhance his performance enormously, while offering industry an economical procedure to produce a wealth of information that would not be possible with traditional methods, such as the point counter. (Author) 33 refs.

  14. Automated analysis of three-dimensional stress echocardiography

    NARCIS (Netherlands)

    K.Y.E. Leung (Esther); M. van Stralen (Marijn); M.G. Danilouchkine (Mikhail); G. van Burken (Gerard); M.L. Geleijnse (Marcel); J.H.C. Reiber (Johan); N. de Jong (Nico); A.F.W. van der Steen (Ton); J.G. Bosch (Johan)

    2011-01-01

    textabstractReal-time three-dimensional (3D) ultrasound imaging has been proposed as an alternative for two-dimensional stress echocardiography for assessing myocardial dysfunction and underlying coronary artery disease. Analysis of 3D stress echocardiography is no simple task and requires considera

  15. Hybrid Segmentation of Vessels and Automated Flow Measures in In-Vivo Ultrasound Imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Martins, Bo; Hansen, Kristoffer Lindskov

    2016-01-01

    Vector Flow Imaging (VFI) has received an increasing attention in the scientific field of ultrasound, as it enables angle independent visualization of blood flow. VFI can be used in volume flow estimation, but a vessel segmentation is needed to make it fully automatic. A novel vessel segmentation...... procedure is crucial for wall-to-wall visualization, automation of adjustments, and quantification of flow in state-of-the-art ultrasound scanners. We propose and discuss a method for accurate vessel segmentation that fuses VFI data and B-mode for robustly detecting and delineating vessels. The proposed...

  16. Automated Image Segmentation And Characterization Technique For Effective Isolation And Representation Of Human Face

    Directory of Open Access Journals (Sweden)

    Rajesh Reddy N

    2014-01-01

    Full Text Available In areas such as defense and forensics, it is necessary to identify the faces of criminals from an already available database. An automated face recognition system involves face isolation, feature extraction and classification. A key challenge in face recognition is isolating the face effectively, as the image may be affected by illumination, posture and variation in skin color. Hence it is necessary to develop an effective algorithm that isolates the face from the image. In this paper, an advanced face isolation technique and a feature extraction technique are proposed.

  17. Image Analysis in CT Angiography

    NARCIS (Netherlands)

    Manniesing, R.

    2006-01-01

    In this thesis we develop and validate novel image processing techniques for the analysis of vascular structures in medical images. First a new type of filter is proposed which is capable of enhancing vascular structures while suppressing noise in the remainder of the image. This filter is based on

  18. Automated Bearing Fault Diagnosis Using 2D Analysis of Vibration Acceleration Signals under Variable Speed Conditions

    Directory of Open Access Journals (Sweden)

    Sheraz Ali Khan

    2016-01-01

    Full Text Available Traditional fault diagnosis methods of bearings detect characteristic defect frequencies in the envelope power spectrum of the vibration signal. These defect frequencies depend upon the inherently nonstationary shaft speed. Time-frequency and subband signal analysis of vibration signals has been used to deal with random variations in speed, whereas design variations require retraining a new instance of the classifier for each operating speed. This paper presents an automated approach for fault diagnosis in bearings based upon the 2D analysis of vibration acceleration signals under variable speed conditions. Images created from the vibration signals exhibit unique textures for each fault, which show minimal variation with shaft speed. Microtexture analysis of these images is used to generate distinctive fault signatures for each fault type, which can be used to detect those faults at different speeds. A k-nearest neighbor classifier trained using fault signatures generated for one operating speed is used to detect faults at all the other operating speeds. The proposed approach is tested on the bearing fault dataset of Case Western Reserve University, and the results are compared with those of a spectrum imaging-based approach.
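
The classification step described, a k-nearest-neighbor classifier trained on fault signatures from one speed and applied at others, can be sketched with a minimal 1-NN implementation (the feature vectors and labels below are synthetic stand-ins for the microtexture signatures):

```python
# Minimal 1-nearest-neighbour sketch of the classification step: fault
# signatures (feature vectors) from one operating speed classify
# signatures observed at other speeds. All data are synthetic.
import math

def nearest_neighbour(train, query):
    """train: list of (feature_vector, label); returns the label of the
    training vector closest to the query in Euclidean distance."""
    return min(train, key=lambda t: math.dist(t[0], query))[1]

train = [([0.9, 0.1], "inner-race fault"),
         ([0.1, 0.8], "outer-race fault"),
         ([0.5, 0.5], "healthy")]

print(nearest_neighbour(train, [0.85, 0.15]))  # inner-race fault
```

Because the textures vary little with shaft speed, signatures extracted at a new speed land near their fault class in feature space, which is what lets a classifier trained at one speed generalize.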

  19. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    Science.gov (United States)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
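
One of the call criteria described, backwall amplitude dropout with an adaptive call level, can be sketched as a threshold relative to the local median amplitude (the data and the dropout fraction below are illustrative, not the algorithm's actual settings):

```python
# Sketch of a backwall-amplitude-dropout check with an adaptive level, in
# the spirit of the ADA algorithm described: a position is called when its
# backwall amplitude falls below a fraction of the median level, so the
# criterion adapts to panel-to-panel variation in backwall signal.
import statistics

def dropout_calls(amplitudes, fraction=0.6):
    """Indices where amplitude drops below fraction * median amplitude."""
    level = fraction * statistics.median(amplitudes)
    return [i for i, a in enumerate(amplitudes) if a < level]

backwall = [1.0, 0.98, 1.02, 0.3, 0.97, 1.01]   # a dropout at index 3
print(dropout_calls(backwall))  # [3]
```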

  20. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools, and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate the associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate the ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk

  1. Reference image selection for difference imaging analysis

    CERN Document Server

    Huckvale, Leo; Sale, Stuart E

    2014-01-01

    Difference image analysis (DIA) is an effective technique for obtaining photometry in crowded fields, relative to a chosen reference image. As yet, however, optimal reference image selection is an unsolved problem. We examine how this selection depends on the combination of seeing, background and detector pixel size. Our tests use a combination of simulated data and quality indicators from DIA of well-sampled optical data and under-sampled near-infrared data from the OGLE and VVV surveys, respectively. We search for a figure-of-merit (FoM) which could be used to select reference images for each survey. While we do not find a universally applicable FoM, survey-specific measures indicate that the effect of spatial under-sampling may require a change in strategy from the standard DIA approach, even though seeing remains the primary criterion. We find that background is not an important criterion for reference selection, at least for the dynamic range in the images we test. For our analysis of VVV data in particu...
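
The abstract's conclusion that seeing remains the primary criterion suggests a simple selection rule; the sketch below (best seeing first, lowest background as a tie-breaker) is an illustrative stand-in, not the survey-specific figure-of-merit the authors search for:

```python
# Sketch of reference-image selection for DIA using seeing as the primary
# criterion and background as a tie-breaker. Frame metadata is synthetic;
# the actual surveys use their own quality indicators.

def pick_reference(images):
    """images: list of dicts with 'id', 'seeing' (arcsec) and 'background'
    (counts); returns the id of the frame with the best (lowest) seeing,
    breaking ties on the lower background."""
    return min(images, key=lambda im: (im["seeing"], im["background"]))["id"]

frames = [{"id": "f1", "seeing": 1.2, "background": 310.0},
          {"id": "f2", "seeing": 0.9, "background": 450.0},
          {"id": "f3", "seeing": 0.9, "background": 300.0}]
print(pick_reference(frames))  # f3
```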

  2. Digital Images Analysis

    OpenAIRE

    2012-01-01

    International audience; A specific field of image processing focuses on the evaluation of image quality and the assessment of image authenticity. A loss of image quality may be due to the various processes through which an image passes. In assessing the authenticity of an image, we detect forgeries, hidden messages, etc. In this work, we present an overview of these areas; they have in common the need to develop theories and techniques to detect changes in an image that are not detect...

  3. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    Science.gov (United States)

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  4. Automated Analysis of Vital Signs Identified Patients with Substantial Bleeding Prior to Hospital Arrival

    Science.gov (United States)

    2015-10-01

    culminating with the first and only deployment of an automated emergency care decision system on board active air ambulances: the APPRAISE system, a...hardware/software platform for automated, real-time analysis of vital-sign data. After developing the APPRAISE system using data from trauma patients

  5. 40 CFR 13.19 - Analysis of costs; automation; prevention of overpayments, delinquencies or defaults.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Analysis of costs; automation; prevention of overpayments, delinquencies or defaults. 13.19 Section 13.19 Protection of Environment...; automation; prevention of overpayments, delinquencies or defaults. (a) The Administrator may...

  6. RFI detection by automated feature extraction and statistical analysis

    Science.gov (United States)

    Winkel, B.; Kerp, J.; Stanko, S.

    2007-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high-speed storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength, RFI signals can be identified down to the 4σ_rms level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) the mean signal strength is comparable to the astronomical line emission of the Milky Way, (2) interferences are polarised, (3) electronic devices in the neighbourhood of the telescope contribute significantly to the RFI radiation. We also show that the radiometer equation is no longer fulfilled in the presence of RFI signals.
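
The final detection step, flagging channels whose residual after the baseline fit exceeds 4σ_rms, can be sketched on a single spectrum. Here the baseline is given, and the noise is estimated robustly via the median absolute deviation so the spike itself does not inflate it; both simplifications are mine, not the paper's two-dimensional time-frequency fit:

```python
# Sketch of sigma-threshold RFI flagging on the residuals of a baseline
# fit. The baseline is assumed already fitted; the noise level uses a
# MAD-based robust estimate so the RFI spike does not bias it. Toy data.
import statistics

def flag_rfi(spectrum, baseline, nsigma=4.0):
    """Indices of channels whose residual exceeds nsigma * robust rms."""
    residuals = [s - b for s, b in zip(spectrum, baseline)]
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    sigma = 1.4826 * mad  # MAD-to-sigma factor for Gaussian noise
    return [i for i, r in enumerate(residuals) if r - med > nsigma * sigma]

baseline = [10.0] * 8
spectrum = [10.1, 9.9, 10.05, 30.0, 10.0, 9.95, 10.1, 9.9]  # spike at index 3
print(flag_rfi(spectrum, baseline))  # [3]
```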

  8. Automated analysis for detecting beams in laser wakefield simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela M.; Rubel, Oliver; Prabhat, Mr.; Weber, Gunther H.; Bethel, E. Wes; Aragon, Cecilia R.; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Hamann, Bernd; Messmer, Peter; Hagen, Hans

    2008-07-03

    Laser wakefield particle accelerators have shown the potential to generate electric fields thousands of times higher than those of conventional accelerators. The resulting extremely short particle acceleration distance could yield a potential new compact source of energetic electrons and radiation, with wide applications from medicine to physics. Physicists investigate laser-plasma internal dynamics by running particle-in-cell simulations; however, this generates a large dataset that requires time-consuming, manual inspection by experts in order to detect key features such as beam formation. This paper describes a framework to automate the data analysis and classification of simulation data. First, we propose a new method to identify locations with high density of particles in the space-time domain, based on maximum extremum point detection on the particle distribution. We analyze high-density electron regions using a lifetime diagram, organizing and pruning the maximum extrema as nodes in a minimum spanning tree. Second, we partition the multivariate data using fuzzy clustering to detect time steps in an experiment that may contain a high-quality electron beam. Finally, we combine results from fuzzy clustering and bunch lifetime analysis to estimate spatially confined beams. We demonstrate our algorithms successfully on four different simulation datasets.

  9. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI.
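
The reproducibility metric used, the coefficient of variation, is simply the sample standard deviation over the mean, expressed in percent. A minimal sketch with illustrative observer values (not data from the study):

```python
# Sketch of the reproducibility metric: the coefficient of variation (COV)
# of an imaging feature measured by two independent observers, in percent.
# The feature values are illustrative only.
import statistics

def cov_percent(measurements):
    """Coefficient of variation = sample std / mean * 100."""
    return statistics.stdev(measurements) / statistics.mean(measurements) * 100.0

enhancing_fraction = [0.40, 0.42]   # hypothetical values from two observers
print(round(cov_percent(enhancing_fraction), 1))  # 3.4
```

A lower COV, as reported for the deformable model, means the two observers' segmentations yield nearly the same feature value.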

  10. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    Science.gov (United States)

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

    Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under MATLAB and open-source Python. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing amplitudes and kinetics of events.
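
The detection idea, noise filtering followed by thresholding against a baseline, can be sketched on a single pixel's fluorescence trace (synthetic trace and parameters; the published routine additionally performs spatial filtering, sub-pixel localization and event review):

```python
# Sketch of local-event detection on one pixel's fluorescence trace:
# temporal smoothing to suppress noise, then thresholding relative to a
# baseline to find frames containing a Ca2+ event. All values synthetic.

def moving_average(trace, window=3):
    """Simple temporal noise filter (boxcar smoothing with edge shrink)."""
    half = window // 2
    return [sum(trace[max(0, i - half):i + half + 1]) /
            len(trace[max(0, i - half):i + half + 1]) for i in range(len(trace))]

def detect_events(trace, baseline=1.0, threshold=0.6):
    """Indices where the smoothed trace exceeds baseline + threshold."""
    smoothed = moving_average(trace)
    return [i for i, v in enumerate(smoothed) if v > baseline + threshold]

trace = [1.0, 1.1, 0.9, 1.0, 2.6, 2.8, 2.5, 1.1, 1.0]  # a puff around frames 4-6
print(detect_events(trace))  # [4, 5, 6]
```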

  11. Automated Mineral Analysis to Characterize Metalliferous Mine Waste

    Science.gov (United States)

    Hensler, Ana-Sophie; Lottermoser, Bernd G.; Vossen, Peter; Langenberg, Lukas C.

    2016-10-01

    The objective of this study was to investigate the applicability of automated QEMSCAN® mineral analysis, combined with bulk geochemical analysis, for evaluating the environmental risk of non-acid-producing mine waste at the historic Albertsgrube Pb-Zn mine site, Hastenrath, North Rhine-Westphalia, Germany. Geochemical analyses revealed elevated average abundances of As, Cd, Cu, Mn, Pb, Sb and Zn, and near-neutral to slightly alkaline paste pH values. Mineralogical analyses using QEMSCAN® revealed diverse mono- and polymineralic particles across all samples, with grain sizes ranging from a few μm up to 2000 μm. Calcite and dolomite (up to 78 %), smithsonite (up to 24 %) and Ca sulphate (up to 11.5 %) are present mainly as coarse-grained particles. By contrast, significant amounts of quartz, muscovite/illite, sphalerite (up to 10.8 %), galena (up to 1 %), pyrite (up to 3.4 %) and cerussite/anglesite (up to 4.3 %) are present as fine-grained (<500 μm) particles. QEMSCAN® analysis also identified disseminated sauconite, coronadite/chalcophanite, chalcopyrite, jarosite, apatite, rutile, K-feldspar, biotite, Fe (hydr)oxides/CO3 and unknown Zn-Pb-(Fe) and Zn-Pb-Ca-(Fe-Ti) phases. Many of the metal-bearing sulphide grains occur as separate particles with exposed surface areas and may thus be a matter of environmental concern, because such mineralogical hosts will continue to release metals and metalloids (As, Cd, Sb, Zn) into ground and surface waters at near-neutral pH. QEMSCAN® mineral analysis allows the acquisition of fully quantitative data on the mineralogical composition, textural characteristics and grain sizes of mine waste material, and permits the recognition as “high-risk” material of mine waste that traditional geochemical tests would otherwise have classified as benign.
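    The particle-level bookkeeping such an analysis rests on (summing modal abundances per mineral, and splitting particles at a grain-size cutoff into the coarse and fine populations discussed above) can be illustrated with a small sketch; the mineral list, percentages and grain sizes below are invented for illustration, not the study's data.

    ```python
    FINE_LIMIT_UM = 500  # fine/coarse grain-size cutoff used in the abstract

    # Hypothetical particle list: (mineral, area fraction %, grain size in um)
    particles = [
        ("calcite",    40.0, 1200),
        ("dolomite",   20.0,  900),
        ("sphalerite",  8.0,  150),
        ("galena",      1.0,   80),
        ("pyrite",      3.0,  200),
        ("quartz",     28.0,  300),
    ]

    def modal_abundance(particles):
        """Sum area fractions per mineral (QEMSCAN-style modal mineralogy)."""
        totals = {}
        for mineral, pct, _size in particles:
            totals[mineral] = totals.get(mineral, 0.0) + pct
        return totals

    def fine_fraction(particles, limit=FINE_LIMIT_UM):
        """Percentage of total area carried by fine-grained (<limit um)
        particles -- a proxy for the reactive, environmentally relevant
        material highlighted in the study."""
        fine = sum(pct for _m, pct, size in particles if size < limit)
        total = sum(pct for _m, pct, _s in particles)
        return 100.0 * fine / total
    ```

    Classifying waste as "high-risk" then reduces to checking whether metal-bearing sulphides dominate the fine, exposed-surface fraction rather than the bulk assay alone.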

  12. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    of the book is to present the fascinating world of medical