WorldWideScience

Sample records for automated image analysis

  1. Autoradiography and automated image analysis

    International Nuclear Information System (INIS)

    Vardy, P.H.; Willard, A.G.

    1982-01-01

Limitations of automated image analysis and solutions to the problems encountered are discussed. With transmitted light, unstained plastic sections with planar profiles should be used. Stains add to the signal, so that the television camera registers grains as falsely larger areas of low light intensity. Unfocused grains in paraffin sections will not be seen by image analysers owing to changes in darkness and size. With incident illumination, the use of crossed polars, oil objectives and an oil-filled light trap continuous with the base of the slide will reduce glare. However, this procedure attenuates the light reflected by silver grains so strongly that detection may be impossible. Autoradiographs should then be photographed and the negative images of the silver grains on film analysed automatically using transmitted light.

  2. Automated Aesthetic Analysis of Photographic Images.

    Science.gov (United States)

    Aydın, Tunç Ozan; Smolic, Aljoscha; Gross, Markus

    2015-01-01

We present a perceptually calibrated system for automatic aesthetic evaluation of photographic images. Our work builds upon the concepts of no-reference image quality assessment, with the main difference being our focus on rating image aesthetic attributes rather than detecting image distortions. In contrast to recent attempts at highly subjective aesthetic judgment problems such as binary aesthetic classification and the prediction of an image's overall aesthetics rating, our method aims at providing a reliable objective basis for comparing the aesthetic properties of different photographs. To that end, our system computes perceptually calibrated ratings for a set of fundamental and meaningful aesthetic attributes that together form an "aesthetic signature" of an image. We show that aesthetic signatures can not only be used to improve upon the current state of the art in automatic aesthetic judgment, but also enable interesting new photo editing applications such as automated aesthetic analysis, HDR tone mapping evaluation, and aesthetic feedback during multi-scale contrast manipulation.

  3. Automated Image Analysis Corrosion Working Group Update: February 1, 2018

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-01

These are slides for the automated image analysis corrosion working group update. The overall goals were: to automate the detection and quantification of features in images (faster, more accurate); to determine how to do this (obtain data, analyze data); and to focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).

  4. Automated image analysis of the pathological lung in CT

    NARCIS (Netherlands)

    Sluimer, Ingrid Christine

    2005-01-01

    The general objective of the thesis is automation of the analysis of the pathological lung from CT images. Specifically, we aim for automated detection and classification of abnormalities in the lung parenchyma. We first provide a review of computer analysis techniques applied to CT of the

  5. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  6. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  7. Automated Acquisition and Analysis of Digital Radiographic Images

    International Nuclear Information System (INIS)

    Poland, R.

    1999-01-01

Engineers at the Savannah River Technology Center have designed, built, and installed a fully automated small field-of-view, lens-coupled, digital radiography imaging system. The system is installed in one of the Savannah River Site's production facilities to be used for the evaluation of production components. Custom software routines developed for the system automatically acquire, enhance, and diagnostically evaluate critical geometric features of various components that have been captured radiographically. Resolution of the digital radiograms and accuracy of the acquired measurements approach 0.001 inches. To date, there has been zero deviation in measurement repeatability. The automated image acquisition methodology is discussed, unique enhancement algorithms are explained, and the automated routines for measuring the critical component features are presented. An additional feature discussed is the independent nature of the modular software components, which allows images to be automatically acquired, processed, and evaluated by the computer in the background while the operator reviews other images on the monitor. System components were also key to achieving the required image resolution. Factors such as scintillator selection, x-ray source energy, optical components and layout, as well as geometric unsharpness issues are considered in the paper. Finally, the paper examines the numerous quality improvement factors and cost saving advantages that will be realized at the Savannah River Site due to the implementation of the Automated Pinch Weld Analysis System (APWAS).

  8. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions.

  9. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. by filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system oCelloScope. Results: Three E. coli strains displaying different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm that quantifies bacterial length and filamentation. A total of 12 beta-lactam antibiotics or beta-lactam/beta-lactamase inhibitor combinations were analyzed ...

  10. Automated rice leaf disease detection using color image analysis

    Science.gov (United States)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
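
The two-stage pipeline described above (histogram intersection against healthy-leaf images to obtain the outlier region, then K-means to group related regions) can be sketched roughly as follows. All function names, the bin count, and the cutoff are illustrative assumptions, not details from the paper:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Normalized histogram intersection: 1.0 means identical distributions."""
    return np.minimum(h1, h2).sum() / h1.sum()

def outlier_mask(test_px, healthy_hist, bins=8, cutoff=0.0):
    """Mark pixels whose quantized RGB bin is (nearly) absent in healthy leaves.

    test_px: Nx3 integer RGB pixels; healthy_hist: flat histogram of bins**3 counts.
    """
    q = (test_px // (256 // bins)).astype(int)           # quantize each channel
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    return healthy_hist[idx] <= cutoff                   # True where color unseen in healthy leaves

def kmeans(px, k=2, iters=10, seed=0):
    """Plain Lloyd's K-means on Nx3 pixel data, as a stand-in for the
    threshold-based clustering step."""
    rng = np.random.default_rng(seed)
    centers = px[rng.choice(len(px), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((px[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = px[labels == j].mean(0)
    return labels, centers
```

In this reading, lesion pixels fall into color bins never seen on healthy leaves, and the clusters found among those outlier pixels are what the system passes on for disease identification.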

  11. AUTOMATED DATA ANALYSIS FOR CONSECUTIVE IMAGES FROM DROPLET COMBUSTION EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Christopher Lee Dembia

    2012-09-01

A simple automated image analysis algorithm has been developed that processes consecutive frames from high-speed, high-resolution digital images of burning fuel droplets. The droplets burn under conditions that promote spherical symmetry. The algorithm performs the tasks of edge detection of the droplet's boundary using a grayscale intensity threshold, and of fitting either a circle or an ellipse to the droplet's boundary. The results are compared to manual measurements of droplet diameters made with commercial software. Results show that it is possible to automate data analysis for consecutive droplet burning images even in the presence of a significant amount of noise from soot formation. An adaptive grayscale intensity threshold provides the ability to extract droplet diameters over the wide range of noise encountered. In instances where soot blocks portions of the droplet, the algorithm still provides accurate measurements if a circle fit is used instead of an ellipse fit, as an ellipse can be too accommodating to the disturbance.
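
The two steps named above (threshold-based boundary extraction and circle fitting) can be sketched as follows. The Kasa algebraic least-squares fit is one standard choice for fitting a circle to boundary points, not necessarily the paper's, and all names are illustrative:

```python
import numpy as np

def boundary_points(img, threshold):
    """Dark pixels (below `threshold`) that touch at least one brighter neighbor."""
    dark = img < threshold
    pad = np.pad(dark, 1, constant_values=False)
    # a pixel is interior if all four 4-neighbors are also dark
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:])
    edge = dark & ~interior
    ys, xs = np.nonzero(edge)
    return xs.astype(float), ys.astype(float)

def fit_circle(xs, ys):
    """Kasa least-squares circle fit: x^2 + y^2 = 2ax + 2by + c, r^2 = c + a^2 + b^2."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    (a, bb, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return a, bb, np.sqrt(c + a * a + bb * bb)
```

The droplet diameter is then simply twice the fitted radius; because the fit uses all boundary points at once, a partial occlusion by soot perturbs it less than an ellipse fit would be perturbed, which is consistent with the abstract's observation.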

  12. Crowdsourcing and Automated Retinal Image Analysis for Diabetic Retinopathy.

    Science.gov (United States)

    Mudie, Lucy I; Wang, Xueyang; Friedman, David S; Brady, Christopher J

    2017-09-23

    As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.

  13. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits, whose fairly narrow grain size range is a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis with the Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles were automatically recorded in this study from 15 loess and paleosoil samples from the captured high-resolution images. Several size parameters (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of the optical properties of the material. Intensity values are dependent on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments.

  14. Automated regional behavioral analysis for human brain images.

    Science.gov (United States)

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten "major representative" functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions.
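
The fraction-based test described above can be written out as a one-sample proportion z-score under the usual normal approximation to the binomial; this is a plausible reading of the abstract, not the authors' exact statistic:

```python
import math

def behavioral_z(n_in_roi, n_total, expected_fraction):
    """z-score comparing the observed fraction of activation foci inside an ROI
    with the fraction expected if foci were uniformly distributed."""
    p_obs = n_in_roi / n_total
    se = math.sqrt(expected_fraction * (1 - expected_fraction) / n_total)
    return (p_obs - expected_fraction) / se
```

For example, 60 of 500 foci landing in an ROI that covers 5% of the search volume gives a z-score well above the 3.0 cut-off the study uses to declare a statistically significant behavioral association.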

  15. Automated Image Analysis of Offshore Infrastructure Marine Biofouling

    Directory of Open Access Journals (Sweden)

    Kate Gormley

    2018-01-01

In the UK, some of the oldest oil and gas installations have been in the water for over 40 years and have considerable colonisation by marine organisms, which may lead to both industry challenges and/or potential biodiversity benefits (e.g., artificial reefs). The project objective was to test the use of an automated image analysis software (CoralNet) on images of marine biofouling from offshore platforms on the UK continental shelf, with the aims of (i) training the software to identify the main marine biofouling organisms on UK platforms; (ii) testing the software performance on 3 platforms under 3 different analysis criteria (methods A-C); (iii) calculating the percentage cover of marine biofouling organisms; and (iv) providing recommendations to industry. Following software training with 857 images, and testing of three platforms, results showed that the diversity of the three platforms ranged from low (in the central North Sea) to moderate (in the northern North Sea). The two central North Sea platforms were dominated by the plumose anemone Metridium dianthus, and the northern North Sea platform showed less obvious species domination. Three different analysis criteria were created, varying in the method of point selection, the number of points assessed, and the confidence level threshold (CT): (method A) random selection of 20 points with a CT of 80%; (method B) stratified random selection of 50 points with a CT of 90%; and (method C) a grid approach of 100 points with a CT of 90%. Across the three platforms, the results showed no significant differences for the majority of species and comparison pairs. No significant difference (across all species) was noted between confirmed annotation methods (A, B and C). The software was considered to perform well for the classification of the main fouling species in the North Sea. Overall, the study showed that the use of automated image analysis software may enable a more efficient and consistent
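
The percentage-cover calculation behind methods A-C reduces to counting confirmed point annotations per label, where a point is "confirmed" only if the classifier's confidence meets the CT threshold. The data layout below is an illustrative assumption (CoralNet's actual export format differs):

```python
def percent_cover(points, confidence_threshold):
    """points: list of (label, confidence) for sampled points in one image.

    Returns {label: percentage of confirmed points}, considering only points
    whose confidence meets the threshold (e.g. 0.8 for method A, 0.9 for B/C).
    """
    confirmed = [lbl for lbl, conf in points if conf >= confidence_threshold]
    if not confirmed:
        return {}
    return {lbl: 100.0 * confirmed.count(lbl) / len(confirmed)
            for lbl in set(confirmed)}
```

Raising the CT discards low-confidence points, so cover percentages are computed over fewer but more reliable annotations, which is the trade-off the three methods explore.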

  16. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818

  17. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  18. Automated image analysis of microstructure changes in metal alloys

    Science.gov (United States)

    Hoque, Mohammed E.; Ford, Ralph M.; Roth, John T.

    2005-02-01

    The ability to identify and quantify changes in the microstructure of metal alloys is valuable in metal cutting and shaping applications. For example, certain metals, after being cryogenically and electrically treated, have shown large increases in their tool life when used in manufacturing cutting and shaping processes. However, the mechanisms of microstructure changes in alloys under various treatments, which cause them to behave differently, are not yet fully understood. The changes are currently evaluated in a semi-quantitative manner by visual inspection of images of the microstructure. This research applies pattern recognition technology to quantitatively measure the changes in microstructure and to validate the initial assertion of increased tool life under certain treatments. Heterogeneous images of aluminum and tungsten carbide of various categories were analyzed using a process including background correction, adaptive thresholding, edge detection and other algorithms for automated analysis of microstructures. The algorithms are robust across a variety of operating conditions. This research not only facilitates better understanding of the effects of electric and cryogenic treatment of these materials, but also their impact on tooling and metal-cutting processes.
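
Of the stages listed above (background correction, adaptive thresholding, edge detection), adaptive thresholding is the easiest to make concrete. Below is a generic local-mean version using an integral image for O(1) window sums; the window size and offset are illustrative parameters, not values from the paper:

```python
import numpy as np

def adaptive_threshold(img, window=5, offset=0.0):
    """Binarize: a pixel is foreground if it exceeds its local mean minus offset."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # integral image: ii[y, x] = sum of padded[:y, :x]
    ii = np.cumsum(np.cumsum(padded, 0), 1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    local_sum = (ii[window:window + h, window:window + w]
                 - ii[:h, window:window + w]
                 - ii[window:window + h, :w]
                 + ii[:h, :w])
    local_mean = local_sum / (window * window)
    return img > local_mean - offset
```

Unlike a single global threshold, this tolerates the uneven illumination common in micrographs, which is presumably why the pipeline pairs it with background correction.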

  19. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi), i = 1, 2, etc., in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral components.

  20. Automated three-dimensional analysis of particle measurements using an optical profilometer and image analysis software.

    Science.gov (United States)

    Bullman, V

    2003-07-01

The automated collection of topographic images from an optical profilometer, coupled with existing image analysis software, offers the unique ability to quantify three-dimensional particle morphology. Optional software available with most optical profilers permits automated collection of adjacent topographic images of particles dispersed onto a suitable substrate. Particles are recognized in the image as sets of continuous pixels with grey-level values above the grey level assigned to the substrate, whereas particle height or thickness is represented in the numerical differences between these grey levels. These images are loaded into remote image analysis software where macros automate image processing and then distinguish particles for feature analysis, including standard two-dimensional measurements (e.g. projected area, length, width, aspect ratios) and three-dimensional measurements (e.g. maximum height, mean height). Feature measurements from each calibrated image are automatically added to cumulative databases and exported to a commercial spreadsheet or statistical program for further data processing and presentation. An example is given that demonstrates the superiority of quantitative three-dimensional measurements by optical profilometry and image analysis over conventional two-dimensional measurements for the characterization of pharmaceutical powders with plate-like particles.
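
The particle-recognition rule described above (continuous pixels with grey levels above the substrate level, height encoded in the grey-level difference) can be sketched with a minimal 4-connected labelling. This stands in for the commercial macro-driven software; all names are mine:

```python
import numpy as np

def measure_particles(img, substrate_level):
    """Return per-particle dicts with projected area, max height and mean height.

    img: 2D array of calibrated grey levels; substrate_level: grey level
    assigned to the substrate, so height = grey level - substrate_level.
    """
    above = img > substrate_level
    labels = np.zeros(img.shape, dtype=int)
    features, current = [], 0
    for seed in zip(*np.nonzero(above)):
        if labels[seed]:
            continue                        # pixel already belongs to a particle
        current += 1
        stack, pixels = [seed], []          # iterative 4-connected flood fill
        while stack:
            y, x = stack.pop()
            if (0 <= y < img.shape[0] and 0 <= x < img.shape[1]
                    and above[y, x] and not labels[y, x]):
                labels[y, x] = current
                pixels.append((y, x))
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        heights = np.array([img[p] - substrate_level for p in pixels], float)
        features.append({"area_px": len(pixels),
                         "max_height": heights.max(),
                         "mean_height": heights.mean()})
    return features
```

Two-dimensional measures (area, and with a little more work length/width) come from the pixel set, while the third dimension comes for free from the grey levels, which is the advantage the abstract emphasizes.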

  21. An automated system for whole microscopic image acquisition and analysis.

    Science.gov (United States)

    Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús

    2014-09-01

The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochromatic camera alongside the default color camera, with LED transmitted (RGB) illumination. Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to digitize correctly and form large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus. It has been tested on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented. © 2014 Wiley Periodicals, Inc.
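
The abstract does not give its three quality metrics explicitly; common stand-ins for focus/sharpness and contrast are the variance of a Laplacian and RMS contrast, sketched here as assumptions rather than the authors' formulas:

```python
import numpy as np

def rms_contrast(img):
    """Root-mean-square contrast: grey-level standard deviation over the mean."""
    img = img.astype(float)
    return img.std() / max(img.mean(), 1e-12)

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian; higher values indicate a sharper,
    better-focused image. A common autofocus score in microscopy."""
    img = img.astype(float)
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```

Scores like these let a WSI system pick the best focal plane per tile and flag badly digitized tiles for re-acquisition.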

  22. Automated X-ray image analysis for cargo security: Critical review and future promise.

    Science.gov (United States)

    Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.

  3. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have grown in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of average fiber diameter is crucial for quality assessment. Average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task that is also sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
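
The abstract does not spell out its isolation-free algorithm. One common way to measure fiber widths without isolating individual fibers is via the Euclidean distance transform: along a fiber's centreline, the distance to the nearest background pixel is half the local width. The sketch below approximates centrelines as local maxima of the distance map; this is a hedged illustration of the general idea, not the paper's method, and the estimate carries roughly a one-pixel bias.

```python
import numpy as np
from scipy import ndimage

def estimate_diameters(binary, px_per_unit=1.0):
    """Per-pixel fiber diameter estimates from a binary SEM mask,
    without isolating fibers: distance transform + ridge detection."""
    dist = ndimage.distance_transform_edt(binary)
    # Centreline approximation: foreground pixels that are local maxima
    # of the distance map within a 3x3 neighbourhood.
    ridge = (dist == ndimage.maximum_filter(dist, size=3)) & binary
    return 2.0 * dist[ridge] / px_per_unit

# Synthetic "fiber": a horizontal stripe 7 pixels thick.
img = np.zeros((32, 64), dtype=bool)
img[10:17, :] = True
diams = estimate_diameters(img)
```

For the 7-pixel stripe the ridge sits on the central row, where the distance to background is 4 pixels, so the estimate is 8 (the one-pixel bias mentioned above).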

  4. Quantization of polyphenolic compounds in histological sections of grape berries by automated color image analysis

    Science.gov (United States)

    Clement, Alain; Vigouroux, Bertrand

    2003-04-01

    We present new results in applied color image analysis that demonstrate the significant influence of soil on the localization and appearance of polyphenols in grapes. These results have been obtained with a new unsupervised classification algorithm based on hierarchical analysis of color histograms. The process is automated thanks to a software platform we developed specifically for color image analysis and its applications.

  5. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  6. Automated striatal uptake analysis of 18F-FDOPA PET images applied to Parkinson's disease patients

    International Nuclear Information System (INIS)

    Chang Icheng; Lue Kunhan; Hsieh Hungjen; Liu Shuhsin; Kao, Chinhao K.

    2011-01-01

    6-[18F]Fluoro-L-DOPA (FDOPA) is a radiopharmaceutical valuable for assessing presynaptic dopaminergic function when used with positron emission tomography (PET). More specifically, the striatal-to-occipital ratio (SOR) of FDOPA uptake images has been extensively used as a quantitative parameter in these PET studies. Our aim was to develop an easy, automated method capable of performing objective analysis of SOR in FDOPA PET images of Parkinson's disease (PD) patients. Brain images from FDOPA PET studies of 21 patients with PD and 6 healthy subjects were included in our automated striatal analyses. Images of each individual were spatially normalized into an FDOPA template. Subsequently, the image slice with the highest level of basal ganglia activity was chosen among the series of normalized images. Also, the immediately preceding and following slices of the chosen image were then selected. Finally, the summation of these three images was used to quantify and calculate the SOR values. The results obtained by automated analysis were compared with manual analysis by a trained and experienced image processing technologist. The SOR values obtained from the automated analysis showed good agreement and high correlation with manual analysis. The differences in caudate, putamen, and striatum were -0.023, -0.029, and -0.025, respectively; the correlation coefficients were 0.961, 0.957, and 0.972, respectively. We have successfully developed a method for automated striatal uptake analysis of FDOPA PET images. There was no significant difference between the SOR values obtained from this method and manual analysis. Moreover, it is an unbiased, time-saving and cost-effective program that is easy to implement on a personal computer. (author)
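
The slice-selection and ratio steps described above can be sketched in a few lines. This is a minimal illustration under assumptions the record does not state: template-space ROI masks are given as 2-D boolean arrays, and SOR is taken as the plain ratio of mean striatal to mean occipital counts (some studies instead use (S−O)/O).

```python
import numpy as np

def striatal_occipital_ratio(volume, striatal_mask, occipital_mask):
    """SOR from a spatially normalized FDOPA volume of shape (z, y, x):
    pick the slice with the highest striatal activity, sum it with its
    two neighbours, then ratio the ROI means."""
    activity = [volume[z][striatal_mask].sum() for z in range(volume.shape[0])]
    z = int(np.argmax(activity))
    lo, hi = max(z - 1, 0), min(z + 1, volume.shape[0] - 1)
    summed = volume[lo:hi + 1].sum(axis=0)          # summation of three slices
    return summed[striatal_mask].mean() / summed[occipital_mask].mean()

# Toy volume: uniform background 1, elevated striatal uptake in slices 1-3.
vol = np.ones((5, 8, 8))
striatal = np.zeros((8, 8), bool); striatal[2:4, 2:4] = True
occipital = np.zeros((8, 8), bool); occipital[6:8, 2:6] = True
for z, v in zip((1, 2, 3), (3.0, 4.0, 3.0)):
    vol[z][striatal] = v
sor = striatal_occipital_ratio(vol, striatal, occipital)
```

Here the peak slice is z = 2, the three-slice sum gives striatal mean 10 against occipital mean 3.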

  7. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah, Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal images, and a segmentation technique based on the K-means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented organs. The results show that the fractal method based on 2D Fourier analysis can distinguish between normal and abnormal breasts, and that segmentation with the K-means clustering algorithm can delineate the boundaries of normal and abnormal tissue, so the area of abnormal tissue can be determined.

  8. Automated counting of bacterial colonies by image analysis.

    Science.gov (United States)

    Chiang, Pei-Ju; Tseng, Min-Jen; He, Zong-Sian; Li, Chia-Hsun

    2015-01-01

    Research on microorganisms often involves culturing as a means to determine the survival and proliferation of bacteria. The number of colonies in a culture is counted to calculate the concentration of bacteria in the original broth; however, manual counting can be time-consuming and imprecise. To save time and prevent inconsistencies, this study proposes a fully automated counting system using image processing methods. To accurately estimate the number of viable bacteria in a known volume of suspension, colonies distributed over the whole surface of a plate, including the central and rim areas of the Petri dish, are taken into account. The performance of the proposed system is compared with verified manual counts, as well as with two freely available counting software programs. Comparisons show that the proposed system is an effective method with excellent accuracy, achieving a mean absolute percentage error of 3.37%. A user-friendly graphical user interface has also been developed and is freely available for download, providing researchers in biomedicine with a more convenient instrument for the enumeration of bacterial colonies. Copyright © 2014 Elsevier B.V. All rights reserved.
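
The counting core of such a system, and the accuracy metric quoted (mean absolute percentage error, MAPE), can be sketched as below. This is a deliberately simplified stand-in: the published system additionally handles the dish rim and touching colonies, and the threshold and minimum-size values here are illustrative.

```python
import numpy as np
from scipy import ndimage

def count_colonies(gray, threshold=128, min_pixels=4):
    """Toy colony counter: binarize, label connected components,
    and discard specks smaller than min_pixels."""
    labels, n = ndimage.label(gray > threshold)
    sizes = np.bincount(labels.ravel())[1:]      # component sizes, label order
    return int((sizes >= min_pixels).sum())

def mape(estimates, truths):
    """Mean absolute percentage error between automated and manual counts."""
    e, t = np.asarray(estimates, float), np.asarray(truths, float)
    return float(np.mean(np.abs(e - t) / t) * 100.0)

# Toy plate: two 3x3 colonies plus a 1-pixel speck that is filtered out.
img = np.zeros((20, 20))
img[2:5, 2:5] = 200
img[10:13, 10:13] = 200
img[18, 18] = 200
n = count_colonies(img)
err = mape([97, 103], [100, 100])
```

On the toy plate the counter reports 2 colonies, and counts of 97 and 103 against a truth of 100 give a MAPE of 3%.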

  9. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Full Text Available Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems.

  10. Fluorescence In Situ Hybridization (FISH Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to more accurately and reliably detect and diagnose cancers and genetic disorders. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, developing automated FISH image scanning systems and computer-aided detection (CAD) schemes has been attracting research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, comprising stacks of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to detect analyzable interphase cells and map the FISH-probed signals recorded in the multiple imaging slices into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between the normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently visually detected by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in four testing samples. The study demonstrated the feasibility of automated FISH signal analysis by applying a CAD scheme to automatically generated 2-D projection images.
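
Collapsing a multi-slice scan into a single 2-D projection image, as described above, is commonly done by maximum-intensity projection: each output pixel keeps the brightest value found at that position across the z-stack. The one-liner below is a generic illustration of that step, not the paper's exact mapping (which also weighs a sharpness index).

```python
import numpy as np

def project_stack(stack):
    """Maximum-intensity projection of a (z, y, x) image stack into 2-D:
    each pixel takes its brightest value over all slices."""
    return np.max(np.asarray(stack), axis=0)

# FISH spots in focus at different depths survive into one 2-D image.
stack = np.zeros((3, 2, 2))
stack[0, 0, 0] = 5.0   # spot sharpest in slice 0
stack[2, 1, 1] = 7.0   # spot sharpest in slice 2
proj = project_stack(stack)
```

Both spots appear in the single projected image even though no individual slice contains both in focus.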

  11. [Clinical application of automated digital image analysis for morphology review of peripheral blood leukocyte].

    Science.gov (United States)

    Xing, Ying; Yan, Xiaohua; Pu, Chengwei; Shang, Ke; Dong, Ning; Wang, Run; Wang, Jianzhong

    2016-03-01

    To explore the clinical application of automated digital image analysis in leukocyte morphology examination when the review criteria of a hematology analyzer are triggered. The reference range of leukocyte differentiation by automated digital image analysis was established by analyzing 304 healthy blood samples from Peking University First Hospital. Six hundred and ninety-seven blood samples from Peking University First Hospital were randomly collected from November 2013 to April 2014; complete blood cells were counted on a hematology analyzer, and blood smears were made and stained at the same time. Blood smears were examined by an automated digital image analyzer and the results were checked (reclassified) by a staff member with extensive morphology experience. The same smears were examined manually by microscope. The results of manual microscopic differentiation were used as the "gold standard", and the diagnostic efficiency of automated digital image analysis for abnormal specimens was calculated, including sensitivity, specificity and accuracy. The difference in abnormal leukocytes detected by the two methods was analyzed in 30 samples of hematological and infectious diseases. The specificity of identifying white blood cell abnormalities by automated digital image analysis was more than 90% except for monocytes. Sensitivity for neutrophil toxic abnormalities (including Döhle bodies, toxic granulation and vacuolization) was 100%; sensitivity for blast cells, immature granulocytes and atypical lymphocytes was 91.7%, 60% to 81.5% and 61.5%, respectively. Sensitivity of the leukocyte differential count was 91.8% for neutrophils, 88.5% for lymphocytes, 69.1% for monocytes, 78.9% for eosinophils and 36.3% for basophils. The positive rate of recognizing abnormal cells (blasts, immature granulocytes and atypical lymphocytes) by the manual microscopic method was 46.7%, 53.3% and 10%, respectively. The positive rate of automated digital image analysis was 43.3%, 60% and 10%, respectively. There was no statistically significant difference between the two methods.

  12. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge amounts of image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 confocal images depicting analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancer.
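
The top-hat transform named above subtracts an image's morphological opening from the image itself, which removes slowly varying background so that small bright spots stand out. The sketch below shows that step with illustrative parameters (structuring-element size and threshold are assumptions, not values from the paper).

```python
import numpy as np
from scipy import ndimage

def detect_spots(img, size=5, min_height=10):
    """Count small bright spots: white top-hat (image minus its opening
    with a size x size structuring element) flattens the background,
    then thresholded residuals are labelled as spots."""
    tophat = ndimage.white_tophat(img, size=size)
    _, n = ndimage.label(tophat > min_height)
    return n

# Toy field: a smooth intensity ramp plus two single-pixel bright spots.
img = np.tile(np.arange(32), (32, 1)).astype(float)
img[8, 8] += 100
img[20, 20] += 100
n_spots = detect_spots(img)
```

The ramp survives the opening almost unchanged (top-hat near zero), while the narrow spots are erased by the opening and therefore dominate the residual, so exactly two spots are counted.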

  13. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use remains limited by difficulties in automating image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms. It can also be used in other applications. 3. The method consists of comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.

  14. Automated Analysis of Microscopic Images of Isolated Pancreatic Islets

    Czech Academy of Sciences Publication Activity Database

    Habart, D.; Švihlík, J.; Schier, Jan; Cahová, M.; Girman, P.; Zacharovová, K.; Berková, Z.; Kříž, J.; Fabryová, E.; Kosinová, L.; Papáčková, Z.; Kybic, J.; Saudek, F.

    2016-01-01

    Roč. 25, č. 12 (2016), s. 2145-2156 ISSN 0963-6897 Grant - others:GA ČR(CZ) GA14-10440S Institutional support: RVO:67985556 Keywords : enumeration of islets * image processing * image segmentation * islet transplantation * machine-learning * quality control Subject RIV: IN - Informatics, Computer Science Impact factor: 3.006, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/schier-0465945.pdf

  15. Automation of chromosomes analysis. Automatic system for image processing

    International Nuclear Information System (INIS)

    Le Go, R.; Cosnac, B. de; Spiwack, A.

    1975-01-01

    The A.S.T.I. is an automatic system relating to the fast conversational processing of all kinds of images (cells, chromosomes) converted to a numerical data set (120000 points, 16 grey levels stored in a MOS memory) through a fast D.O. analyzer. The system performs automatically the isolation of any individual image, the area and weighted area of which are computed. These results are directly displayed on the command panel and can be transferred to a mini-computer for further computations. A bright spot allows parts of an image to be picked out and the results to be displayed. This study is particularly directed towards automatic karyo-typing [fr

  16. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  17. Unsupervised fully automated inline analysis of global left ventricular function in CINE MR imaging.

    Science.gov (United States)

    Theisen, Daniel; Sandner, Torleif A; Bauner, Kerstin; Hayes, Carmel; Rist, Carsten; Reiser, Maximilian F; Wintersperger, Bernd J

    2009-08-01

    To implement and evaluate the accuracy of unsupervised fully automated inline analysis of global ventricular function and myocardial mass (MM). To compare automated with manual segmentation in patients with cardiac disorders. In 50 patients, cine imaging of the left ventricle was performed with an accelerated retrogated steady-state free precession sequence (GRAPPA; R = 2) on a 1.5 Tesla whole body scanner (MAGNETOM Avanto, Siemens Healthcare, Germany). A spatial resolution of 1.4 × 1.9 mm was achieved with a slice thickness of 8 mm and a temporal resolution of 42 milliseconds. Ventricular coverage was based on 9 to 12 short-axis slices extending from the annulus of the mitral valve to the apex with 2 mm gaps. Fully automated segmentation and contouring was performed instantaneously after image acquisition. In addition to automated processing, cine data sets were also manually segmented using a semi-automated postprocessing software. Results of both methods were compared with regard to end-diastolic volume (EDV), end-systolic volume (ESV), ejection fraction (EF), and MM. A subgroup analysis was performed in patients with normal (≥55%) and reduced EF (<55%) based on the results of the manual analysis. Thirty-two percent of patients had a reduced left ventricular EF of <55%. Volumetric results of the automated inline analysis for EDV (r = 0.96), ESV (r = 0.95), EF (r = 0.89), and MM (r = 0.96) showed high correlation with the results of manual segmentation (all P < 0.001). Head-to-head comparison did not show significant differences between automated and manual evaluation for EDV (153.6 ± 52.7 mL vs. 149.1 ± 48.3 mL; P = 0.05), ESV (61.6 ± 31.0 mL vs. 64.1 ± 31.7 mL; P = 0.08), and EF (58.0 ± 11.6% vs. 58.6 ± 11.6%; P = 0.5). However, differences were significant for MM (150.0 ± 61.3 g vs. 142.4 ± 59.0 g; P < 0.01). The standard error was 15.6 (EDV), 9.7 (ESV), 5.0 (EF), and 17.1 (mass). The mean time for manual analysis was 15 minutes.

  18. An image processing framework for automated analysis of swimming behavior in tadpoles with vestibular alterations

    Science.gov (United States)

    Zarei, Kasra; Fritzsch, Bernd; Buchholz, James H. J.

    2017-03-01

    Microgravity, as experienced during prolonged space flight, presents a problem for space exploration. Animal models, specifically tadpoles with altered connections of the vestibular ear, allow the examination of the effects of microgravity and can be quantitatively monitored through tadpole swimming behavior. We describe an image analysis framework for performing automated quantification of tadpole swimming behavior. Speckle-reducing anisotropic diffusion is used to smooth tadpole image signals by diffusing noise while retaining edges. A narrow-band level set approach is used for sharp tracking of the tadpole body. The level set formulation for interface tracking has the inherent advantage of pairing naturally with level-set-based image segmentation (active contouring). Active contour segmentation is followed by two-dimensional skeletonization, which allows the automated quantification of tadpole deflection angles, and subsequently tadpole escape (or C-start) response times. Evaluation of the image analysis methodology was performed by comparing the automated quantifications of deflection angles to manual assessments (obtained using a standard grading scheme), and produced a high correlation (r² = 0.99), indicating high reliability and accuracy of the proposed method. The methods presented form an important element of objective quantification of the escape response of the tadpole vestibular system to mechanical and biochemical manipulations, and can ultimately contribute to a better understanding of the effects of altered gravity perception on humans.

  19. The impact of air pollution on the level of micronuclei measured by automated image analysis

    Czech Academy of Sciences Publication Activity Database

    Rössnerová, Andrea; Špátová, Milada; Rossner, P.; Solanský, I.; Šrám, Radim

    2009-01-01

    Roč. 669, 1-2 (2009), s. 42-47 ISSN 0027-5107 R&D Projects: GA AV ČR 1QS500390506; GA MŠk 2B06088; GA MŠk 2B08005 Institutional research plan: CEZ:AV0Z50390512 Keywords : micronuclei * binucleated cells * automated image analysis Subject RIV: DN - Health Impact of the Environment Quality Impact factor: 3.556, year: 2009

  20. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis.

    Directory of Open Access Journals (Sweden)

    Jean-Claude Gilhodes

    Full Text Available Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis, we developed automated software for histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Measurement of fibrosis has been expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that the tissue density indexes gave access to a very accurate and reliable quantification of the morphological changes induced by BLM, even for the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing the visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between automated analysis and the above standard evaluation methods.

  1. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis.

    Science.gov (United States)

    Gilhodes, Jean-Claude; Julé, Yvon; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz

    2017-01-01

    Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis, we developed automated software for histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Measurement of fibrosis has been expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that the tissue density indexes gave access to a very accurate and reliable quantification of the morphological changes induced by BLM, even for the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing the visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between automated analysis and the above standard evaluation methods, validating the automated analysis as an end-point measure of fibrosis in mice, which will be very valuable for future preclinical drug explorations.
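
The two tissue-density indexes described in this record lend themselves to a compact sketch: split the section into micro-tiles, score each tile by its fraction of tissue pixels, then report the mean density and the frequency of high-density tiles. Tile size and both thresholds below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def tissue_density_indexes(section, tile=16, tissue_thresh=100, high_thresh=0.5):
    """Return (mean pulmonary tissue density, high-density tile frequency)
    for a grayscale section image, computed over non-overlapping tiles."""
    h, w = section.shape
    densities = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = section[y:y + tile, x:x + tile]
            densities.append(float((patch > tissue_thresh).mean()))
    densities = np.array(densities)
    return float(densities.mean()), float((densities > high_thresh).mean())

# Toy section: upper half dense "fibrotic" tissue, lower half empty.
section = np.zeros((32, 32))
section[:16, :] = 255
mean_density, high_freq = tissue_density_indexes(section)
```

With four 16×16 tiles, two fully dense and two empty, both indexes come out to 0.5.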

  2. Scoring of radiation-induced micronuclei in cytokinesis-blocked human lymphocytes by automated image analysis

    International Nuclear Information System (INIS)

    Verhaegen, F.; Seuntjens, J.; Thierens, H.

    1994-01-01

    The micronucleus assay in human lymphocytes is, at present, frequently used to assess chromosomal damage caused by ionizing radiation or mutagens. Manual scoring of micronuclei (MN) by trained personnel is very time-consuming, tiring work, and the results depend on subjective interpretation of scoring criteria. More objective scoring can be accomplished only if the test can be automated. Furthermore, an automated system allows scoring of large numbers of cells, thereby increasing the statistical significance of the results. This is of special importance for screening programs for low doses of chromosome-damaging agents. In this paper, the first results of our effort to automate the micronucleus assay with an image-analysis system are presented. The method we used is described in detail, and the results are compared to those of other groups. Our system is able to detect 88% of the binucleated lymphocytes on the slides. The procedure consists of a fully automated localization of binucleated cells and counting of the MN within these cells, followed by a simple and fast manual operation in which the false positives are removed. Preliminary measurements for blood samples irradiated with a dose of 1 Gy X-rays indicate that the automated system can find 89% ± 12% of the micronuclei within the binucleated cells compared to manual screening. 18 refs., 8 figs., 1 tab

  3. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results that demonstrate the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
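The screen-shot comparison step (hash the rendered page image, then compare hashes by Hamming distance) can be sketched with a simple average hash. The 8×8 hash size is a common perceptual-hashing convention, not necessarily what the authors used, and a real pipeline would hash actual screenshots rather than synthetic arrays.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Average hash of a 2-D grayscale array: block-average down to
    hash_size x hash_size, then threshold at the mean."""
    h, w = img.shape
    # crop so the image divides evenly into blocks (sketch-level shortcut;
    # real implementations resample instead)
    img = img[: h - h % hash_size, : w - w % hash_size]
    bh = img.shape[0] // hash_size
    bw = img.shape[1] // hash_size
    small = img.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a, b):
    """Number of differing bits between two boolean hash vectors."""
    return int(np.count_nonzero(a != b))
```

Two renderings of the same page yield distance 0, while visually different pages yield a large distance; a small threshold on this distance flags near-duplicate spoofs.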

  4. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    Science.gov (United States)

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and is a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM), which determines rosetting rate, rosette size distribution as well as parasitaemia through a convenient graphical user interface, is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering results identical to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner by employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate and size, as well as location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria-specific analysis mode of ARAM offers the functionality to detect arbitrary objects.
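Confidence intervals for rate-type observables such as parasitaemia or rosetting rate (a count k of positive cells out of n examined) can be illustrated with the standard Wilson score interval. This is a generic statistical sketch, not ARAM's actual implementation.

```python
from statistics import NormalDist

def wilson_interval(k, n, conf=0.95):
    """Wilson score interval for a proportion, e.g. parasitaemia = k/n cells.

    Returns (lower, upper) bounds at the given confidence level.
    """
    z = NormalDist().inv_cdf(0.5 + conf / 2)   # two-sided critical value
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
    return center - half, center + half
```

For example, 50 parasitized cells among 1000 counted gives a point estimate of 5% with a 95% interval of roughly 3.8-6.5%, which shrinks as more cells are analysed, reflecting why automated high-throughput counting tightens the reported confidence bounds.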

  5. Long-term live cell imaging and automated 4D analysis of Drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long-term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts where clonal analysis has indicated the presence of a transit-amplifying population that potentiates the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  6. A semi-automated image analysis procedure for in situ plankton imaging systems.

    Directory of Open Access Journals (Sweden)

    Hongsheng Bi

    Full Text Available Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images taken under laboratory-controlled conditions or in clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory-controlled conditions or in clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects with high accuracy (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step, all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the remaining non-target objects.
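The adaptive-threshold segmentation used for small organisms can be sketched as a local-mean threshold: a pixel is foreground when it is sufficiently brighter than its neighbourhood, which copes with the nonlinear illumination mentioned above. The block size and offset below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def local_mean(img, block):
    """Mean over a block x block window, reflect-padded at the borders,
    computed with an integral image for speed."""
    pad = block // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    h, w = img.shape
    return (ii[block:block + h, block:block + w]
            - ii[:h, block:block + w]
            - ii[block:block + h, :w]
            + ii[:h, :w]) / block ** 2

def adaptive_threshold(img, block=15, c=10.0):
    """A pixel is foreground when it exceeds its local mean by more than c."""
    return img.astype(float) > local_mean(img, block) + c
```

Unlike a single global threshold, this keeps a dim copepod segmentable in a bright region and suppresses uniformly bright turbid background.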

  7. Automated structural imaging analysis detects premanifest Huntington's disease neurodegeneration within 1 year.

    Science.gov (United States)

    Majid, D S Adnan; Stoffers, Diederick; Sheldon, Sarah; Hamza, Samar; Thompson, Wesley K; Goldstein, Jody; Corey-Bloom, Jody; Aron, Adam R

    2011-07-01

    Intense efforts are underway to evaluate neuroimaging measures as biomarkers for neurodegeneration in premanifest Huntington's disease (preHD). We used a completely automated longitudinal analysis method to compare structural scans in preHD individuals and controls. Using a 1-year longitudinal design, we analyzed T1-weighted structural scans in 35 preHD individuals and 22 age-matched controls. We used the SIENA (Structural Image Evaluation, using Normalization, of Atrophy) software tool to yield overall percentage brain volume change (PBVC) and voxel-level changes in atrophy. We calculated sample sizes for a hypothetical disease-modifying (neuroprotection) study. We found significantly greater yearly atrophy in preHD individuals versus controls (mean PBVC controls, -0.149%; preHD, -0.388%; P = .031, Cohen's d = .617). For a preHD subgroup closest to disease onset, yearly atrophy was more than 3 times that of controls (mean PBVC close-to-onset preHD, -0.510%; P = .019, Cohen's d = .920). This atrophy was evident at the voxel level in periventricular regions, consistent with well-established preHD basal ganglia atrophy. We estimated that a neuroprotection study using SIENA would only need 74 close-to-onset individuals in each arm (treatment vs placebo) to detect a 50% slowing in yearly atrophy with 80% power. Automated whole-brain analysis of structural MRI can reliably detect preHD disease progression in 1 year. These results were attained with a readily available imaging analysis tool, SIENA, which is observer independent, automated, and robust with respect to image quality, slice thickness, and different pulse sequences. This MRI biomarker approach could be used to evaluate neuroprotection in preHD. Copyright © 2011 Movement Disorder Society.
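The sample-size estimate quoted above (74 per arm for 80% power) follows the standard two-arm comparison of means. A normal-approximation sketch is below; the effect size delta (here, a 50% slowing of yearly atrophy) and the standard deviation are placeholders, since the paper's own variance estimates are not reproduced in this record.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sample comparison of means
    (normal approximation): n = 2 * ((z_{1-a/2} + z_{power}) * sd / delta)^2."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return ceil(n)

# Hypothetical use: detect a 50% slowing of the close-to-onset atrophy rate.
# delta = 0.5 * 0.510 percentage points; sd is an assumed value, NOT from the paper.
example_n = n_per_arm(delta=0.5 * 0.510, sd=0.45)
```

For the textbook case of a standardized effect size of 0.5 (delta = 0.5, sd = 1), the formula gives 63 subjects per arm, matching the usual rule-of-thumb tables.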

  8. Autoscope: automated otoscopy image analysis to diagnose ear pathology and use of clinically motivated eardrum features

    Science.gov (United States)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yu, Lianbo; Gurcan, Metin

    2017-03-01

    In this study, we propose an automated otoscopy image analysis system called Autoscope. To the best of our knowledge, Autoscope is the first system designed to detect a wide range of eardrum abnormalities by using high-resolution otoscope images and report the condition of the eardrum as "normal" or "abnormal." In order to achieve this goal, first, we developed a preprocessing step to reduce camera-specific problems, detect the region of interest in the image, and prepare the image for further analysis. Subsequently, we designed a new set of clinically motivated eardrum features (CMEF). Furthermore, we evaluated the potential of the visual MPEG-7 descriptors for the task of tympanic membrane image classification. Then, we fused the information extracted from the CMEF and state-of-the-art computer vision features (CVF), which included MPEG-7 descriptors and two additional features, using a state-of-the-art classifier. In our experiments, 247 tympanic membrane images with 14 different types of abnormality were used, and Autoscope was able to classify the given tympanic membrane images as normal or abnormal with 84.6% accuracy.

  9. Automated image analysis system for homogeneity evaluation of nuclear fuel plates

    International Nuclear Information System (INIS)

    Hassan, A.H.H.

    2005-01-01

    The main aim of this work is to design an automated image analysis system for the inspection of fuel plates manufactured for the operation of ETRR-2 of Egypt. The proposed system aims to evaluate the homogeneity of the core of the fuel plate and to detect white spots outside the fuel core. A vision system has been introduced to capture images of the plates to be characterized, and software has been developed to analyze the captured images based on the gray level co-occurrence matrix (GLCM). The images are digitized using a digital camera. It is common practice to adopt a preprocessing step with the specific purpose of reducing or eliminating noise. Two preprocessing steps are carried out: application of a median-type low-pass filter and contrast improvement by stretching the image's histogram. The analysis of texture features of the co-occurrence matrix (COM) is a good tool for identifying fuel plate images based on different structures of the COM, considering neighbouring distance and direction.
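The GLCM texture analysis mentioned above can be sketched in a few lines: build a co-occurrence matrix for one neighbouring offset, then derive classic Haralick-style descriptors. This is a generic sketch assuming a pre-quantized image of integer grey levels, not the system's actual code.

```python
import numpy as np

def glcm(q, dx=1, dy=0, levels=8):
    """Normalized grey-level co-occurrence matrix for one (dx, dy) offset.

    q: 2-D int array with values in [0, levels).
    """
    h, w = q.shape
    m = np.zeros((levels, levels))
    # paired source/destination pixels separated by the chosen offset
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (src.ravel(), dst.ravel()), 1)
    return m / m.sum()

def haralick_energy_contrast(m):
    """Two common GLCM descriptors: energy (uniformity) and contrast."""
    i = np.arange(m.shape[0])
    energy = float((m ** 2).sum())
    contrast = float((((i[:, None] - i[None, :]) ** 2) * m).sum())
    return energy, contrast
```

A perfectly homogeneous region gives energy 1 and contrast 0; inhomogeneities in the fuel core lower the energy and raise the contrast, which is the intuition behind GLCM-based homogeneity screening.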

  10. CEST ANALYSIS: AUTOMATED CHANGE DETECTION FROM VERY-HIGH-RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    M. Ehlers

    2012-08-01

    Full Text Available A fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellite and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye), new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential for being a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location.
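The frequency-domain step (FFT, then a band-pass filter) can be sketched as follows. The radial cut-offs are illustrative values, not the filters tuned in CEST.

```python
import numpy as np

def bandpass_fft(img, low, high):
    """Keep only spatial frequencies whose radius (in cycles per image,
    measured on the centered spectrum) lies in [low, high)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h // 2, x - w // 2)   # radial frequency of each bin
    F[(r < low) | (r >= high)] = 0         # zero everything outside the band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

Setting low >= 1 removes the DC component, so a featureless (constant) image filters to zero, while structures at the retained scales pass through; differencing two band-passed acquisition dates then highlights changes at those scales.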

  11. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya

    DEFF Research Database (Denmark)

    Juul Bøgelund Hansen, Morten; Abramoff, M. D.; Folk, J. C.

    2015-01-01

    Objective: Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blind or visually impaired is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased workload on those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the Iowa Detection Program's (IDP) ability to detect diabetic eye diseases (DED) to human grading carried out at Moorfields… The IDP gave an AUC of 0.878 (95% CI 0.850-0.905). It showed a negative predictive value of 98%. The IDP missed no vision-threatening retinopathy in any patients, and none of the false negative cases met criteria for treatment. Conclusions: In this epidemiological sample, the IDP's grading was comparable…

  12. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    …species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume, and the frequency of dividing cells in a cell population. These parameters were used to compare the physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found…

  13. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    Amount of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in dynamic contrast enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, a dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantage of the continuity of the chest wall and breast skin line across adjacent slices. We then used fuzzy c-means clustering with automatic selection of the cluster number for segmenting the fibroglandular tissues within the segmented whole breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
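The fuzzy c-means step can be sketched on 1-D intensity data as below. This is a generic FCM implementation for illustration; the authors' scheme additionally selects the cluster number automatically and operates on MR voxel intensities.

```python
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D data x; returns cluster centers and the
    c x n membership matrix (columns sum to 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # random fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # avoid div by 0
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=0)
    return centers, u
```

On intensities drawn from two well-separated tissue classes, the centers converge near the class means, and thresholding the memberships yields the segmentation.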

  14. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    Directory of Open Access Journals (Sweden)

    Marcin Andrzej KUREK

    2015-01-01

    Full Text Available Abstract The growing interest in the usage of dietary fiber in food has created the need for precise tools to describe its physical properties. This research examined two dietary fibers, from oats and beets respectively, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were measured at two points: dry and after water soaking. The highest water holding capacity (7.00 g water/g solid) was achieved by the smaller-sized oat fiber. Conversely, water holding capacity was highest (4.20 g water/g solid) in the larger-sized beet fiber. There was evidence for water absorption increasing with a decrease in particle size within the same fiber source. Very strong correlations were drawn between particle shape parameters, such as fiber length, straightness and width, and the hydration properties measured conventionally. The regression analysis provided the opportunity to estimate whether the automated static image analysis method could be an efficient tool in describing the hydration properties of dietary fiber. The application of the method was validated using a mathematical model that was verified against conventional WHC measurement results.

  15. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    Science.gov (United States)

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time-consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the method with respect to morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and thus leading to more reliable results.

  16. Automated spine and vertebrae detection in CT images using object-based image analysis.

    Science.gov (United States)

    Schwier, M; Chitiboi, T; Hülnhagen, T; Hahn, H K

    2013-09-01

    Although computer assistance has become common in medical practice, some of the most challenging tasks that remain unsolved are in the area of automatic detection and recognition. Human visual perception is, in general, far superior to computer vision algorithms. Object-based image analysis is a relatively new approach that aims to lift image analysis from pixel-based processing to semantic region-based processing of images. It allows effective integration of reasoning processes and contextual concepts into the recognition method. In this paper, we present an approach that applies object-based image analysis to the task of detecting the spine in computed tomography images. Spine detection would be of great benefit in several contexts, from the automatic labeling of vertebrae to the assessment of spinal pathologies. We show with our approach how region-based features, contextual information and domain knowledge, especially concerning the typical shape and structure of the spine and its components, can be used effectively in the analysis process. The results of our approach are promising, with a detection rate for vertebral bodies of 96% and a precision of 99%. We also obtain a good two-dimensional segmentation of the spine along the more central slices and a coarse three-dimensional segmentation. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  18. Automated discrimination of lower and higher grade gliomas based on histopathological image analysis

    Directory of Open Access Journals (Sweden)

    Hojjat Seyed Mousavi

    2015-01-01

    Full Text Available Introduction: Histopathological images have rich structural information, are multi-channel in nature and contain meaningful pathological information at various scales. Sophisticated image analysis tools that can automatically extract discriminative information from the histopathology image slides for diagnosis remain an area of significant research activity. In this work, we focus on automated brain cancer grading, specifically glioma grading. Grading of a glioma is a highly important problem in pathology and is largely done manually by medical experts based on an examination of pathology slides (images). To complement the efforts of clinicians engaged in brain cancer diagnosis, we develop novel image processing algorithms and systems to automatically grade glioma tumors into two categories: low-grade glioma (LGG) and high-grade glioma (HGG), which represents a more advanced stage of the disease. Results: We propose novel image processing algorithms based on spatial domain analysis for glioma tumor grading that will complement the clinical interpretation of the tissue. The image processing techniques are developed in close collaboration with medical experts to mimic the visual cues that a clinician looks for in judging the grade of the disease. Specifically, two algorithmic techniques are developed: (1) a cell segmentation and cell-count profile creation for the identification of Pseudopalisading Necrosis, and (2) a customized operation of spatial and morphological filters to accurately identify microvascular proliferation (MVP). In both techniques, a hierarchical decision is made via a decision tree mechanism. If either Pseudopalisading Necrosis or MVP is found present in any part of the histopathology slide, the whole slide is identified as HGG, which is consistent with World Health Organization guidelines. Experimental results on the Cancer Genome Atlas database are presented in the form of: (1) successful detection rates of pseudopalisading necrosis

  19. Automated analysis of heterogeneous carbon nanostructures by high-resolution electron microscopy and on-line image processing

    Energy Technology Data Exchange (ETDEWEB)

    Toth, P., E-mail: toth.pal@uni-miskolc.hu [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States); Farrer, J.K. [Department of Physics and Astronomy, Brigham Young University, N283 ESC, Provo, UT 84602 (United States); Palotas, A.B. [Department of Combustion Technology and Thermal Energy, University of Miskolc, H3515, Miskolc-Egyetemvaros (Hungary); Lighty, J.S.; Eddings, E.G. [Department of Chemical Engineering, University of Utah, 50 S. Central Campus Drive, Salt Lake City, UT 84112-9203 (United States)

    2013-06-15

    High-resolution electron microscopy is an efficient tool for characterizing heterogeneous nanostructures; however, currently the analysis is a laborious and time-consuming manual process. In order to be able to accurately and robustly quantify heterostructures, one must obtain a statistically high number of micrographs showing images of the appropriate sub-structures. The second step of analysis is usually the application of digital image processing techniques in order to extract meaningful structural descriptors from the acquired images. In this paper it will be shown that by applying on-line image processing and basic machine vision algorithms, it is possible to fully automate the image acquisition step; therefore, the number of acquired images in a given time can be increased drastically without the need for additional human labor. The proposed automation technique works by computing fields of structural descriptors in situ and thus outputs sets of the desired structural descriptors in real-time. The merits of the method are demonstrated by using combustion-generated black carbon samples. - Highlights: ► The HRTEM analysis of heterogeneous nanostructures is a tedious manual process. ► Automatic HRTEM image acquisition and analysis can improve data quantity and quality. ► We propose a method based on on-line image analysis for the automation of HRTEM image acquisition. ► The proposed method is demonstrated using HRTEM images of soot particles.

  20. Ranking quantitative resistance to Septoria tritici blotch in elite wheat cultivars using automated image analysis.

    Science.gov (United States)

    Karisto, Petteri; Hund, Andreas; Yu, Kang; Anderegg, Jonas; Walter, Achim; Mascher, Fabio; McDonald, Bruce A; Mikaberidze, Alexey

    2017-12-06

    Quantitative resistance is likely to be more durable than major gene resistance for controlling Septoria tritici blotch (STB) on wheat. Earlier studies hypothesized that resistance affecting the degree of host damage, as measured by the percentage of leaf area covered by STB lesions, is distinct from resistance that affects pathogen reproduction, as measured by the density of pycnidia produced within lesions. We tested this hypothesis using a collection of 335 elite European winter wheat cultivars that was naturally infected by a diverse population of Zymoseptoria tritici in a replicated field experiment. We used automated image analysis (AIA) of 21420 scanned wheat leaves to obtain quantitative measures of conditional STB intesity that were precise, objective, and reproducible. These measures allowed us to explicitly separate resistance affecting host damage from resistance affecting pathogen reproduction, enabling us to confirm that these resistance traits are largely independent. The cultivar rankings based on host damage were different from the rankings based on pathogen reproduction, indicating that the two forms of resistance should be considered separately in breeding programs aiming to increase STB resistance. We hypothesize that these different forms of resistance are under separate genetic control, enabling them to be recombined to form new cultivars that are highly resistant to STB. We found a significant correlation between rankings based on automated image analysis and rankings based on traditional visual scoring, suggesting that image analysis can complement conventional measurements of STB resistance, based largely on host damage, while enabling a much more precise measure of pathogen reproduction. 
We showed that measures of pathogen reproduction early in the growing season were the best predictors of host damage late in the growing season, illustrating the importance of breeding for resistance that reduces pathogen reproduction in order to minimize
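
    The two conditional measures described above can be computed from per-leaf segmentation outputs. A minimal sketch in Python/NumPy, assuming binary leaf and lesion masks plus a pycnidia count are already available (the function name and mask-based formulation are illustrative, not the authors' pipeline):

```python
import numpy as np

def stb_intensity(leaf_mask, lesion_mask, n_pycnidia):
    """Conditional STB intensity from binary masks (illustrative, not
    the authors' pipeline). leaf_mask/lesion_mask: boolean arrays of
    the scanned leaf; n_pycnidia: pycnidia counted within lesions."""
    leaf_area = leaf_mask.sum()
    lesion_area = (lesion_mask & leaf_mask).sum()
    # Host damage: percent leaf area covered by STB lesions.
    placl = 100.0 * lesion_area / leaf_area
    # Pathogen reproduction: pycnidia density within lesions.
    rho = n_pycnidia / lesion_area if lesion_area else 0.0
    return float(placl), float(rho)

leaf = np.ones((10, 10), dtype=bool)
lesion = np.zeros((10, 10), dtype=bool)
lesion[:2, :] = True                      # 20 of 100 leaf pixels lesioned
print(stb_intensity(leaf, lesion, 10))    # → (20.0, 0.5)
```

    Because the two numbers are computed independently, cultivars can be ranked separately on host damage and on pathogen reproduction, as the study does.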

  1. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the "Autojuggie" showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  2. Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya.

    Science.gov (United States)

    Hansen, Morten B; Abràmoff, Michael D; Folk, James C; Mathenge, Wanjiku; Bastawrous, Andrew; Peto, Tunde

    2015-01-01

    Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blindness or visual impairment is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased workload for those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the ability of the Iowa Detection Program (IDP) to detect diabetic eye diseases (DED) with human grading carried out at Moorfields Reading Centre on the population of the Nakuru Study from Kenya. Retinal images were taken from participants of the Nakuru Eye Disease Study in Kenya in 2007/08 (n = 4,381 participants [NW6 Topcon Digital Retinal Camera]). First, human grading was performed for the presence or absence of DR, and for those with DR this was subdivided into referable or non-referable DR. The automated IDP software was deployed to identify those with DR and to categorize the severity of DR. The primary outcomes were sensitivity, specificity, and positive and negative predictive value of the IDP versus the human grader as reference standard. Altogether 3,460 participants were included. 113 had DED, giving a prevalence of 3.3% (95% CI, 2.7-3.9%). Sensitivity of the IDP to detect DED as identified by human grading was 91.0% (95% CI, 88.0-93.4%). The IDP's ability to detect DED gave an AUC of 0.878 (95% CI, 0.850-0.905). It showed a negative predictive value of 98%. The IDP missed no vision-threatening retinopathy in any patient, and none of the false-negative cases met criteria for treatment. In this epidemiological sample, the IDP's grading was comparable to that of the human graders. It therefore might be feasible to consider its inclusion into usual epidemiological grading.
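
    The reported outcome measures follow directly from a 2×2 table of the IDP's output against the human reference grading. A sketch, using illustrative counts rather than the study's actual table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Screening-test metrics against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only, not the study's data: 103 of 113 DED cases
# flagged, and 500 false positives among the 3,347 without DED.
m = diagnostic_metrics(tp=103, fp=500, fn=10, tn=2847)
print(round(m["sensitivity"], 3), round(m["npv"], 3))  # → 0.912 0.996
```

    The trade-off visible here (high sensitivity and negative predictive value, lower specificity) is the typical operating point chosen for screening software.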

  3. Automated Analysis of 123I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    International Nuclear Information System (INIS)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo

    2014-01-01

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-carbomethoxy-3β-(4-123I-iodophenyl)tropane (123I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalization was successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully and automatically evaluated the regional 123I-beta-CIT distribution using the SPECT template and probabilistic map data. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.
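
    The ratio method for the binding potential described above reduces to a one-line computation once probability-weighted regional counts are available. A sketch; the choice of a nonspecific reference region (e.g. occipital cortex) is an assumption here, not stated in the abstract:

```python
def binding_potential(striatal_counts, reference_counts):
    """Binding potential as the ratio of specific to nonspecific
    binding at equilibrium: (S - N) / N. The reference (nonspecific)
    region is an assumption of this sketch."""
    return (striatal_counts - reference_counts) / reference_counts

print(binding_potential(3000.0, 1000.0))  # → 2.0
```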

  4. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images; incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in identification of species include image processing of specimens, extraction of distinguishing features, and classification into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems of species images. The selection of methods is influenced by many variables such as the level of classification, the number of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques in building such systems for biodiversity studies.

  5. Automated Image Analysis of Lung Branching Morphogenesis from Microscopic Images of Fetal Rat Explants

    Directory of Open Access Journals (Sweden)

    Pedro L. Rodrigues

    2014-01-01

    Background. Regulating mechanisms of branching morphogenesis of fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelial, outer contour, and peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive and multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the branched ends of the skeleton from a skeletonized image of the inner lung epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% in contrast to the manual method. Best results were obtained in the first two days of cellular development, with smaller standard deviations. Nonsignificant differences were found between the automatic and manual results on all culture days. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing a reliable comparison between different researchers.
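
    The entropy-maximization criterion used for the outer contour can be illustrated with a Kapur-style maximum-entropy threshold. This is one common reading of such a criterion, not the paper's exact algorithm:

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur-style maximum-entropy threshold: pick the gray level that
    maximizes the summed entropies of the background and foreground
    histograms. A sketch of an 'entropy maximization criterion'."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()

    def entropy(q, w):
        q = q[q > 0] / w                  # renormalize the partition
        return -(q * np.log(q)).sum()

    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        h = entropy(p[:t], w0) + entropy(p[t:], w1)
        if h > best_h:
            best_t, best_h = t, h
    return edges[best_t]

# Two well-separated intensity modes; the threshold lands between them.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 5, 5000), rng.normal(200, 5, 5000)])
t = max_entropy_threshold(img)
print(50 < t < 200)  # → True
```

    An adaptive, multiscale variant as in the paper would apply such a criterion locally rather than to the global histogram.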

  6. Automated analysis of retinal imaging using machine learning techniques for computer vision.

    Science.gov (United States)

    De Fauw, Jeffrey; Keane, Pearse; Tomasev, Nenad; Visentin, Daniel; van den Driessche, George; Johnson, Mike; Hughes, Cian O; Chu, Carlton; Ledsam, Joseph; Back, Trevor; Peto, Tunde; Rees, Geraint; Montgomery, Hugh; Raine, Rosalind; Ronneberger, Olaf; Cornebise, Julien

    2016-01-01

    There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  7. Automated ship image acquisition

    Science.gov (United States)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. 45% of the images examined were considered of sufficient quality to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  8. Automated local bright feature image analysis of nuclear protein distribution identifies changes in tissue phenotype

    Energy Technology Data Exchange (ETDEWEB)

    Knowles, David; Sudar, Damir; Bator, Carol; Bissell, Mina

    2006-02-01

    The organization of nuclear proteins is linked to cell and tissue phenotypes. When cells arrest proliferation, undergo apoptosis, or differentiate, the distribution of nuclear proteins changes. Conversely, forced alteration of the distribution of nuclear proteins modifies cell phenotype. Immunostaining and fluorescence microscopy have been critical for such findings. However, there is an increasing need for quantitative analysis of nuclear protein distribution to decipher epigenetic relationships between nuclear structure and cell phenotype, and to unravel the mechanisms linking nuclear structure and function. We have developed imaging methods to quantify the distribution of fluorescently-stained nuclear protein NuMA in different mammary phenotypes obtained using three-dimensional cell culture. Automated image segmentation of DAPI-stained nuclei was generated to isolate thousands of nuclei from three-dimensional confocal images. Prominent features of fluorescently-stained NuMA were detected using a novel local bright feature analysis technique, and their normalized spatial density calculated as a function of the distance from the nuclear perimeter to its center. The results revealed marked changes in the distribution of the density of NuMA bright features as non-neoplastic cells underwent phenotypically normal acinar morphogenesis. In contrast, we did not detect any reorganization of NuMA during the formation of tumor nodules by malignant cells. Importantly, the analysis also discriminated proliferating non-neoplastic cells from proliferating malignant cells, suggesting that these imaging methods are capable of identifying alterations linked not only to the proliferation status but also to the malignant character of cells. We believe that this quantitative analysis will have additional applications for classifying normal and pathological tissues.
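
    The normalized spatial density of bright features as a function of distance from the nuclear perimeter to the center can be sketched as follows. A circular nucleus is assumed here for simplicity, whereas the paper works on segmented 3D nuclei of arbitrary shape:

```python
import numpy as np

def radial_feature_density(feature_xy, center, radius, n_shells=5):
    """Density of bright features vs. normalized distance from the
    nuclear perimeter (0) to the center (1). Circular-nucleus sketch;
    all names are illustrative, not the paper's implementation."""
    d = np.linalg.norm(np.asarray(feature_xy) - center, axis=1)
    rel = 1.0 - np.clip(d / radius, 0, 1)      # 0 at perimeter, 1 at center
    counts, _ = np.histogram(rel, bins=n_shells, range=(0, 1))
    # Normalize by annulus area so densities are comparable across shells.
    edges = np.linspace(0, 1, n_shells + 1)
    outer, inner = 1 - edges[:-1], 1 - edges[1:]   # distance fractions
    areas = np.pi * radius**2 * (outer**2 - inner**2)
    return counts / areas

pts = [(0.0, 0.0), (0.1, 0.0), (9.0, 0.0)]     # two central, one peripheral
dens = radial_feature_density(pts, center=(0.0, 0.0), radius=10.0)
print(dens.argmax())  # densest shell is the innermost (most central) one
```

    Comparing such density profiles across conditions is what lets the analysis distinguish, for example, proliferating non-neoplastic cells from proliferating malignant cells.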

  9. A methodology for automated CPA extraction using liver biopsy image analysis and machine learning techniques.

    Science.gov (United States)

    Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas

    2017-03-01

    Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis techniques has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods for computerized image analysis for CPA assessment has proven to be a major limitation. The current work introduces a three-stage fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stage of the methodology, respectively. Due to the existence of several types of tissue regions in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images have been employed, obtaining 1.31% mean absolute CPA error, with 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid manual threshold-based and region selection processes, widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. A Novel Automated High-Content Analysis Workflow Capturing Cell Population Dynamics from Induced Pluripotent Stem Cell Live Imaging Data.

    Science.gov (United States)

    Kerz, Maximilian; Folarin, Amos; Meleckyte, Ruta; Watt, Fiona M; Dobson, Richard J; Danovi, Davide

    2016-10-01

    Most image analysis pipelines rely on multiple channels per image with subcellular reference points for cell segmentation. Single-channel phase-contrast images are often problematic, especially for cells with unfavorable morphology, such as induced pluripotent stem cells (iPSCs). Live imaging poses a further challenge, because of the introduction of the dimension of time. Evaluations cannot be easily integrated with other biological data sets including analysis of endpoint images. Here, we present a workflow that incorporates a novel CellProfiler-based image analysis pipeline enabling segmentation of single-channel images with a robust R-based software solution to reduce the dimension of time to a single data point. These two packages combined allow robust segmentation of iPSCs solely on phase-contrast single-channel images and enable live imaging data to be easily integrated to endpoint data sets while retaining the dynamics of cellular responses. The described workflow facilitates characterization of the response of live-imaged iPSCs to external stimuli and definition of cell line-specific, phenotypic signatures. We present an efficient tool set for automated high-content analysis suitable for cells with challenging morphology. This approach has potentially widespread applications for human pluripotent stem cells and other cell types. © 2016 Society for Laboratory Automation and Screening.

  11. Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis

    Science.gov (United States)

    Chung, Howard; Cobzas, Dana; Birdsell, Laura; Lieffers, Jessica; Baracos, Vickie

    2009-02-01

    The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease associates with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) (1 - 2%), were similar to or smaller than inter- and intra-observer COVs reported for manual segmentation.
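
    The PCA component of the muscle-shape representation can be sketched as a standard point-distribution model; the free-form deformation grid itself and the tissue appearance prior are omitted here:

```python
import numpy as np

def pca_shape_model(shapes, n_modes=2):
    """Point-distribution shape model: mean shape plus principal modes
    of variation (only the PCA part of a PCA-encoded deformation
    model; the FFD grid is omitted)."""
    X = np.asarray(shapes, dtype=float).reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def reconstruct(mean, modes, coeffs):
    """New shape = mean + linear combination of the learned modes."""
    return mean + np.asarray(coeffs) @ modes

# Toy data: four noisy variants of a 3-point contour.
rng = np.random.default_rng(1)
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
shapes = [base + rng.normal(0, 0.05, base.shape) for _ in range(4)]
mean, modes = pca_shape_model(shapes, n_modes=1)
recon = reconstruct(mean, modes, [0.0])
print(abs(recon - mean).max() < 1e-12)  # zero coefficients → mean shape
```

    Restricting deformations to a few principal modes is what keeps the segmentation close to anatomically plausible muscle shapes learned from the manually segmented training images.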

  12. Automated analysis for early signs of cerebral infarctions on brain X-ray CT images

    International Nuclear Information System (INIS)

    Oshima, Kazuki; Hara, Takeshi; Zhou, X.; Muramatsu, Chisako; Fujita, Hiroshi; Sakashita, Keiji

    2010-01-01

    t-PA (tissue plasminogen activator) thrombolysis, which acts by breaking down blood clots, is an effective clinical treatment for acute cerebral infarction. However, there is a risk of hemorrhage with its use. The treatment guideline requires excluding cerebral hemorrhage and widespread early CT signs (ECS) on CT images. In this study, we analyzed the CT values of normal brains and of ECS by comparing patient brain CT scans with a statistical normal model. Our method constructed normal brain models from 60 normal brain X-ray CT images. We calculated Z-scores based on the statistical model for 16 cases of cerebral infarction with ECS, 3 cases of cerebral infarction without ECS, and 25 cases of normal brain. The statistical analysis showed a statistically significant difference between the control and abnormal groups. This result implies that an automated detection scheme for ECS using Z-scores would be a possible application for brain computer-aided diagnosis (CAD). (author)
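
    The voxelwise Z-score against a statistical normal model is a simple computation once scans share a common space. A sketch, assuming spatial normalization to the model's template has already been done:

```python
import numpy as np

def zscore_map(patient, normal_scans):
    """Voxelwise Z-score of a patient scan against a database of
    normal scans (spatial normalization to a common template is
    assumed to have been done already)."""
    stack = np.stack(normal_scans).astype(float)
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0, ddof=1)
    return (patient - mu) / np.where(sigma > 0, sigma, np.inf)

# Toy 2x2 "scans": normals at 98/100/102 HU, patient at 94 HU.
normals = [np.full((2, 2), v) for v in (98.0, 100.0, 102.0)]
z = zscore_map(np.full((2, 2), 94.0), normals)
print(z[0, 0])  # → -3.0
```

    Voxels with strongly negative Z-scores would flag regions of abnormally low attenuation, the kind of signal an ECS detection scheme looks for.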

  13. Experimental saltwater intrusion in coastal aquifers using automated image analysis: Applications to homogeneous aquifers

    Science.gov (United States)

    Robinson, G.; Ahmed, Ashraf A.; Hamill, G. A.

    2016-07-01

    This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimising manual inputs and the subsequent systematic errors that can be introduced. This allowed the quantification of the width of the mixing zone which is difficult to measure in experimental methods that are based on visual observations. Glass beads of different grain sizes were tested for both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady state condition sooner while receding than advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits; a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle of intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, results of different physical repeats of the experiment produced an average coefficient of variation less than 0.18 of the measured toe length and width of the mixing zone.

  14. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    CERN Document Server

    Cluckie, A J

    2001-01-01

    Neurological images may be analysed by performing voxel by voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been eval...

  15. Automated analysis of retinal images for detection of referable diabetic retinopathy.

    Science.gov (United States)

    Abràmoff, Michael D; Folk, James C; Han, Dennis P; Walker, Jonathan D; Williams, David F; Russell, Stephen R; Massin, Pascale; Cochener, Beatrice; Gain, Philippe; Tang, Li; Lamard, Mathieu; Moga, Daniela C; Quellec, Gwénolé; Niemeijer, Meindert

    2013-03-01

    The diagnostic accuracy of computer detection programs has been reported to be comparable to that of specialists and expert readers, but no computer detection programs have been validated in an independent cohort using an internationally recognized diabetic retinopathy (DR) standard. To determine the sensitivity and specificity of the Iowa Detection Program (IDP) to detect referable diabetic retinopathy (RDR). In primary care DR clinics in France, from January 1, 2005, through December 31, 2010, patients were photographed consecutively, and retinal color images were graded for retinopathy severity according to the International Clinical Diabetic Retinopathy scale and macular edema by 3 masked independent retinal specialists and regraded with adjudication until consensus. The IDP analyzed the same images at a predetermined and fixed set point. We defined RDR as more than mild nonproliferative retinopathy and/or macular edema. A total of 874 people with diabetes at risk for DR. Sensitivity and specificity of the IDP to detect RDR, area under the receiver operating characteristic curve, sensitivity and specificity of the retinal specialists' readings, and mean interobserver difference (κ). The RDR prevalence was 21.7% (95% CI, 19.0%-24.5%). The IDP sensitivity was 96.8% (95% CI, 94.4%-99.3%) and specificity was 59.4% (95% CI, 55.7%-63.0%), corresponding to 6 of 874 false-negative results (none met treatment criteria). The area under the receiver operating characteristic curve was 0.937 (95% CI, 0.916-0.959). Before adjudication and consensus, the sensitivity/specificity of the retinal specialists were 0.80/0.98, 0.71/1.00, and 0.91/0.95, and the mean intergrader κ was 0.822. The IDP has high sensitivity and specificity to detect RDR. Computer analysis of retinal photographs for DR and automated detection of RDR can be implemented safely into the DR screening pipeline, potentially improving access to screening and health care productivity and reducing visual loss

  16. Development of Automated Image Analysis Tools for Verification of Radiotherapy Field Accuracy with AN Electronic Portal Imaging Device.

    Science.gov (United States)

    Dong, Lei

    1995-01-01

    The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field shaping devices with the patient must be repeated daily up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), patient's portal images can be visualized daily in real-time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were

  17. Automated quantification and sizing of unbranched filamentous cyanobacteria by model-based object-oriented image analysis.

    Science.gov (United States)

    Zeder, Michael; Van den Wyngaert, Silke; Köster, Oliver; Felder, Kathrin M; Pernthaler, Jakob

    2010-03-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-oriented image analysis to simultaneously determine (i) filament number, (ii) individual filament lengths, and (iii) the cumulative filament length of unbranched cyanobacterial morphotypes in fluorescent microscope images in a fully automated high-throughput manner. Special emphasis was placed on correct detection of overlapping objects by image analysis and on appropriate coverage of filament length distribution by using large composite images. The method was validated with a data set for Planktothrix rubescens from field samples and was compared with manual filament tracing, the line intercept method, and the Utermöhl counting approach. The computer program described allows batch processing of large images from any appropriate source and annotation of detected filaments. It requires no user interaction, is available free, and thus might be a useful tool for basic research and drinking water quality control.

  18. Automated analysis of images acquired with electronic portal imaging device during delivery of quality assurance plans for inversely optimized arc therapy

    DEFF Research Database (Denmark)

    Fredh, Anna; Korreman, Stine; Rosenschöld, Per Munck af

    2010-01-01

    This work presents an automated method for comprehensively analyzing EPID images acquired for quality assurance of RapidArc treatment delivery. In-house-developed software has been used for the analysis, and long-term results from measurements on three linacs are presented.

  19. Quantification of Eosinophilic Granule Protein Deposition in Biopsies of Inflammatory Skin Diseases by Automated Image Analysis of Highly Sensitive Immunostaining

    Directory of Open Access Journals (Sweden)

    Peter Kiehl

    1999-01-01

    Full Text Available Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs), but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantify the involvement of eosinophils in inflammation have been only based on cell counting, we developed a new method for the cell‐independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over 16 weeks observation time. The method may be valuable for the cell‐independent segmentation of immunostaining in other applications as well.
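The "linear combination and automated thresholding" step can be sketched as follows. This is an illustrative stand-in, not the authors' protocol: it assumes an 8-bit RGB image in which a weighted channel combination highlights the stain, with Otsu's method picking the threshold automatically:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's automated threshold for an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total            # background weight
        w1 = 1.0 - w0                      # foreground weight
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t - 1] / cum[t - 1]
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def stained_fraction(rgb, weights=(0.0, -1.0, 1.0)):
    """Linear channel combination followed by Otsu segmentation;
    returns the fraction of pixels classified as stained.
    The default weights are arbitrary placeholders."""
    combo = sum(w * rgb[..., i].astype(float) for i, w in enumerate(weights))
    combo = np.clip(combo - combo.min(), 0, 255).astype(np.uint8)
    t = otsu_threshold(combo)
    return (combo >= t).mean()
```

The stained area fraction (rather than a cell count) is exactly the kind of cell-independent quantity the abstract argues for.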

  20. AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.

    Directory of Open Access Journals (Sweden)

    Aleksandar Bogovac

    2010-02-01

    Full Text Available The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of a microscopic image as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits attachment of several (if not all) fields of view and their simultaneous visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. Introducing digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangements and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, i.e., the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality has to be measured and corrected, if necessary. 2. A diagnostic algorithm has to be applied. Such an algorithm has to be developed to include both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. Pathologists will not be displaced by such a system; on the contrary, they will manage and supervise it, i.e., work at a "higher level". 
Virtual slides are already in use for teaching and

  1. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images and the relative merits and limitations of the methods currently available for segmentation of medical images.

  2. Automated classification of inflammation in colon histological sections based on digital microscopy and advanced image analysis.

    Science.gov (United States)

    Ficsor, Levente; Varga, Viktor Sebestyén; Tagscherer, Attila; Tulassay, Zsolt; Molnar, Bela

    2008-03-01

    Automated and quantitative histological analysis can improve diagnostic efficacy in colon sections. Our objective was to develop a parameter set for automated classification of aspecific colitis, ulcerative colitis, and Crohn's disease using digital slides, tissue cytometric parameters, and virtual microscopy. Routinely processed hematoxylin-and-eosin-stained histological sections from specimens that showed normal mucosa (24 cases), aspecific colitis (11 cases), ulcerative colitis (25 cases), and Crohn's disease (9 cases) diagnosed by conventional optical microscopy were scanned and digitized at high resolution (0.24 μm/pixel). Thirty-eight cytometric parameters based on morphometry were determined on cells, glands, and superficial epithelium. Fourteen tissue cytometric parameters based on ratios of tissue compartments were counted as well. Leave-one-out discriminant analysis was used for classification of the sample groups. Cellular morphometric features showed no significant differences in these benign colon alterations. However, gland-related morphological differences (Gland Shape) between normal mucosa, ulcerative colitis, and aspecific colitis were found to be significant, as were several tissue cytometric parameters, most notably the ratio of cell number in glands to that in the whole slide and the biopsy/gland surface ratio. These differences resulted in 88% overall accuracy in the classification. Crohn's disease could be discriminated in only 56% of cases. Automated virtual microscopy can be used to classify colon mucosa as normal, ulcerative colitis, and aspecific colitis with reasonable accuracy. Further development of dedicated parameters is necessary to identify Crohn's disease on digital slides. Copyright 2008 International Society for Analytical Cytology.
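The leave-one-out scheme holds out each section in turn, trains on the rest, and predicts the held-out case. A minimal numpy sketch, using a nearest-class-mean rule as a simplified stand-in for the paper's discriminant analysis:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-mean classifier
    (a simplified stand-in for linear discriminant analysis)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i          # hold out sample i
        Xtr, ytr = X[keep], y[keep]
        classes = np.unique(ytr)
        means = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
        pred = classes[np.linalg.norm(means - X[i], axis=1).argmin()]
        correct += int(pred == y[i])
    return correct / len(X)
```

With well-separated feature clusters (as with the gland-shape parameters above), such a scheme yields high accuracy; overlapping classes, like Crohn's disease here, drag it down.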

  3. Analysis of irradiated U-7wt%Mo dispersion fuel microstructures using automated image processing

    Energy Technology Data Exchange (ETDEWEB)

    Collette, R. [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); King, J., E-mail: kingjc@mines.edu [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); Buesch, C. [Oregon State University, 1500 SW Jefferson St., Corvallis, OR 97331 (United States); Keiser, D.D.; Williams, W.; Miller, B.D.; Schulthess, J. [Nuclear Fuels and Materials Division, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-6188 (United States)

    2016-07-15

    The High Performance Research Reactor Fuel Development (HPPRFD) program is responsible for developing low enriched uranium (LEU) fuel substitutes for high performance reactors fueled with highly enriched uranium (HEU) that have not yet been converted to LEU. The uranium-molybdenum (U-Mo) fuel system was selected for this effort. In this study, fission gas pore segmentation was performed on U-7wt%Mo dispersion fuel samples at three separate fission densities using an automated image processing interface developed in MATLAB. Pore size distributions were attained that showed both expected and unexpected fission gas behavior. In general, it proved challenging to identify any dominant trends when comparing fission bubble data across samples from different fuel plates due to varying compositions and fabrication techniques. The results exhibited fair agreement with the fission density vs. porosity correlation developed by the Russian reactor conversion program. - Highlights: • Automated image processing is used to extract fission gas bubble data from irradiated U−Mo fuel samples. • Verification and validation tests are performed to ensure the algorithm's accuracy. • Fission bubble parameters are predictably difficult to compare across samples of varying compositions. • The 2-D results suggest the need for more homogenized fuel sampling in future studies. • The results also demonstrate the value of 3-D reconstruction techniques.
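Two of the quantities such a segmentation yields, areal porosity and a pore size distribution, reduce to simple formulas once pore pixels are identified. A brief sketch (the MATLAB interface itself is not reproduced here):

```python
import math

def areal_porosity(pore_mask):
    """Fraction of image area occupied by segmented pore pixels
    (pore_mask is a 2-D nested list of 0/1 values)."""
    flat = [px for row in pore_mask for px in row]
    return sum(flat) / len(flat)

def equivalent_diameters(pore_areas_px, um_per_px):
    """Equivalent circular diameter (in microns) for each pore,
    given its area in pixels."""
    return [2.0 * math.sqrt(a / math.pi) * um_per_px for a in pore_areas_px]
```

Binning the equivalent diameters gives the pore size distributions compared across fission densities in the study.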

  4. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. I: test of the methodology

    International Nuclear Information System (INIS)

    Berrezueta, E.; Castroviejo, R.

    2007-01-01

    Ore microscopy has traditionally been an important support to control ore processing, but the volume of present day processes is beyond the reach of human operators. Automation is therefore compulsory, but its development through digital image analysis, DIA, is limited by various problems, such as the similarity in reflectance values of some important ores, their anisotropism, and the performance of instruments and methods. The results presented show that automated identification and quantification by DIA are possible through multiband (RGB) determinations with a research 3CCD video camera on reflected light microscope. These results were obtained by systematic measurement of selected ores accounting for most of the industrial applications. Polarized light is avoided, so the effects of anisotropism can be neglected. Quality control at various stages and statistical analysis are important, as is the application of complementary criteria (e.g. metallogenetic). The sequential methodology is described and shown through practical examples. (Author)
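The multiband (RGB) identification can be pictured as matching each pixel's measured RGB reflectance against a library of reference values. The sketch below is purely illustrative; the reference triplets are made-up placeholders, not the calibrated reflectances used in the study:

```python
import numpy as np

# Hypothetical reference reflectances (R, G, B); real values would
# come from calibrated measurements on known ore minerals.
REFERENCE_RGB = {
    "chalcopyrite": (190, 170, 90),
    "galena": (150, 150, 150),
    "background": (20, 20, 20),
}

def classify_pixels(rgb):
    """Assign each pixel to the reference mineral with the closest
    RGB reflectance (Euclidean distance in colour space)."""
    names = list(REFERENCE_RGB)
    refs = np.array([REFERENCE_RGB[n] for n in names], dtype=float)
    dist = np.linalg.norm(rgb[..., None, :].astype(float) - refs, axis=-1)
    return np.array(names)[dist.argmin(axis=-1)]
```

The practical difficulty the abstract notes, similar reflectance values between important ores, corresponds here to reference triplets lying close together in colour space, which is why quality control and complementary (e.g. metallogenetic) criteria matter.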

  5. Automated image analysis to quantify the subnuclear organization of transcriptional coregulatory protein complexes in living cell populations

    Science.gov (United States)

    Voss, Ty C.; Demarco, Ignacio A.; Booker, Cynthia F.; Day, Richard N.

    2004-06-01

    Regulated gene transcription is dependent on the steady-state concentration of DNA-binding and coregulatory proteins assembled in distinct regions of the cell nucleus. For example, several different transcriptional coactivator proteins, such as the Glucocorticoid Receptor Interacting Protein (GRIP), localize to distinct spherical intranuclear bodies that vary from approximately 0.2-1 micron in diameter. We are using multi-spectral wide-field microscopy of cells expressing coregulatory proteins labeled with the fluorescent proteins (FP) to study the mechanisms that control the assembly and distribution of these structures in living cells. However, variability between cells in the population makes an unbiased and consistent approach to this image analysis absolutely critical. To address this challenge, we developed a protocol for rigorous quantification of subnuclear organization in cell populations. Cells transiently co-expressing a green FP (GFP)-GRIP and the monomeric red FP (mRFP) are selected for imaging based only on the signal in the red channel, eliminating bias due to knowledge of coregulator organization. The impartially selected images of the GFP-coregulatory protein are then analyzed using an automated algorithm to objectively identify and measure the intranuclear bodies. By integrating all these features, this combination of unbiased image acquisition and automated analysis facilitates the precise and consistent measurement of thousands of protein bodies from hundreds of individual living cells that represent the population.

  6. Screening of subfertile men for testicular carcinoma in situ by an automated image analysis-based cytological test of the ejaculate

    DEFF Research Database (Denmark)

    Almstrup, K; Lippert, Marianne; Mogensen, Hanne O

    2011-01-01

    and detected in ejaculates with specific CIS markers. We have built a high throughput framework involving automated immunocytochemical staining, scanning microscopy and in silico image analysis allowing automated detection and grading of CIS-like stained objects in semen samples. In this study, 1175 ejaculates...... a slightly lower sensitivity (0.51), possibly because of obstruction. We conclude that this novel non-invasive test combining automated immunocytochemistry and advanced image analysis allows identification of TC at the CIS stage with a high specificity, but a negative test does not completely exclude CIS...

  7. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
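The server-side batch orchestration described here can be sketched, in spirit, with Python's standard library. This is not FARSIGHT code, just a minimal illustration of multi-threaded execution with per-step logging:

```python
import concurrent.futures
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def process_tile(tile_id):
    """Placeholder for one pipeline step (mosaicking, artifact
    correction, segmentation) applied to a single image tile."""
    logging.info("processing tile %s", tile_id)
    return tile_id, "done"

def run_pipeline(tile_ids, workers=4):
    """Run all tiles through the placeholder step on a thread pool,
    returning a tile_id -> status mapping."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(process_tile, tile_ids))
```

For a 250 GB dataset, the real script additionally moves data between compute and storage servers; the thread-pool-plus-logging pattern is the part that transfers directly.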

  8. Colorimetric focus-forming assay with automated focus counting by image analysis for quantification of infectious hepatitis C virions.

    Directory of Open Access Journals (Sweden)

    Wonseok Kang

    Full Text Available Hepatitis C virus (HCV) infection is the leading cause of liver transplantation in Western countries. Studies of HCV infection using cell culture-produced HCV (HCVcc) in vitro systems require quantification of infectious HCV virions, which has conventionally been performed by immunofluorescence-based focus-forming assay with manual foci counting; however, this is a laborious and time-consuming procedure with potentially biased results. In the present study, we established and optimized a method for convenient and objective quantification of HCV virions by colorimetric focus-forming assay with automated focus counting by image analysis. In testing different enzymes and chromogenic substrates, we obtained superior foci development using alkaline phosphatase-conjugated secondary antibody with BCIP/NBT chromogenic substrate. We additionally found that type I collagen coating minimized cell detachment during vigorous washing of the assay plate. After the colorimetric focus-forming assay, the foci number was determined using an ELISpot reader and image analysis software. The foci number and the calculated viral titer determined by this method strongly correlated with those determined by immunofluorescence-based focus-forming assay and manual foci counting. These results indicate that colorimetric focus-forming assay with automated focus counting by image analysis is applicable as a more-efficient and objective method for quantification of infectious HCV virions.
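Once foci are counted automatically, the titer calculation itself is simple arithmetic. A sketch with hypothetical replicate counts (the counts, dilution, and inoculum volume below are invented for illustration):

```python
def titer_ffu_per_ml(focus_counts, dilution_factor, inoculum_ml):
    """Viral titer (focus-forming units per ml) from replicate focus
    counts of wells inoculated at a single dilution."""
    mean_foci = sum(focus_counts) / len(focus_counts)
    return mean_foci * dilution_factor / inoculum_ml
```

For example, replicate counts of 40 and 60 foci at a 1:10,000 dilution with a 0.1 ml inoculum give a titer of 5 × 10⁶ FFU/ml.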

  9. CometQ: An automated tool for the detection and quantification of DNA damage using comet assay image analysis.

    Science.gov (United States)

    Ganapathy, Sreelatha; Muraleedharan, Aparna; Sathidevi, Puthumangalathu Savithri; Chand, Parkash; Rajkumar, Ravi Philip

    2016-09-01

    DNA damage analysis plays an important role in determining the approaches for treatment and prevention of various diseases like cancer, schizophrenia and other heritable diseases. Comet assay is a sensitive and versatile method for DNA damage analysis. The main objective of this work is to implement a fully automated tool for the detection and quantification of DNA damage by analysing comet assay images. The comet assay image analysis consists of four stages: (1) classifier (2) comet segmentation (3) comet partitioning and (4) comet quantification. Main features of the proposed software are the design and development of four comet segmentation methods, and the automatic routing of the input comet assay image to the most suitable one among these methods depending on the type of the image (silver stained or fluorescent stained) as well as the level of DNA damage (heavily damaged or lightly/moderately damaged). A classifier stage, based on support vector machine (SVM) is designed and implemented at the front end, to categorise the input image into one of the above four groups to ensure proper routing. Comet segmentation is followed by comet partitioning which is implemented using a novel technique coined as modified fuzzy clustering. Comet parameters are calculated in the comet quantification stage and are saved in an excel file. Our dataset consists of 600 silver stained images obtained from 40 Schizophrenia patients with different levels of severity, admitted to a tertiary hospital in South India and 56 fluorescent stained images obtained from different internet sources. The performance of "CometQ", the proposed standalone application for automated analysis of comet assay images, is evaluated by a clinical expert and is also compared with that of a most recent and related software-OpenComet. CometQ gave 90.26% positive predictive value (PPV) and 93.34% sensitivity which are much higher than those of OpenComet, especially in the case of silver stained images. The
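The front-end routing idea, classifying each input image and dispatching it to the best-suited segmentation method, can be shown with a toy rule-based router. The real tool uses an SVM on image features; the two features and the thresholds below are invented purely for illustration:

```python
def route_comet_image(mean_saturation, tail_pixel_fraction):
    """Route an image to one of four segmentation methods based on
    stain type and damage level (illustrative thresholds only)."""
    stain = "fluorescent" if mean_saturation > 0.5 else "silver"
    damage = "heavy" if tail_pixel_fraction > 0.3 else "light_or_moderate"
    return f"{stain}_{damage}"
```

The four return values correspond to the four image groups the SVM classifier in CometQ distinguishes before segmentation.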

  10. Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN.

    Science.gov (United States)

    Schoening, Timm; Bergmann, Melanie; Ontrup, Jörg; Taylor, James; Dannheim, Jennifer; Gutt, Julian; Purser, Autun; Nattkemper, Tim W

    2012-01-01

    Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive and therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogenous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreements of human experts exhibit considerable variation between the species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e. g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i. e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS.

  11. Semi-automated image analysis for the assessment of megafaunal densities at the Arctic deep-sea observatory HAUSGARTEN.

    Directory of Open Access Journals (Sweden)

    Timm Schoening

    Full Text Available Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive and therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogenous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreements of human experts exhibit considerable variation between the species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e.g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i.e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS.

  12. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

    Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral brain tissue images surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels of 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  13. Comparison of Manual Mapping and Automated Object-Based Image Analysis of Non-Submerged Aquatic Vegetation from Very-High-Resolution UAS Images

    Directory of Open Access Journals (Sweden)

    Eva Husson

    2016-09-01

    Full Text Available Aquatic vegetation has important ecological and regulatory functions and should be monitored in order to detect ecosystem changes. Field data collection is often costly and time-consuming; remote sensing with unmanned aircraft systems (UASs) provides aerial images with sub-decimetre resolution and offers a potential data source for vegetation mapping. In a manual mapping approach, UAS true-colour images with 5-cm-resolution pixels allowed for the identification of non-submerged aquatic vegetation at the species level. However, manual mapping is labour-intensive, and while automated classification methods are available, they have rarely been evaluated for aquatic vegetation, particularly at the scale of individual vegetation stands. We evaluated classification accuracy and time-efficiency for mapping non-submerged aquatic vegetation at three levels of detail at five test sites (100 m × 100 m) differing in vegetation complexity. We used object-based image analysis and tested two classification methods (threshold classification and Random Forest) using eCognition®. The automated classification results were compared to results from manual mapping. Using threshold classification, overall accuracy at the five test sites ranged from 93% to 99% for the water-versus-vegetation level and from 62% to 90% for the growth-form level. Using Random Forest classification, overall accuracy ranged from 56% to 94% for the growth-form level and from 52% to 75% for the dominant-taxon level. Overall classification accuracy decreased with increasing vegetation complexity. In test sites with more complex vegetation, automated classification was more time-efficient than manual mapping. This study demonstrated that automated classification of non-submerged aquatic vegetation from true-colour UAS images was feasible, indicating good potential for operative mapping of aquatic vegetation. 
When choosing the preferred mapping method (manual versus automated), the desired level of

  14. An image analysis pipeline for automated classification of imaging light conditions and for quantification of wheat canopy cover time series in field phenotyping.

    Science.gov (United States)

    Yu, Kang; Kirchgessner, Norbert; Grieder, Christoph; Walter, Achim; Hund, Andreas

    2017-01-01

    Robust segmentation of canopy cover (CC) from large amounts of images taken under different illumination/light conditions in the field is essential for high throughput field phenotyping (HTFP). We attempted to address this challenge by evaluating different vegetation indices and segmentation methods for analyzing images taken at varying illuminations throughout the early growth phase of wheat in the field. 40,000 images taken on 350 wheat genotypes in two consecutive years were assessed for this purpose. We proposed an image analysis pipeline that allowed for image segmentation using automated thresholding and machine learning based classification methods and for global quality control of the resulting CC time series. This pipeline enabled accurate classification of imaging light conditions into two illumination scenarios, i.e. high light-contrast (HLC) and low light-contrast (LLC), in a series of continuously collected images by employing a support vector machine (SVM) model. Accordingly, the scenario-specific pixel-based classification models employing decision tree and SVM algorithms were able to outperform the automated thresholding methods, as well as improved the segmentation accuracy compared to general models that did not discriminate illumination differences. The three-band vegetation difference index (NDI3) was enhanced for segmentation by incorporating the HSV-V and the CIE Lab-a color components, i.e. the product images NDI3*V and NDI3*a. Field illumination scenarios can be successfully identified by the proposed image analysis pipeline, and the illumination-specific image segmentation can improve the quantification of CC development. The integrated image analysis pipeline proposed in this study provides great potential for automatically delivering robust data in HTFP.
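The segmentation target, canopy cover as the fraction of vegetation pixels, can be sketched with a simple colour index. The paper's enhanced NDI3*V and NDI3*a indices are not reproduced here; the excess-green index below is a common, simpler stand-in:

```python
import numpy as np

def canopy_cover(rgb):
    """Fraction of pixels classified as canopy using the excess-green
    index ExG = 2g - r - b computed on chromaticity coordinates."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1)
    s[s == 0] = 1.0                      # avoid division by zero
    r, g, b = (rgb[..., i] / s for i in range(3))
    exg = 2.0 * g - r - b
    return float((exg > 0).mean())
```

A fixed zero threshold like this is exactly what breaks down across illumination scenarios, which motivates the scenario-specific classification models described in the abstract.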

  15. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    Science.gov (United States)

    Xue, Yong; Chen, Shihui; Liu, Yong

    2017-01-01

    Molecular imaging enables the visualization and quantitative analysis of the alterations of biological procedures at molecular and/or cellular level, which is of great significance for early detection of cancer. In recent years, deep leaning has been widely used in medical imaging analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing dynamically. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in the applications in cancer molecular imaging. PMID:29114182

  16. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    Directory of Open Access Journals (Sweden)

    Yong Xue

    2017-01-01

    Full Text Available Molecular imaging enables the visualization and quantitative analysis of the alterations of biological procedures at molecular and/or cellular level, which is of great significance for early detection of cancer. In recent years, deep leaning has been widely used in medical imaging analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing dynamically. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in the applications in cancer molecular imaging.

  17. Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

    Directory of Open Access Journals (Sweden)

    Manuel González-Rivero

    2016-01-01

    Full Text Available Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5% among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows a rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).

  18. Development and application of an automated analysis method for individual cerebral perfusion single photon emission tomography images

    International Nuclear Information System (INIS)

    Cluckie, Alice Jane

    2001-01-01

    Neurological images may be analysed by performing voxel by voxel comparisons with a group of control subject images. An automated, 3D, voxel-based method has been developed for the analysis of individual single photon emission tomography (SPET) scans. Clusters of voxels are identified that represent regions of abnormal radiopharmaceutical uptake. Morphological operators are applied to reduce noise in the clusters, then quantitative estimates of the size and degree of the radiopharmaceutical uptake abnormalities are derived. Statistical inference has been performed using a Monte Carlo method that has not previously been applied to SPET scans, or for the analysis of individual images. This has been validated for group comparisons of SPET scans and for the analysis of an individual image using comparison with a group. Accurate statistical inference was obtained independent of experimental factors such as degrees of freedom, image smoothing and voxel significance level threshold. The analysis method has been evaluated for application to cerebral perfusion SPET imaging in ischaemic stroke. It has been shown that useful quantitative estimates, high sensitivity and high specificity may be obtained. Sensitivity and the accuracy of signal quantification were found to be dependent on the operator defined analysis parameters. Recommendations for the values of these parameters have been made. The analysis method developed has been compared with an established method and shown to result in higher specificity for the data and analysis parameter sets tested. In addition, application to a group of ischaemic stroke patient SPET scans has demonstrated its clinical utility. The influence of imaging conditions has been assessed using phantom data acquired with different gamma camera SPET acquisition parameters. A lower limit of five million counts and standardisation of all acquisition parameters has been recommended for the analysis of individual SPET scans. (author)
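
    The voxel-based comparison described above can be sketched in miniature: each patient voxel is converted to a z-score against the control group, supra-threshold voxels are grouped into contiguous clusters, and small clusters are discarded (a crude stand-in for the morphological noise-reduction step). This is a toy reconstruction under stated assumptions (2D grids, 4-connectivity, a made-up z threshold), not the author's code:

```python
# Toy voxel-based analysis: z-score thresholding followed by
# connected-component clustering of abnormal voxels.
def zscore_clusters(patient, ctrl_mean, ctrl_sd, z_thresh=3.0, min_size=2):
    """Return clusters of voxels whose |z| against controls exceeds z_thresh.

    patient, ctrl_mean, ctrl_sd: 2D lists of equal shape.
    Clusters smaller than min_size voxels are discarded as noise.
    """
    rows, cols = len(patient), len(patient[0])
    above = [[abs((patient[r][c] - ctrl_mean[r][c]) / ctrl_sd[r][c]) > z_thresh
              for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if above[r][c] and not seen[r][c]:
                stack, member = [(r, c)], []
                seen[r][c] = True
                while stack:  # flood fill, 4-connectivity
                    y, x = stack.pop()
                    member.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and above[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(member) >= min_size:
                    clusters.append(member)
    return clusters
```

    On real SPET data the same logic runs over 3D volumes, and cluster-level significance is then assessed with the Monte Carlo method the thesis describes.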

  19. Application of automated image analysis reduces the workload of manual screening of sentinel lymph node biopsies in breast cancer

    DEFF Research Database (Denmark)

    Holten-Rossing, Henrik; Talman, Maj-Lis Møller; Jylling, Anne Marie Bak

    2017-01-01

    AIMS: Breast cancer is one of the most common cancer diseases in women, with >1.67 million cases being diagnosed worldwide each year. In breast cancer, the sentinel lymph node (SLN) pinpoints the first lymph node(s) into which the tumour spreads, and it is usually located in the ipsilateral axill...... tool for selecting those slides that a pathologist does not need to see. The implementation of automated digital image analysis of SLNBs in breast cancer would decrease the workload in this context for examining pathologists by almost 60%....

  20. A simple viability analysis for unicellular cyanobacteria using a new autofluorescence assay, automated microscopy, and ImageJ

    Directory of Open Access Journals (Sweden)

    Schulze Katja

    2011-11-01

    Full Text Available Abstract Background Currently established methods to identify viable and non-viable cells of cyanobacteria are either time-consuming (e.g., plating) or preparation-intensive (e.g., fluorescent staining). In this paper we present a new and fast viability assay for unicellular cyanobacteria, which uses red chlorophyll fluorescence and an unspecific green autofluorescence for the differentiation of viable and non-viable cells without the need of sample preparation. Results The viability assay for unicellular cyanobacteria using red and green autofluorescence was established and validated for the model organism Synechocystis sp. PCC 6803. Both autofluorescence signals could be observed simultaneously, allowing a direct classification of viable and non-viable cells. The results were confirmed by plating/colony count, absorption spectra and chlorophyll measurements. The use of an automated fluorescence microscope and a novel ImageJ-based image analysis plugin allows a semi-automated analysis. Conclusions The new method simplifies the process of viability analysis and allows a quick and accurate analysis. Furthermore, results indicate that a combination of the new assay with absorption spectra or chlorophyll concentration measurements allows the estimation of the vitality of cells.
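
    The two-channel decision rule described above reduces to a simple per-cell comparison: a cell is called viable when its red chlorophyll fluorescence dominates, non-viable when the unspecific green autofluorescence dominates. A minimal sketch; the threshold ratio and function name are made-up placeholders, not values from the paper:

```python
# Toy viable/non-viable classification from per-cell red and green
# autofluorescence intensities, in the spirit of the assay described.
def classify_cells(cells, red_green_ratio=1.0):
    """cells: list of (red_intensity, green_intensity) per segmented cell.
    Returns a parallel list of 'viable' / 'non-viable' calls."""
    return ['viable' if red > red_green_ratio * green else 'non-viable'
            for red, green in cells]
```

    In the published workflow the per-cell intensities come from the automated microscope and the ImageJ plugin; this rule is only the final classification step.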

  1. Automated MicroSPECT/MicroCT Image Analysis of the Mouse Thyroid Gland.

    Science.gov (United States)

    Cheng, Peng; Hollingsworth, Brynn; Scarberry, Daniel; Shen, Daniel H; Powell, Kimerly; Smart, Sean C; Beech, John; Sheng, Xiaochao; Kirschner, Lawrence S; Menq, Chia-Hsiang; Jhiang, Sissy M

    2017-11-01

    The ability of thyroid follicular cells to take up iodine enables the use of radioactive iodine (RAI) for imaging and targeted killing of RAI-avid thyroid cancer following thyroidectomy. To facilitate identifying novel strategies to improve 131I therapeutic efficacy for patients with RAI refractory disease, it is desired to optimize image acquisition and analysis for preclinical mouse models of thyroid cancer. A customized mouse cradle was designed and used for microSPECT/CT image acquisition at 1 hour (t1) and 24 hours (t24) post injection of 123I, which mainly reflect RAI influx/efflux equilibrium and RAI retention in the thyroid, respectively. FVB/N mice with normal thyroid glands and TgBRAF V600E mice with thyroid tumors were imaged. In-house CTViewer software was developed to streamline image analysis with new capabilities, along with display of 3D voxel-based 123I gamma photon intensity in MATLAB. The customized mouse cradle facilitates consistent tissue configuration among image acquisitions such that rigid body registration can be applied to align serial images of the same mouse via the in-house CTViewer software. CTViewer is designed specifically to streamline SPECT/CT image analysis with functions tailored to quantify thyroid radioiodine uptake. Automatic segmentation of thyroid volumes of interest (VOI) from adjacent salivary glands in t1 images is enabled by superimposing the thyroid VOI from the t24 image onto the corresponding aligned t1 image. The extent of heterogeneity in 123I accumulation within thyroid VOIs can be visualized by 3D display of voxel-based 123I gamma photon intensity. MicroSPECT/CT image acquisition and analysis for thyroidal RAI uptake is greatly improved by the cradle and the CTViewer software, respectively. Furthermore, the approach of superimposing thyroid VOIs from t24 images to select thyroid VOIs on corresponding aligned t1 images can be applied to studies in which the target tissue has differential radiotracer retention.
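
    The VOI-transfer step described above can be sketched as follows: a thyroid VOI segmented on the t24 image (where salivary-gland background has cleared) is reused as a mask on the rigidly aligned t1 image to sum tracer counts. Toy 2D nested lists stand in for registered 3D volumes; the function name is illustrative, not from CTViewer:

```python
# Toy VOI masking: sum image intensities inside a boolean mask that was
# defined on a co-registered later time point.
def masked_uptake(image, voi_mask):
    """Sum voxel intensities of `image` inside the boolean `voi_mask`.
    Both arguments are equally shaped 2D nested lists."""
    return sum(v for row, mrow in zip(image, voi_mask)
                 for v, m in zip(row, mrow) if m)
```

    The same one-liner, applied with the t24-derived mask to both the t1 and t24 volumes, yields the paired uptake values that the study compares.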

  2. Quantitative measurements of human sperm nuclei using automated microscopy and image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wyrobek, A.J.; Firpo, M. (Lawrence Livermore National Lab., CA (United States)); Sudar, D. (Univ. of California, San Francisco (United States))

    1993-01-01

    A package of computer codes, called Morphometry Automation Program (MAP), was developed to (a) detect human sperm smeared onto glass slides, (b) measure more than 30 aspects of the size, shape, texture, and staining of their nuclei, and (c) retain operator evaluation of the process. MAP performs the locating and measurement functions automatically, without operator assistance. In addition to standard measurements, MAP utilizes axial projections of nuclear area and stain intensity to detect asymmetries. MAP also stores for each cell the gray-scale images for later display and evaluation, and it retains coordinates for optional relocation and inspection under the microscope. MAP operates on the Quantitative Image Processing System (QUIPS) at LLNL. MAP has potential applications in the evaluation of infertility and in reproductive toxicology, such as (a) classifying sperm into clinical shape categories for assessing fertility status, (b) identifying subtle effects of host factors (diet, stress, etc.), (c) assessing the risk of potential spermatogenic toxicants (tobacco, drugs, etc.), and (d) investigating associations with abnormal pregnancy outcomes (time to pregnancy, early fetal loss, etc.).
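
    The axial-projection idea mentioned above can be illustrated with a toy asymmetry score: stain intensity is projected onto the long axis of the nucleus and the two halves are compared (0 = perfectly symmetric, 1 = fully one-sided). This is a hypothetical reconstruction for illustration, not the MAP code:

```python
# Toy axial asymmetry: compare the two halves of a 1D axial projection
# of nuclear stain intensity.
def axial_asymmetry(profile):
    """profile: 1D axial projection of stain intensity along the nucleus.
    Returns |left - right| / total after splitting at the midpoint
    (the middle sample of an odd-length profile is excluded)."""
    mid = len(profile) // 2
    if mid == 0:
        return 0.0
    left, right = sum(profile[:mid]), sum(profile[-mid:])
    total = sum(profile)
    return abs(left - right) / total if total else 0.0
```

    In practice such a score would be one of the 30+ size, shape, texture, and staining measurements MAP records per nucleus.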

  3. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data on the example of using PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras placed on airplanes, satellites, or more often on UAVs) is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of such a situation, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric image application, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software dividing the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics for both images obtained by a metric Vexcel camera and a block of images acquired by a non-metric UAV system.

  4. Automated boundary segmentation and wound analysis for longitudinal corneal OCT images

    Science.gov (United States)

    Wang, Fei; Shi, Fei; Zhu, Weifang; Pan, Lingjiao; Chen, Haoyu; Huang, Haifan; Zheng, Kangkeng; Chen, Xinjian

    2017-03-01

    Optical coherence tomography (OCT) has been widely applied in the examination and diagnosis of corneal diseases, but the information directly obtained from the OCT images by manual inspection is limited. We propose an automatic processing method to assist ophthalmologists in locating the boundaries in corneal OCT images and analyzing the recovery of corneal wounds after treatment from longitudinal OCT images. It includes the following steps: preprocessing, epithelium and endothelium boundary segmentation and correction, wound detection, corneal boundary fitting and wound analysis. The method was tested on a data set with longitudinal corneal OCT images from 20 subjects. Each subject has five images acquired after corneal operation over a period of time. The segmentation and classification accuracy of the proposed algorithm is high and can be used for analyzing wound recovery after corneal surgery.
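
    The boundary-segmentation step can be sketched in its simplest form: within each image column (A-scan), the tissue boundary is taken as the row of maximum vertical intensity gradient. The real pipeline adds denoising, curve fitting and correction; this toy version is illustrative only, and the function name is an assumption:

```python
# Toy OCT boundary segmentation: per-column maximum-gradient search.
def segment_boundary(bscan):
    """bscan: 2D list (rows x cols) of intensities, top row first.
    Returns, per column, the row index where the largest positive
    vertical intensity jump occurs (i.e., the first bright-layer row)."""
    rows, cols = len(bscan), len(bscan[0])
    boundary = []
    for c in range(cols):
        grads = [bscan[r + 1][c] - bscan[r][c] for r in range(rows - 1)]
        boundary.append(max(range(rows - 1), key=grads.__getitem__) + 1)
    return boundary
```

    Running this on both the epithelium (dark-to-bright) and, with a sign flip, the endothelium (bright-to-dark) transitions gives the two corneal boundaries the method then fits and corrects.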

  5. Automated image analysis of cyclin D1 protein expression in invasive lobular breast carcinoma provides independent prognostic information.

    Science.gov (United States)

    Tobin, Nicholas P; Lundgren, Katja L; Conway, Catherine; Anagnostaki, Lola; Costello, Sean; Landberg, Göran

    2012-11-01

    The emergence of automated image analysis algorithms has aided the enumeration, quantification, and immunohistochemical analyses of tumor cells in both whole section and tissue microarray samples. To date, the focus of such algorithms in the breast cancer setting has been on traditional markers in the common invasive ductal carcinoma subtype. Here, we aimed to optimize and validate an automated analysis of the cell cycle regulator cyclin D1 in a large collection of invasive lobular carcinoma and relate its expression to clinicopathologic data. The image analysis algorithm was trained to optimally match manual scoring of cyclin D1 protein expression in a subset of invasive lobular carcinoma tissue microarray cores. The algorithm was capable of distinguishing cyclin D1-positive cells and illustrated high correlation with traditional manual scoring (κ=0.63). It was then applied to our entire cohort of 483 patients, with subsequent statistical comparisons to clinical data. We found no correlation between cyclin D1 expression and tumor size, grade, and lymph node status. However, overexpression of the protein was associated with reduced recurrence-free survival (P=.029), as was positive nodal status (P<…) in invasive lobular carcinoma. Finally, high cyclin D1 expression was associated with an increased hazard ratio in multivariate analysis (hazard ratio, 1.75; 95% confidence interval, 1.05-2.89). In conclusion, we describe an image analysis algorithm capable of reliably analyzing cyclin D1 staining in invasive lobular carcinoma and have linked overexpression of the protein to increased recurrence risk. Our findings support the use of cyclin D1 as a clinically informative biomarker for invasive lobular breast cancer. Copyright © 2012 Elsevier Inc. All rights reserved.
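
    The κ=0.63 agreement quoted above is Cohen's kappa, which compares observed agreement between two scorers (here, algorithm vs manual cyclin D1 scores) with the agreement expected by chance. A minimal sketch with illustrative toy labels:

```python
# Cohen's kappa for two raters over the same cases.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two parallel label lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)
```

    A kappa of 1 means perfect agreement, 0 means agreement no better than chance; 0.63 is conventionally read as substantial agreement.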

  6. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis

    Directory of Open Access Journals (Sweden)

    Joshua D Webster

    2012-01-01

    Full Text Available The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and a tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden <…). Values >0.988 indicated high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.
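
    The Bland-Altman comparison used above works on paired measurements from two methods: it reports the mean bias and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch; the input values below are illustrative, not data from the study:

```python
# Bland-Altman agreement statistics for paired measurements.
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for two paired measurement lists."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    Plotting each pair's difference against its mean, with these three horizontal lines overlaid, also reveals the heteroscedasticity (variance growing with tumor burden) that the abstract reports.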

  7. Bright field microscopy as an alternative to whole cell fluorescence in automated analysis of macrophage images.

    Directory of Open Access Journals (Sweden)

    Jyrki Selinummi

    2009-10-01

    Full Text Available Fluorescence microscopy is the standard tool for detection and analysis of cellular phenomena. This technique, however, has a number of drawbacks such as the limited number of available fluorescent channels in microscopes, overlapping excitation and emission spectra of the stains, and phototoxicity. We here present and validate a method to automatically detect cell population outlines directly from bright field images. By imaging samples with several focus levels forming a bright field z-stack, and by measuring the intensity variations of this stack over the z-dimension, we construct a new two-dimensional projection image of increased contrast. With additional information for locations of each cell, such as stained nuclei, this bright field projection image can be used instead of whole cell fluorescence to locate borders of individual cells, separating touching cells, and enabling single cell analysis. Using the popular CellProfiler freeware cell image analysis software, mainly targeted for fluorescence microscopy, we validate our method by automatically segmenting low-contrast and rather complex shaped murine macrophage cells. The proposed approach frees up a fluorescence channel, which can be used for subcellular studies. It also facilitates cell shape measurement in experiments where whole cell fluorescent staining is either not available, or is dependent on a particular experimental condition. We show that whole cell area detection results using our projected bright field images match closely to the standard approach where cell areas are localized using fluorescence, and conclude that the high contrast bright field projection image can directly replace one fluorescent channel in whole cell quantification. Matlab code for calculating the projections can be downloaded from the supplementary site: http://sites.google.com/site/brightfieldorstaining.
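
    The contrast projection described above can be sketched directly: intensity variance across the focus (z) dimension of the bright field stack is computed per pixel, producing a 2D image in which cell borders (which change strongly with focus) stand out. Toy 3D nested lists stand in for a real image stack; the authors' released code is Matlab, so this Python version is only an illustration of the idea:

```python
# Per-pixel intensity variance over the z (focus) dimension of a
# bright field stack, yielding a high-contrast 2D projection.
def variance_projection(stack):
    """stack: list of 2D focus levels, indexed [z][row][col].
    Returns a 2D image of per-pixel intensity variance over z."""
    z = len(stack)
    rows, cols = len(stack[0]), len(stack[0][0])
    proj = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [stack[k][r][c] for k in range(z)]
            m = sum(vals) / z
            proj[r][c] = sum((v - m) ** 2 for v in vals) / z
    return proj
```

    Pixels whose intensity is stable across focus levels (background, flat cell interiors) project to near zero; pixels at refractive cell edges project to high values, which is what makes the projection usable for segmentation.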

  8. Optimizing object-based image analysis for semi-automated geomorphological mapping

    NARCIS (Netherlands)

    Anders, N.; Smith, M.; Seijmonsbergen, H.; Bouten, W.; Hengl, T.; Evans, I.S.; Wilson, J.P.; Gould, M.

    2011-01-01

    Object-Based Image Analysis (OBIA) is considered a useful tool for analyzing high-resolution digital terrain data. In the past, both segmentation and classification parameters were optimized manually by trial and error. We propose a method to automatically optimize classification parameters for

  9. Automated whole-slide analysis of multiplex-brightfield IHC images for cancer cells and carcinoma-associated fibroblasts

    Science.gov (United States)

    Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg

    2017-03-01

    Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm reported distance to nearest neighbor in the populations of tumor cells and activated-stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R2=0.98) to ground-truth count. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R2=0.93 whereas the algorithm reproduced the pathologists' areas with R2=0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
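
    The spatial readout described above can be sketched as follows: for each tumor cell, find the distance to the nearest activated fibroblast, then average those distances over the slide. Brute force suffices for a toy example (production code would use a spatial index such as a k-d tree); the coordinates and function name are illustrative:

```python
# Mean nearest-neighbor distance from one cell population to another,
# as a whole-slide summary of tumor-CAF proximity.
import math

def mean_nn_distance(tumor_cells, caf_cells):
    """Mean distance from each tumor cell to its nearest CAF.
    Both inputs are lists of (x, y) cell centroids."""
    def nearest(p):
        return min(math.dist(p, q) for q in caf_cells)
    return sum(nearest(p) for p in tumor_cells) / len(tumor_cells)
```

    Comparing this summary between treatment arms, or against a cutoff, is the kind of hypothesis test the abstract envisions for CAF-targeting therapy.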

  10. Immunohistochemical Ki-67/KL1 double stains increase accuracy of Ki-67 indices in breast cancer and simplify automated image analysis

    DEFF Research Database (Denmark)

    Nielsen, Patricia S; Bentzer, Nina K; Jensen, Vibeke

    2014-01-01

    ...... by digital image analysis. This study aims to detect the difference in accuracy and precision between manual indices of single and double stains, to develop an automated quantification of double stains, and to explore the relation between automated indices and tumor characteristics when quantified in different regions: hot spots, global tumor areas, and invasive fronts. MATERIALS AND METHODS: Paraffin-embedded, formalin-fixed tissue from 100 consecutive patients with invasive breast cancer was immunohistochemically stained for Ki-67 and Ki-67/KL1. Ki-67 was manually scored in different regions by 2 observers and automated image analysis. RESULTS: Indices were predominantly higher for single stains than double stains (P≤0.002), yet the difference between observers was statistically significant (P<…). Correlations between manual and automated indices ranged from 0......

  11. Automated Image Analysis in Undetermined Sections of Human Permanent Third Molars

    DEFF Research Database (Denmark)

    Bjørndal, Lars; Darvann, Tron Andre; Bro-Nielsen, Morten

    1997-01-01

    A computerized histomorphometric analysis was made of Karnovsky-fixed, hydroxyethyl methacrylate-embedded and toluidine blue/pyronin-stained sections to determine: (1) the two-dimensional size of the coronal odontoblasts given by their cytoplasm:nucleus ratio; (2) the ratio between the number of co...... sectioning profiles should be analysed. The use of advanced image processing on undemineralized tooth sections provides a rational foundation for further work on the reactions of the odontoblasts to external injuries including dental caries.

  12. Validation of tumor protein marker quantification by two independent automated immunofluorescence image analysis platforms

    Science.gov (United States)

    Peck, Amy R; Girondo, Melanie A; Liu, Chengbao; Kovatich, Albert J; Hooke, Jeffrey A; Shriver, Craig D; Hu, Hai; Mitchell, Edith P; Freydin, Boris; Hyslop, Terry; Chervoneva, Inna; Rui, Hallgeir

    2016-01-01

    Protein marker levels in formalin-fixed, paraffin-embedded tissue sections traditionally have been assayed by chromogenic immunohistochemistry and evaluated visually by pathologists. Pathologist scoring of chromogen staining intensity is subjective and generates low-resolution ordinal or nominal data rather than continuous data. Emerging digital pathology platforms now allow quantification of chromogen or fluorescence signals by computer-assisted image analysis, providing continuous immunohistochemistry values. Fluorescence immunohistochemistry offers greater dynamic signal range than chromogen immunohistochemistry, and combined with image analysis holds the promise of enhanced sensitivity and analytic resolution, and consequently more robust quantification. However, commercial fluorescence scanners and image analysis software differ in features and capabilities, and claims of objective quantitative immunohistochemistry are difficult to validate as pathologist scoring is subjective and there is no accepted gold standard. Here we provide the first side-by-side validation of two technologically distinct commercial fluorescence immunohistochemistry analysis platforms. We document highly consistent results by (1) concordance analysis of fluorescence immunohistochemistry values and (2) agreement in outcome predictions both for objective, data-driven cutpoint dichotomization with Kaplan–Meier analyses or employment of continuous marker values to compute receiver-operating curves. The two platforms examined rely on distinct fluorescence immunohistochemistry imaging hardware, microscopy vs line scanning, and functionally distinct image analysis software. Fluorescence immunohistochemistry values for nuclear-localized and tyrosine-phosphorylated Stat5a/b computed by each platform on a cohort of 323 breast cancer cases revealed high concordance after linear calibration, a finding confirmed on an independent 382 case cohort, with concordance correlation coefficients >0
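
    The concordance analysis mentioned above is conventionally done with Lin's concordance correlation coefficient (CCC), which measures how closely paired platform readouts fall along the 45-degree identity line, penalizing both scatter and systematic shift. A minimal sketch; the input values are illustrative, not study data:

```python
# Lin's concordance correlation coefficient for paired readouts
# from two measurement platforms.
from statistics import mean, pvariance

def lins_ccc(x, y):
    """CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    mx, my = mean(x), mean(y)
    vx, vy = pvariance(x), pvariance(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

    Unlike Pearson's r, CCC drops below 1 when one platform is systematically shifted or scaled relative to the other, which is why the abstract's linear calibration step matters before computing it.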

  13. Automated detection and analysis of fluorescent in situ hybridization spots depicted in digital microscopic images of Pap-smear specimens

    Science.gov (United States)

    Wang, Xingwei; Zheng, Bin; Li, Shibo; Zhang, Roy; Mulvihill, John J.; Chen, Wei R.; Liu, Hong

    2009-03-01

    Fluorescence in situ hybridization (FISH) technology has been widely recognized as a promising molecular and biomedical optical imaging tool to screen and diagnose cervical cancer. However, manual FISH analysis is time-consuming and may introduce large inter-reader variability. In this study, a computerized scheme is developed and tested. It automatically detects and analyzes FISH spots depicted on microscopic fluorescence images. The scheme includes two stages: (1) a feature-based classification rule to detect useful interphase cells, and (2) a knowledge-based expert classifier to identify splitting FISH spots and improve the accuracy of counting independent FISH spots. The scheme then classifies detected analyzable cells as normal or abnormal. In this study, 150 FISH images were acquired from Pap-smear specimens and examined by both an experienced cytogeneticist and the scheme. The results showed that (1) the agreement between the cytogeneticist and the scheme was 96.9% in classifying between analyzable and unanalyzable cells (Kappa=0.917), and (2) agreements in detecting normal and abnormal cells based on FISH spots were 90.5% and 95.8% with Kappa=0.867. This study demonstrated the feasibility of automated FISH analysis, which may potentially improve detection efficiency and produce more accurate and consistent results than manual FISH analysis.

  14. Automated Image Analysis of HER2 Fluorescence In Situ Hybridization to Refine Definitions of Genetic Heterogeneity in Breast Cancer Tissue

    Directory of Open Access Journals (Sweden)

    Gedmante Radziuviene

    2017-01-01

    Full Text Available Human epidermal growth factor receptor 2 (HER2) gene-targeted therapy for breast cancer relies primarily on HER2 overexpression established by immunohistochemistry (IHC), with borderline cases being further tested for amplification by fluorescence in situ hybridization (FISH). Manual interpretation of HER2 FISH is based on a limited number of cells and rather complex definitions of equivocal, polysomic, and genetically heterogeneous (GH) cases. Image analysis (IA) can extract high-capacity data and potentially improve HER2 testing in borderline cases. We investigated statistically derived indicators of HER2 heterogeneity in HER2 FISH data obtained by automated IA of 50 IHC borderline (2+) cases of invasive ductal breast carcinoma. Overall, IA significantly underestimated the conventional HER2 counts, CEP17 counts, and HER2/CEP17 ratio; however, it collected more amplified cells in some cases below the lower limit of the GH definition by the manual procedure. Indicators for amplification, polysomy, and bimodality were extracted by factor analysis and allowed clustering of the tumors into amplified, nonamplified, and equivocal/polysomy categories. The bimodality indicator provided independent cell diversity characteristics for all clusters. Tumors classified as bimodal only partially coincided with the conventional GH heterogeneity category. We conclude that an automated high-capacity nonselective tumor cell assay can generate evidence-based HER2 intratumor heterogeneity indicators to refine GH definitions.

  15. Automated optical inspection and image analysis of superconducting radio-frequency cavities

    International Nuclear Information System (INIS)

    Wenskat, Marc

    2017-04-01

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. For an investigation of this inner surface of more than 100 cavities within the cavity fabrication for the European XFEL and the ILC HiGrade Research Project, an optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed and new variables to describe the cavity surface were obtained. The accuracy of this code is up to 97% and the PPV 99% within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties such as the electron beam welding speed and the different surface roughness due to the different chemical treatments. In addition, a correlation of ρ=-0.93 with a significance of 6σ between an obtained surface variable and the maximal accelerating field was found.
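
    The ρ=-0.93 quoted above is a correlation coefficient between a per-cavity surface variable and the maximal accelerating field; a strongly negative value means rougher surfaces go with lower fields. A minimal Pearson correlation sketch with toy numbers (the thesis may instead use Spearman's rank ρ, so treat this as illustrative):

```python
# Pearson correlation coefficient between two paired variables.
from statistics import mean

def pearson_r(x, y):
    """r = cov(x, y) / (sd(x) * sd(y)), computed from raw sums."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

    With one (surface variable, max field) pair per cavity, a value near -1 like the reported -0.93 indicates a nearly monotone decreasing relationship.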

  16. Automated optical inspection and image analysis of superconducting radio-frequency cavities

    Science.gov (United States)

    Wenskat, M.

    2017-05-01

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. For an investigation of this inner surface of more than 100 cavities within the cavity fabrication for the European XFEL and the ILC HiGrade Research Project, an optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed and new variables to describe the cavity surface were obtained. The accuracy of this code is up to 97 % and the positive predictive value (PPV) 99 % within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties such as the electron beam welding speed and the different surface roughness due to the different chemical treatments. In addition, a correlation of ρ = -0.93 with a significance of 6 σ between an obtained surface variable and the maximal accelerating field was found.

  17. Automated optical inspection and image analysis of superconducting radio-frequency cavities

    Energy Technology Data Exchange (ETDEWEB)

    Wenskat, Marc

    2017-04-15

    The inner surface of superconducting cavities plays a crucial role in achieving the highest accelerating fields and low losses. For an investigation of this inner surface of more than 100 cavities within the cavity fabrication for the European XFEL and the ILC HiGrade Research Project, an optical inspection robot OBACHT was constructed. To analyze up to 2325 images per cavity, an image processing and analysis code was developed and new variables to describe the cavity surface were obtained. The accuracy of this code is up to 97% and the PPV 99% within the resolution of 15.63 μm. The optically obtained surface roughness is in agreement with standard profilometric methods. The image analysis algorithm identified and quantified vendor-specific fabrication properties such as the electron beam welding speed and the different surface roughness due to the different chemical treatments. In addition, a correlation of ρ=-0.93 with a significance of 6σ between an obtained surface variable and the maximal accelerating field was found.

  18. Automating the Analysis of Spatial Grids A Practical Guide to Data Mining Geospatial Images for Human & Environmental Applications

    CERN Document Server

    Lakshmanan, Valliappa

    2012-01-01

    The ability to create automated algorithms to process gridded spatial data is increasingly important as remotely sensed datasets grow in volume and frequency. Whether in business, social science, ecology, meteorology, or urban planning, automated applications that analyze and detect patterns in geospatial data are in growing demand. This book provides students with a foundation in digital image processing and data mining as applied to geospatial datasets. The aim is for readers to be able to devise and implement automated techniques to extract information from spatial grids such as radar, satellite, or high-resolution survey imagery.

  19. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2016-07-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time-consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  20. Automated analysis of retinal imaging using machine learning techniques for computer vision [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jeffrey De Fauw

    2017-06-01

    Full Text Available There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight-threatening diseases, such as diabetic retinopathy and age-related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases. Ophthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies, including neovascular ("wet") age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the 'back' of the eye) and Optical Coherence Tomography (OCT), a modality that uses light waves in a similar way to how ultrasound uses sound waves. Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time-consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges. This research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients. Through analysis of the images used in ophthalmology, along with relevant clinical and demographic information, DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring therapeutic success.

  1. Automated Morphological and Morphometric Analysis of Mass Spectrometry Imaging Data: Application to Biomarker Discovery

    Science.gov (United States)

    Picard de Muller, Gaël; Ait-Belkacem, Rima; Bonnel, David; Longuespée, Rémi; Stauber, Jonathan

    2017-12-01

    Mass spectrometry imaging datasets are mostly analyzed in terms of average intensity in regions of interest. However, biological tissues have varied morphologies, with structures of several sizes and shapes. Important biological information, contained in this highly heterogeneous cellular organization, can be hidden when only average intensities are analyzed. An analytical treatment of morphology would help to recover such information, describe tissue models, and support the identification of biomarkers. This study describes an informatics approach for the extraction and identification of mass spectrometry image features and its application to sample analysis and modeling. As a proof of concept, two different tissue types (healthy kidney and CT-26 xenograft tumor tissues) were imaged and analyzed. Mouse kidney and tumor models were generated using morphometric information (number of objects and total surface). The morphometric information was used to identify m/z values with heterogeneous distributions, a worthwhile pursuit given that clonal heterogeneity in a tumor is of clinical relevance. This study provides a new approach to finding biomarkers and supporting tissue classification with richer information.
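    The two morphometric quantities used for the models, number of objects and total surface, can be read off a binary feature mask with standard connected-component labelling; a minimal sketch (the mask here is synthetic, not real ion-image data):

```python
import numpy as np
from scipy import ndimage

def morphometrics(mask):
    """Number of connected objects and total surface (in pixels) of a binary mask."""
    _, n_objects = ndimage.label(mask)
    return n_objects, int(mask.sum())

# Synthetic ion-intensity mask with two separate structures.
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:3] = True   # 2x2 object, surface 4
mask[5:8, 5:7] = True   # 3x2 object, surface 6
n_objects, total_surface = morphometrics(mask)
```

    Computing these per m/z image and comparing the values across channels is one way to flag heterogeneously distributed species, in the spirit of the paper.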

  2. Statistical colour models: an automated digital image analysis method for quantification of histological biomarkers.

    Science.gov (United States)

    Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad

    2016-04-27

    Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to the detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, which was found to be the optimal number. A maximum likelihood classifier is used to automatically classify pixels in digital slides as positively or negatively stained. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The evaluation compared the computer model with human assessment. Several large datasets were prepared from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrate that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model shows little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
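    The described pipeline (drop the luminance channel of YCbCr, bin the chroma plane into 128 histogram bins per channel, classify pixels by maximum likelihood) can be sketched as follows; the training colours and the smoothing constant are illustrative, not the paper's values:

```python
import numpy as np

N_BINS = 128  # histogram bins per chroma channel, as in the paper

def rgb_to_cbcr(rgb):
    """RGB (0-255) -> CbCr chroma channels (ITU-R BT.601); luminance discarded."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

def train_histogram(pixels_rgb):
    """Normalised 2-D CbCr histogram acting as a likelihood model P(colour | class)."""
    cbcr = rgb_to_cbcr(np.asarray(pixels_rgb, dtype=float))
    hist, _, _ = np.histogram2d(cbcr[:, 0], cbcr[:, 1],
                                bins=N_BINS, range=[[0, 256], [0, 256]])
    return (hist + 1e-9) / hist.sum()  # small smoothing avoids zero likelihoods

def classify(pixels_rgb, hist_pos, hist_neg):
    """Maximum-likelihood label (True = positively stained) per pixel."""
    cbcr = rgb_to_cbcr(np.asarray(pixels_rgb, dtype=float))
    idx = np.clip((cbcr / 256.0 * N_BINS).astype(int), 0, N_BINS - 1)
    return hist_pos[idx[:, 0], idx[:, 1]] > hist_neg[idx[:, 0], idx[:, 1]]

# Toy training sets: brownish (DAB-like) vs bluish (counterstain-like) pixels.
pos_train = np.tile([150, 100, 50], (50, 1))
neg_train = np.tile([50, 80, 200], (50, 1))
hp, hn = train_histogram(pos_train), train_histogram(neg_train)
labels = classify(np.array([[150, 100, 50], [50, 80, 200]]), hp, hn)
```

    Discarding Y makes the model largely insensitive to staining-intensity and illumination variation, which is why the chroma-only histogram transfers reasonably well across datasets.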

  3. The Gold Standard Paradox in Digital Image Analysis: Manual Versus Automated Scoring as Ground Truth.

    Science.gov (United States)

    Aeffner, Famke; Wilson, Kristin; Martin, Nathan T; Black, Joshua C; Hendriks, Cris L Luengo; Bolon, Brad; Rudmann, Daniel G; Gianani, Roberto; Koegler, Sally R; Krueger, Joseph; Young, G Dave

    2017-09-01

    Novel therapeutics often target complex cellular mechanisms. Increasingly, quantitative methods such as digital tissue image analysis (tIA) are required to evaluate correspondingly complex biomarkers to elucidate subtle phenotypes that can inform treatment decisions with these targeted therapies. These tIA systems need a gold standard, or reference method, to establish analytical validity. Conventional, subjective histopathologic scores assigned by an experienced pathologist are the gold standard in anatomic pathology and are an attractive reference method: the pathologist's score can establish the ground truth against which a tIA solution's analytical performance is assessed. The paradox of this validation strategy, however, is that tIA is often used to assist pathologists in scoring complex biomarkers precisely because it is more objective and reproducible than manual evaluation alone, overcoming known biases in a human's visual evaluation of tissue, and because it can generate endpoints that a human observer cannot. This manuscript reviews the literature of recent decades on traditional subjective pathology scoring paradigms and the cognitive and visual traps relevant to them, traps that may affect the characterization of tIA-assisted scoring accuracy, sensitivity, and specificity. Awareness of the gold standard paradox is necessary when using traditional pathologist scores to analytically validate a tIA tool, because image analysis is used specifically to overcome known sources of bias in the visual assessment of tissue sections.

  4. Semi-automated relative quantification of cell culture contamination with mycoplasma by Photoshop-based image analysis on immunofluorescence preparations.

    Science.gov (United States)

    Kumar, Ashok; Yerneni, Lakshmana K

    2009-01-01

    Mycoplasma contamination in cell culture is a serious setback for the cell culturist. Experiments undertaken using contaminated cell cultures are known to yield unreliable or false results owing to various morphological, biochemical and genetic effects. Earlier surveys revealed incidences of mycoplasma contamination in cell cultures ranging from 15 to 80%. Out of a vast array of methods for detecting mycoplasma in cell culture, the cytological methods directly demonstrate the contaminating organism present in association with the cultured cells. In this investigation, we report the adoption of a cytological immunofluorescence assay (IFA) to obtain a semi-automated relative quantification of contamination, employing user-friendly Photoshop-based image analysis. The study, performed on 77 cell cultures randomly collected from various laboratories, revealed mycoplasma contamination in 18 cell cultures by both IFA and Hoechst DNA fluorochrome staining. Photoshop-based image analysis of the IFA-stained slides proved valuable as a sensitive tool for quantitative assessment of the extent of contamination, both per se and relative to the cellularity of the cultures. The technique could be useful in estimating the efficacy of anti-mycoplasma agents during decontamination measures.
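    A relative quantification of the kind described, extent of mycoplasma signal versus culture cellularity, reduces to comparing thresholded areas in the two fluorescence channels. A minimal sketch with hypothetical 8-bit thresholds (the paper's Photoshop workflow is not reproduced here):

```python
import numpy as np

def contamination_index(myco_channel, dna_channel, t_myco=30, t_dna=30):
    """Ratio of mycoplasma fluorescence area (IFA channel) to cellular
    area (Hoechst DNA channel). Thresholds are hypothetical 8-bit values."""
    myco_area = int((np.asarray(myco_channel) > t_myco).sum())
    cell_area = int((np.asarray(dna_channel) > t_dna).sum())
    return myco_area / max(cell_area, 1)

# Synthetic channels: nuclei occupy a 4x4 block, mycoplasma signal a 2x2 block.
dna = np.zeros((8, 8)); dna[2:6, 2:6] = 200
myco = np.zeros((8, 8)); myco[2:4, 2:4] = 120
index = contamination_index(myco, dna)  # 4 / 16 = 0.25
```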

  5. Automated analysis of gastric emptying

    International Nuclear Information System (INIS)

    Abutaleb, A.; Frey, D.; Spicer, K.; Spivey, M.; Buckles, D.

    1986-01-01

    The authors devised a novel method to automate the analysis of nuclear gastric emptying studies. Many previous methods for measuring gastric emptying are cumbersome and require continual operator intervention. Two specific problems are patient movement between images and changes in the location of the radioactive material within the stomach. The method can be used with either dual- or single-phase studies. For dual-phase studies the authors use In-111-labeled water and Tc-99m sulfur colloid (SC)-labeled scrambled eggs. For single-phase studies either the liquid- or solid-phase material is used.
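    For the Tc-99m-labeled solid phase, the core computation behind such automation is a decay-corrected percent-retention curve over the stomach region of interest; a minimal sketch (the ROI counts below are synthetic, and the record does not specify the authors' exact correction scheme):

```python
import numpy as np

TC99M_HALF_LIFE_MIN = 6.02 * 60  # physical half-life of Tc-99m, in minutes

def percent_retention(roi_counts, times_min, half_life=TC99M_HALF_LIFE_MIN):
    """Gastric retention (%) over time: ROI counts corrected for physical
    decay back to t = 0, then normalised to the first image."""
    counts = np.asarray(roi_counts, dtype=float)
    t = np.asarray(times_min, dtype=float)
    corrected = counts * 2.0 ** (t / half_life)
    return 100.0 * corrected / corrected[0]
```

    For the In-111 liquid phase the same formula applies with the In-111 half-life substituted.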

  6. Automated Quality Assurance Applied to Mammographic Imaging

    Directory of Open Access Journals (Sweden)

    Anne Davis

    2002-07-01

    Full Text Available Quality control in mammography is based upon subjective interpretation of the image quality of a test phantom. In order to suppress subjectivity due to the human observer, automated computer analysis of the Leeds TOR(MAM) test phantom is investigated. Texture analysis via grey-level co-occurrence matrices is used to detect structures in the test object. Scoring of the substructures in the phantom is based on grey-level differences between regions and on information from the grey-level co-occurrence matrices. Results from scoring groups of particles within the phantom are presented.
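    Texture analysis via grey-level co-occurrence matrices reduces to counting how often pairs of grey levels co-occur at a fixed pixel offset and then summarising the matrix; a minimal sketch with the Haralick contrast measure (offset and level count are illustrative choices, not the study's settings):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability distribution."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / max(m.sum(), 1)

def contrast(p):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

    A uniform region yields zero contrast while fine structure (such as phantom particle groups) raises it, which is what makes GLCM statistics usable for automated scoring.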

  7. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    Science.gov (United States)

    Vatle, S. S.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology: not only glacier area change analysis, but also masks for volume or velocity analysis, the estimation of water resources, and model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping of debris-covered ice in the Manaslu Himalaya, Nepal. SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and the NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically from a histogram of each image subset. In principle, therefore, any Landsat scene can be input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
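    The "thresholds calculated automatically from a histogram of each image subset" step can be approximated with Otsu's between-class-variance criterion applied to the NDSI values themselves; a sketch under that assumption (the abstract does not name the exact thresholding rule, and the band arrays below are synthetic):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Automatic threshold maximising between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def clean_ice_mask(green, swir):
    """NDSI per pixel, thresholded automatically from the scene's own histogram."""
    ndsi = (green - swir) / np.maximum(green + swir, 1e-9)
    return ndsi > otsu_threshold(ndsi.ravel())

# Synthetic reflectances: top row snow/ice (high NDSI), bottom row rock (low NDSI).
green = np.array([[0.8, 0.8], [0.3, 0.3]])
swir = np.array([[0.1, 0.1], [0.4, 0.4]])
mask = clean_ice_mask(green, swir)
```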

  8. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions. The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...

  9. Automated image analysis for quantification of reactive oxygen species in plant leaves.

    Science.gov (United States)

    Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta

    2016-10-15

    The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H₂O₂ and O₂⁻ detection, respectively. Three types of images, determined by the combination of staining method and background color, are considered. The method is based on supervised machine learning, with manually labeled image patterns used for training. The algorithm is implemented as a JavaScript macro in the public-domain Fiji (ImageJ) environment. It selects the stained regions of ROS-mediated histochemical reactions, subsequently fractionated into weak, medium and intense staining intensity and thus ROS accumulation, and it also evaluates total leaf blade area. The precision of ROS accumulation area detection is validated with the Dice Similarity Coefficient against manually labeled patterns. The proposed framework reduces computational complexity and, once prepared, requires less image-processing expertise than competing methods, providing a routine quantitative imaging assay for general histochemical image classification.
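    The fractionation into weak, medium and intense staining, and the Dice validation, can be sketched as follows; the intensity cut-offs here are placeholders, since in the paper the classes come from supervised learning rather than fixed thresholds:

```python
import numpy as np

def ros_fractions(stain, leaf_mask, t_weak=0.2, t_med=0.5, t_int=0.8):
    """Fractions of the leaf-blade area with weak, medium and intense
    staining; `stain` is in [0, 1]. Thresholds are illustrative placeholders."""
    area = max(int(leaf_mask.sum()), 1)
    weak = ((stain >= t_weak) & (stain < t_med) & leaf_mask).sum() / area
    med = ((stain >= t_med) & (stain < t_int) & leaf_mask).sum() / area
    intense = ((stain >= t_int) & leaf_mask).sum() / area
    return weak, med, intense

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * (a & b).sum() / max(a.sum() + b.sum(), 1)

# Toy 2x2 leaf: one unstained, one weak, one medium, one intense pixel.
leaf = np.ones((2, 2), dtype=bool)
stain = np.array([[0.1, 0.3], [0.6, 0.9]])
w, m, i = ros_fractions(stain, leaf)
```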

  10. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Science.gov (United States)

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase contrast images, without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, is generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell-type systems.

  11. Automated mask generation for PIV image analysis based on pixel intensity statistics

    Science.gov (United States)

    Masullo, Alessandro; Theunissen, Raf

    2017-06-01

    The measurement of displacements in the vicinity of surfaces involves advanced PIV algorithms requiring accurate knowledge of object boundaries. These data typically come in the form of a logical mask, generated manually or through automatic algorithms. The automatic detection of masks usually necessitates special features or reference points such as bright lines, high-contrast objects, and sufficiently observable coherence between pixels. These are, however, not always present in experimental images, necessitating a more robust and general approach. In this work, the authors propose a novel method for the automatic detection of static image regions which do not contain relevant information for the estimation of particle image displacements and can consequently be excluded or masked out. The method does not require any a priori knowledge of the static objects (i.e., contrast, brightness, or strong features) as it exploits statistical information from multiple PIV images. Based on the observation that the temporal variation in light intensity follows a completely different distribution in flow regions and in object regions, the method applies a normality test and an automatic thresholding method to the resulting probability to identify regions to be masked. The method is assessed through a Monte Carlo simulation with synthetic images, and its performance under realistic imaging conditions is demonstrated on three experimental test cases.
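    The core idea, that temporal intensity statistics differ between flow regions (intermittent particle passages, strongly non-Gaussian) and static object regions (camera noise, approximately Gaussian), can be sketched with a per-pixel normality test over the image stack. The fixed significance level below is an illustrative simplification of the paper's automatic thresholding step:

```python
import numpy as np
from scipy import stats

def static_region_mask(stack, alpha=0.05):
    """Mask pixels whose temporal intensity variation looks Gaussian (static
    background or objects) rather than the skewed distribution of seeded flow.
    `stack` has shape (n_images, h, w); `alpha` is an illustrative threshold."""
    n, h, w = stack.shape
    flat = stack.reshape(n, -1)
    _, p = stats.normaltest(flat, axis=0)   # D'Agostino-Pearson test per pixel
    return (p > alpha).reshape(h, w)        # True = static, candidate for masking
```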

  12. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments

    Science.gov (United States)

    Van Valen, David A.; Lane, Keara M.; Quach, Nicolas T.; Maayan, Inbal

    2016-01-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase contrast images, without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, is generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell-type systems. PMID:27814364

  13. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner.

  14. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we point out formally the relation between verifiability and accountability. We validate our definitions with the automatic verification of three protocols: a secure exam protocol, Google's Certificate Transparency, and an improved version of Bingo Voting. We find through automated verification that all three protocols satisfy verifiability while only the first two protocols meet accountability.

  15. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    We propose a scheme for automating power law transformations, which are used for image enhancement. The scheme we propose does not require the user to choose the exponent in the power law transformation. This method works well for images having poor contrast, especially for those images in which the peaks ...
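    A power-law (gamma) transformation with an automatically chosen exponent can be sketched as follows. The selection rule below, mapping the image's mean grey level to mid-grey, is a common illustrative heuristic and is an assumption here, not the scheme derived in the paper:

```python
import numpy as np

def auto_gamma(img):
    """Power-law enhancement with an automatically chosen exponent.
    The exponent maps the mean grey level to mid-grey (0.5); this
    heuristic is illustrative, not the paper's selection criterion."""
    x = np.asarray(img, dtype=float) / 255.0
    mean = np.clip(x.mean(), 1e-6, 1 - 1e-6)
    gamma = np.log(0.5) / np.log(mean)   # mean ** gamma == 0.5
    return (255.0 * x ** gamma).astype(np.uint8), gamma
```

    Dark, low-contrast images yield gamma < 1 (brightening), bright ones gamma > 1 (darkening), so no user-chosen exponent is needed.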

  16. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    We propose a scheme for automating power law transformations, which are used for image enhancement. The scheme we propose does not require the user to choose the exponent in the power law transformation. This method works well for images having poor contrast, especially for those images in which the ...

  17. Automated imaging system for single molecules

    Science.gov (United States)

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high-throughput automated single-molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided, yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.
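    Accounting for uneven illumination in fluorescence microscopy typically amounts to flat-field correction with dark and flat reference frames. The patent text does not give a formula, so the standard correction is sketched here as an assumption:

```python
import numpy as np

def flat_field_correct(image, flat, dark):
    """Standard flat-field correction:
    corrected = (image - dark) / (flat - dark), rescaled to the mean gain."""
    image = np.asarray(image, dtype=float)
    flat = np.asarray(flat, dtype=float)
    dark = np.asarray(dark, dtype=float)
    gain = np.maximum(flat - dark, 1e-9)   # per-pixel illumination profile
    return (image - dark) / gain * gain.mean()

# Synthetic check: a uniform specimen seen through an uneven illumination field.
gain = np.array([[1.0, 2.0], [3.0, 4.0]])
dark = np.full((2, 2), 10.0)
flat = dark + gain
image = dark + 100.0 * gain
corrected = flat_field_correct(image, flat, dark)  # uniform after correction
```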

  18. Automated Temperature and Emission Measure Analysis of Coronal Loops and Active Regions Observed with the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA)

    Science.gov (United States)

    Aschwanden, Markus J.; Boerner, Paul; Schrijver, Carolus J.; Malanushenko, Anna

    2013-03-01

    We developed numerical codes designed for automated analysis of SDO/AIA image datasets in the six coronal filters, including: i) a coalignment test between different wavelengths with measurements of the altitude of the EUV-absorbing chromosphere, ii) self-calibration by empirical correction of instrumental response functions, iii) automated generation of differential emission measure [DEM] distributions with peak-temperature maps [Tp(x, y)] and emission measure maps [EMp(x, y)] of the full Sun or active region areas, iv) composite DEM distributions [dEM(T)/dT] of active regions or subareas, v) automated detection of coronal loops, and vi) automated background subtraction and thermal analysis of coronal loops, which yields statistics of loop temperatures [Te], temperature widths [σT], emission measures [EM], electron densities [ne], and loop widths [w]. The combination of these numerical codes allows for automated and objective processing of numerous coronal loops. As an example, we present the results of an application to the active region NOAA 11158, observed on 15 February 2011, shortly before it produced the largest (X2.2) flare of the current solar cycle. We detect 570 loop segments at temperatures in the entire range of log(Te) = 5.7-7.0 K and corroborate previous TRACE and AIA results on their near-isothermality and the validity of the Rosner-Tucker-Vaiana (RTV) law at soft X-ray temperatures (T ≳ 2 MK) and its failure at lower EUV temperatures.

  19. Automated magnification calibration in transmission electron microscopy using Fourier analysis of replica images.

    NARCIS (Netherlands)

    Laak, J.A.W.M. van der; Dijkman, H.B.P.M.; Pahlplatz, M.M.M.

    2006-01-01

    The magnification factor in transmission electron microscopy is not very precise, hampering for instance quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens, containing line or grating patterns with known spacing. In the

  20. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    Science.gov (United States)

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step in surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods are highly robust, reliable and efficient in clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Exploiting the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, two windows, one in the left and one in the right half of the brain image, were moved simultaneously pixel by pixel, first vertically and then horizontally, while the correlation coefficient between the two windows was calculated. The pair of windows with the minimal correlation coefficient was selected; the window with the higher average gray value was taken as the tumor location, and the pixel with the highest gray value within it as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square of side length 10 pixels centered at the locating point, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average rate of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time per scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor location.
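    The mirrored-window correlation step can be sketched directly: reflect the image about its vertical midline and correlate each left-half window against its right-half counterpart, taking low correlation as a symmetry-breaking (tumor) candidate. This illustrative version returns only the correlation map; the window size is arbitrary and the subsequent gray-value tests are omitted:

```python
import numpy as np

def asymmetry_map(img, win=3):
    """Correlation between each left-half window and its mirrored right-half
    counterpart; low values flag symmetry-breaking regions."""
    h, w = img.shape
    half = w // 2
    mirrored = img[:, ::-1]   # right half reflected onto the left
    corr = np.full((h - win + 1, half - win + 1), 1.0)
    for y in range(h - win + 1):
        for x in range(half - win + 1):
            a = img[y:y + win, x:x + win].ravel().astype(float)
            b = mirrored[y:y + win, x:x + win].ravel().astype(float)
            if a.std() == 0 or b.std() == 0:
                continue   # constant window: correlation undefined, keep 1.0
            corr[y, x] = np.corrcoef(a, b)[0, 1]
    return corr
```

    On a perfectly symmetric image every window correlates perfectly with its mirror; a one-sided bright lesion drives the local correlation down, which is the cue the paper exploits.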

  1. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral-line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts, and logs. It is written using standard Python libraries and the CASA package. The pipeline can handle datasets with multiple spectral windows as well as multiple target sources, which may have arbitrary combinations of flux/bandpass/phase calibrators.
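
    The staged design, in which each stage can also be run on its own, can be illustrated with a minimal sketch; the `Stage` class, stage names, and state dictionary are hypothetical, not the actual ARTIP API.

```python
# Hypothetical sketch of ARTIP-style stage chaining; names are illustrative.
class Stage:
    def __init__(self, name, func):
        self.name, self.func = name, func

    def run(self, state, log):
        log.append(f"running {self.name}")
        return self.func(state)

def run_pipeline(stages, state, only=None):
    """Run all stages in order, or a single named stage independently."""
    log = []
    for stage in stages:
        if only is None or stage.name == only:
            state = stage.run(state, log)
    return state, log

stages = [
    Stage("flux_cal", lambda s: {**s, "flux_cal": True}),
    Stage("bandpass_cal", lambda s: {**s, "bandpass_cal": True}),
    Stage("phase_cal", lambda s: {**s, "phase_cal": True}),
    Stage("imaging", lambda s: {**s, "image": "continuum+spectral"}),
]
state, log = run_pipeline(stages, {"ms": "raw.ms"})
```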

  2. Simplified automated image analysis for detection and phenotyping of Mycobacterium tuberculosis on porous supports by monitoring growing microcolonies.

    Directory of Open Access Journals (Sweden)

    Alice L den Hertog

    Full Text Available BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic-observation drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high-burden settings. METHODS: Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on porous aluminium oxide (PAO) supports. Repeated imaging during colony growth greatly simplifies "computer vision", and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. SIGNIFICANCE: Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation.

  3. A method to quantify movement activity of groups of animals using automated image analysis

    Science.gov (United States)

    Xu, Jianyu; Yu, Haizhen; Liu, Ying

    2009-07-01

    Most physiological and environmental changes are capable of inducing variations in animal behavior. Behavioral parameters can be measured continuously in situ by a non-invasive and non-contact approach, and have the potential to be used in actual production settings to predict stress conditions. Most vertebrates tend to live in groups, herds, flocks, shoals, bands, or packs of conspecific individuals. Under culture conditions, livestock or fish live in groups and interact with each other, so the aggregate behavior of the group should be studied rather than that of individuals. This paper presents a computer vision method to calculate the movement speed of a group of animals in an enclosure or a tank, expressed as body-length speed, which corresponds to group activity. Frame sequences captured at a fixed time interval were subtracted in pairs after image segmentation and identification. By labeling the components caused by object movement in the difference frame, the projected area swept by the movement of every object during the capture interval was calculated; this area was divided by the projected area of the object in the later frame to obtain the body-length moving distance of each object, and hence the relative body-length speed. The average speed of all objects reflects the activity of the group well. The group activity of a tilapia (Oreochromis niloticus) school exposed to a high (2.65 mg/L) level of un-ionized ammonia (UIA) was quantified with this method. The high UIA condition elicited a marked increase in school activity in the first hour (P<0.05), exhibiting an avoidance reaction (trying to flee from the high-UIA condition), after which activity decreased gradually.
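
    The body-length speed computation can be sketched for a single object; this is a minimal, hedged reading of the method above (the paper labels multiple objects per difference frame and averages over them).

```python
import numpy as np

def body_length_speed(mask_prev, mask_curr, dt):
    """Group activity as body-length speed between two segmented frames.

    Simplified one-object sketch of the paper's idea: the area swept by
    movement (symmetric difference of the two binary frames) is divided by
    the object's projected area in the later frame, giving a distance in
    body lengths; dividing by the capture interval dt gives body lengths
    per second. (Illustrative reading of the abstract, not the authors'
    multi-object implementation.)
    """
    moved = np.logical_xor(mask_prev, mask_curr).sum()
    area = mask_curr.sum()
    if area == 0:
        return 0.0  # no object visible in the later frame
    return (moved / area) / dt
```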

  4. A Comparison of Visual Assessment and Automated Digital Image Analysis of Ki67 Labeling Index in Breast Cancer.

    Science.gov (United States)

    Zhong, Fangfang; Bi, Rui; Yu, Baohua; Yang, Fei; Yang, Wentao; Shui, Ruohong

    2016-01-01

    Ki67 labeling index (LI) is critical for treatment selection and prognosis evaluation in breast cancer. Visual assessment (VA) is widely used to assess Ki67 LI but has some limitations. In this study, we compared the consistency between VA and automated digital image analysis (DIA) of Ki67 LI in breast cancer and evaluated the practical value of DIA for Ki67 LI assessment. Ki67-immunostained slides of 155 cases of primary invasive breast cancer were visually assessed by five breast pathologists and analyzed by automated DIA operated by one breast pathologist. Two scoring methods, hot-spot score and average score, were used to choose the scored areas. The intra-class correlation coefficient (ICC) was used to analyze the consistency between VA and DIA, and the Wilcoxon signed-rank test was used to compare the median paired-difference between VA and DIA values. (1) ICC analysis demonstrated a perfect agreement between VA and DIA of Ki67 LI; a perfect agreement was also shown in G2-G3, ER-positive/HER2-negative cases. Both the average score and hot-spot score methods demonstrated a perfect concordance between VA and DIA of Ki67 LI. (2) All cases were classified into three groups by VA values (≤10%, 11%-30% and >30% Ki67 LI). The concordance was relatively lower in the intermediate (11%-30%) Ki67 LI group than in the high (>30%) Ki67 LI group according to both methods. (3) All cases were classified into three groups by the paired-difference (d) between VA values of the hot-spot score and the average score, which reflects Ki67 staining distribution (heterogeneous or homogeneous) and reproducibility of assessment. A perfect agreement was demonstrated in all three groups, and a slightly better Ki67 LI agreement between VA and DIA was indicated in homogeneously stained slides than in heterogeneously stained ones. (4) VA values were relatively smaller than DIA values (average score: median paired-difference -3.72; hot-spot score: median paired-difference -9.12). An excellent agreement between VA
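
    The ICC-based consistency analysis can be illustrated with the standard textbook formula for the two-way random, absolute-agreement, single-measure ICC(2,1); this is a generic implementation for paired ratings, not the study's statistical code.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure, absolute-agreement ICC(2,1).

    ratings: (n_subjects, k_raters) array, e.g. paired VA and DIA Ki67
    labeling-index values per case. Standard ANOVA-based formula, shown
    here only to illustrate the consistency analysis.
    """
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_total = ((x - grand) ** 2).sum()
    ms_e = (ss_total - (n - 1) * ms_r - (k - 1) * ms_c) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```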

  5. A new automated method for analysis of gated-SPECT images based on a three-dimensional heart shaped model

    DEFF Research Database (Denmark)

    Lomsky, Milan; Richter, Jens; Johansson, Lena

    2005-01-01

    A new automated method for quantification of left ventricular function from gated single-photon emission computed tomography (SPECT) images has been developed. The method for quantification of cardiac function (CAFU) is based on a heart-shaped model and the active shape algorithm. The model... In the patient group, the EDV calculated using QGS and CAFU showed good agreement for large hearts, with higher CAFU values compared with QGS for the smaller hearts. In the larger hearts, ESV was much larger for QGS than for CAFU in both the phantom and patient studies. In the smallest hearts there was good...

  6. Toward standardized quantitative image quality (IQ) assessment in computed tomography (CT): A comprehensive framework for automated and comparative IQ analysis based on ICRU Report 87.

    Science.gov (United States)

    Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram

    2016-01-01

    Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics modulation transfer function (MTF) and noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field-of-view, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of edge-spread function, line-spread-function, and MTF. 3D NPS is calculated according to ICRU Report 87, and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
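
    The NPS computation the framework implements can be sketched in its discrete form; a minimal 2D illustration following the ICRU Report 87 definition, with simple mean detrending assumed (the report also allows polynomial detrending).

```python
import numpy as np

def nps_2d(rois, px=1.0):
    """2D noise-power spectrum from an ensemble of noise-only ROIs.

    NPS(fx, fy) = (dx*dy / (Nx*Ny)) * <|DFT(roi - mean(roi))|^2>,
    averaged over the ROI ensemble; px is the pixel spacing (assumed
    square pixels in this sketch).
    """
    rois = np.asarray(rois, float)
    n_roi, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    ft = np.fft.fft2(detrended)  # transforms the last two axes
    return (px * px / (nx * ny)) * (np.abs(ft) ** 2).mean(axis=0)

def radial_average(nps):
    """Condense a 2D NPS to the radially averaged 1D representation."""
    ny, nx = nps.shape
    shifted = np.fft.fftshift(nps)  # zero frequency to the centre
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    profile = np.bincount(r.ravel(), weights=shifted.ravel())
    counts = np.bincount(r.ravel())
    return profile / np.maximum(counts, 1)
```

For uncorrelated (white) noise of variance σ², the mean of the 2D NPS over all frequency bins approaches σ²·px², which makes a convenient sanity check.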

  7. An automated vessel segmentation of retinal images using multiscale vesselness

    International Nuclear Information System (INIS)

    Ben Abdallah, M.; Malek, J.; Tourki, R.; Krissian, K.

    2011-01-01

    The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. In this paper, we introduce an implementation of anisotropic diffusion that reduces noise while better preserving small structures, such as vessels, in 2D images. A vessel detection filter, based on a multi-scale vesselness function, is then applied to enhance vascular structures.
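
    An edge-preserving diffusion of the kind described can be sketched with the classic Perona-Malik scheme; the conduction function and the parameters below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik style diffusion: smooths noise, preserves edges.

    A generic sketch of edge-preserving diffusion of the kind applied
    before vesselness filtering; `kappa` (edge threshold) and `step`
    (time step) are illustrative values.
    """
    u = img.astype(float)
    for _ in range(n_iter):
        # differences toward the four compass neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # conduction coefficients: near zero across strong edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u = u + step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```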

  8. Automated ion imaging with the NanoSIMS ion microprobe

    Science.gov (United States)

    Gröner, E.; Hoppe, P.

    2006-07-01

    Automated ion imaging systems developed for Cameca IMS3f and IMS6f ion microprobes are very useful for the analysis of large numbers of presolar dust grains, in particular with respect to the identification of rare types of presolar grains. The application of these systems is restricted to the study of micrometer-sized grains, thereby by-passing the major fraction of presolar grains which are sub-micrometer in size. The new generation Cameca NanoSIMS 50 ion microprobe combines high spatial resolution, high sensitivity, and simultaneous detection of up to 6 isotopes which makes the NanoSIMS an unprecedented tool for the analysis of presolar materials. Here, we report on the development of a fully automated ion imaging system for the NanoSIMS at MPI for Chemistry in order to extend its analytical capabilities further. The ion imaging consists of five steps: (i) Removal of surface contamination on the area of interest. (ii) Secondary ion image acquisition of up to 5 isotopes in multi-detection. (iii) Automated particle recognition in a pre-selected image. (iv) Automated measurement of all recognised particles with appropriate raster sizes and measurement times. (v) Stage movement to new area and repetition of steps (ii)-(iv).
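
    Step (iii), automated particle recognition, can be sketched as simple threshold-plus-connected-component labeling; a generic illustration, since the abstract does not specify the recognition algorithm used.

```python
import numpy as np

def find_particles(image, threshold):
    """Recognise particles in a secondary-ion image (illustrative sketch).

    Thresholds the image and labels 4-connected components with a flood
    fill, returning one (pixel-list, centroid) pair per particle; the
    centroids could then drive the per-particle measurements and stage
    moves of steps (iv)-(v).
    """
    mask = np.asarray(image) > threshold
    seen = np.zeros_like(mask, bool)
    particles = []
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        stack, pixels = [(i, j)], []
        seen[i, j] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        particles.append((pixels, (sum(ys) / len(ys), sum(xs) / len(xs))))
    return particles
```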

  9. Automated ion imaging with the NanoSIMS ion microprobe

    International Nuclear Information System (INIS)

    Groener, E.; Hoppe, P.

    2006-01-01

    Automated ion imaging systems developed for Cameca IMS3f and IMS6f ion microprobes are very useful for the analysis of large numbers of presolar dust grains, in particular with respect to the identification of rare types of presolar grains. The application of these systems is restricted to the study of micrometer-sized grains, thereby by-passing the major fraction of presolar grains which are sub-micrometer in size. The new generation Cameca NanoSIMS 50 ion microprobe combines high spatial resolution, high sensitivity, and simultaneous detection of up to 6 isotopes which makes the NanoSIMS an unprecedented tool for the analysis of presolar materials. Here, we report on the development of a fully automated ion imaging system for the NanoSIMS at MPI for Chemistry in order to extend its analytical capabilities further. The ion imaging consists of five steps: (i) Removal of surface contamination on the area of interest. (ii) Secondary ion image acquisition of up to 5 isotopes in multi-detection. (iii) Automated particle recognition in a pre-selected image. (iv) Automated measurement of all recognised particles with appropriate raster sizes and measurement times. (v) Stage movement to new area and repetition of steps (ii)-(iv)

  10. An automated activation analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Hensley, W.K.; Denton, M.M.; Garcia, S.R.

    1982-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey will be described. (author)

  11. Automated activation-analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Hensley, W.K.; Denton, M.M.; Garcia, S.R.

    1981-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described

  12. Automated activation-analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Garcia, S.R.; Denton, M.M.

    1982-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day

  13. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    Science.gov (United States)

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach within a multi-modality image-handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully combined the depth-dependent light transport approach with semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and of the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
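
    The iterative deconvolution step can be illustrated with a 1D MLEM (Richardson-Lucy) sketch; the paper's method operates on 3D data with a depth-dependent light-transport model, so this is only a minimal analogue of the general scheme.

```python
import numpy as np

def mlem_deconvolve(measured, psf, n_iter=50):
    """Iterative MLEM (Richardson-Lucy) deconvolution, 1D sketch.

    Starts from a flat positive estimate and applies the multiplicative
    MLEM update: est *= backproject(measured / forward(est)). Estimates
    stay non-negative by construction.
    """
    est = np.full_like(measured, measured.mean(), dtype=float)
    psf = np.asarray(psf, float) / np.sum(psf)
    psf_flip = psf[::-1]  # adjoint of convolution is correlation
    for _ in range(n_iter):
        forward = np.convolve(est, psf, mode='same')
        ratio = measured / np.maximum(forward, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode='same')
    return est
```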

  14. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers belong to the equipment of electric power systems whose reliability strongly influences the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-break circuit breakers systematically increase. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repairs, as well as advanced software and a specific automated information system (AIS). A new AIS with the AISV logo was developed at the "Reliability of power equipment" department of the AzRDSI of Energy. The main features of AISV are:
    - to provide security and database accuracy;
    - to carry out systematic control of breakers' conformity with operating conditions;
    - to estimate the value of individual reliability and the characteristics of its change for a given combination of characteristics;
    - to provide the personnel responsible for the technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for their realization.

  15. Measurement of TLR-induced macrophage spreading by automated image analysis: differential role of Myd88 and MAPK in early and late responses

    Directory of Open Access Journals (Sweden)

    Jens eWenzel

    2011-10-01

    Full Text Available Sensing of infectious danger by Toll-like receptors (TLR) on macrophages causes not only a reprogramming of the transcriptome but also changes in the cytoskeleton important for cell spreading and motility. Since manual determination of cell contact areas from fluorescence microscopy pictures is very time-consuming and prone to bias, we have developed and tested algorithms for automated measurement of macrophage spreading. The two-step method combines identification of cells by nuclear staining with DAPI and cell surface staining of the integrin CD11b. Automated image analysis correlated very well with manual annotation in resting macrophages and early after stimulation, whereas at later time points the automated cell segmentation algorithm and manual annotation showed slightly larger variation. The method was applied to investigate the impact of genetic or pharmacological inhibition of known TLR signaling components. Deficiency in the adapter protein Myd88 strongly reduced spreading activity at the late time points, but had no impact early after LPS stimulation. A similar effect was observed upon pharmacological inhibition of MEK1, the kinase activating the MAPK ERK1/2, indicating that ERK1/2 mediates Myd88-dependent macrophage spreading. In contrast, macrophages lacking the MAPK p38 were impaired in the initial spreading response but responded normally 8-24 h after stimulation. The dichotomy of p38 and ERK1/2 MAPK effects on early and late macrophage spreading raises the question which of the respective substrate proteins mediate(s) cytoskeletal remodeling and spreading. The automated measurement of cell spreading described here increases objectivity and greatly reduces the time required for such investigations, and is therefore expected to facilitate higher-throughput analysis of macrophage spreading, e.g. in siRNA knockdown screens.

  16. Automated Analysis of Corpora Callosa

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.

    2003-01-01

    This report describes and evaluates the steps needed to perform modern model-based interpretation of the corpus callosum in MRI. The process is discussed from the initial landmark-free contours to full-fledged statistical models based on the Active Appearance Models framework. Topics treated include landmark placement, background modelling and multi-resolution analysis. Preliminary quantitative and qualitative validation in a cross-sectional study shows that fully automated analysis and segmentation of the corpus callosum are feasible.

  17. Automated Segmentation of Cardiac Magnetic Resonance Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Nilsson, Jens Chr.; Grønning, Bjørn A.

    2001-01-01

    Magnetic resonance imaging (MRI) has been shown to be an accurate and precise technique to assess cardiac volumes and function in a non-invasive manner and is generally considered to be the current gold standard for cardiac imaging [1]. Measurement of ventricular volumes, muscle mass and function is based on determination of the left-ventricular endocardial and epicardial borders. Since manual border detection is laborious, automated segmentation is highly desirable as a fast, objective and reproducible alternative. Automated segmentation will thus enhance comparability between and within cardiac studies and increase accuracy by allowing acquisition of thinner MRI slices. This abstract demonstrates that statistical models of shape and appearance, namely the deformable Active Appearance Models, can successfully segment cardiac MRIs.

  18. An Automated, Image Processing System for Concrete Evaluation

    International Nuclear Information System (INIS)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-01-01

    Allied Signal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain whether automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator, who increments the sample under the cross-hairs of a microscope and counts the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black-and-white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development is described, and the image processing approach developed for the proof-of-concept study is demonstrated. A development update and plans for future enhancements are also presented.
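
    The manual pixel-counting step being automated can be sketched as a threshold-based void-fraction measurement; the threshold value and the assumption that voids appear darker than the surrounding paste in the black-and-white imagery are illustrative.

```python
import numpy as np

def void_fraction(gray, void_threshold):
    """Automated counterpart of the manual pixel-counting step.

    Counts pixels darker than `void_threshold` (assuming air voids image
    darker than the surrounding cement paste, which depends on the
    acquisition setup) and returns the void area fraction of the sample.
    """
    gray = np.asarray(gray)
    return (gray < void_threshold).sum() / gray.size
```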

  19. Automated Analysis of Infinite Scenarios

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2005-01-01

    The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many) instances of a scenario. The tool is based on control flow analysis of the process calculus LySa and is applied to the Bauer, Berson, and Feiertag protocol, where it reveals a previously undocumented problem, which occurs in some scenarios but not in others.

  20. Automated vector selection of SIVQ and parallel computing integration MATLAB TM : Innovations supporting large-scale and high-throughput image analysis studies

    Directory of Open Access Journals (Sweden)

    Jerome Cheng

    2011-01-01

    Full Text Available Introduction: Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for exploring its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. Methods: An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Results: Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an
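
    The AUC figure of merit used to rank candidate vectors can be sketched with the rank-statistic form of the ROC area; the `match` function and the candidate representation below are hypothetical stand-ins for SIVQ's ring-vector matching, not the published implementation.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve from positive- and negative-region scores.

    The AUC equals the probability that a randomly chosen positive-region
    score exceeds a randomly chosen negative-region score (ties count
    half) - the Mann-Whitney formulation.
    """
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

def best_vector(candidates, pos_truth, neg_truth, match):
    """Pick the candidate vector with the highest AUC figure of merit.

    `match(vector, region)` is a hypothetical scoring callback standing in
    for the SIVQ ring-vector matching step.
    """
    return max(candidates,
               key=lambda v: roc_auc(match(v, pos_truth),
                                     match(v, neg_truth)))
```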

  1. Non-destructive phenotypic analysis of early stage tree seedling growth using an automated stereovision imaging method

    Directory of Open Access Journals (Sweden)

    Antonio Montagnoli

    2016-10-01

    Full Text Available A plant phenotyping approach was applied to evaluate the growth rate of containerized tree seedlings during the pre-cultivation phase following seed germination. A simple and affordable stereo optical system was used to collect stereoscopic RGB images of seedlings at regular intervals of time. Comparative analysis of these images by means of newly developed software enabled us to calculate (a) the increments of seedling height and (b) the percentage greenness of seedling leaves. Comparison of these parameters with destructive biomass measurements showed that the height trait can be used to estimate seedling growth for needle-leaved plant species, whereas the greenness trait can be used for broad-leaved plant species. Despite the need to adjust for plant type, growth stage and light conditions, this new, cheap, rapid, and sustainable phenotyping approach can be used to study large-scale phenome variations due to genome variability and interaction with environmental factors.
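
    The greenness trait can be illustrated with one common chromaticity-based definition; the exact formula and the background handling used by the authors' software are not given in the abstract, so both are assumptions here.

```python
import numpy as np

def percentage_greenness(rgb):
    """Percentage greenness of seedling pixels in an RGB image.

    Assumed definition: mean of G / (R + G + B) over non-background
    pixels, scaled to percent. Background is taken as pure-black pixels
    in this sketch; a real pipeline would segment the plant first.
    """
    rgb = np.asarray(rgb, float)
    total = rgb.sum(axis=-1)
    mask = total > 0  # crude background rejection
    green_ratio = rgb[..., 1][mask] / total[mask]
    return 100.0 * green_ratio.mean()
```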

  2. Automated image segmentation using information theory

    International Nuclear Information System (INIS)

    Hibbard, L.S.

    2001-01-01

    Full text: Our development of automated contouring of CT images for RT planning is based on maximum a posteriori (MAP) analyses of region textures, edges, and prior shapes, and assumes stationary Gaussian distributions for voxel textures and contour shapes. Since models may not accurately represent image data, it would be advantageous to compute inferences without relying on models. The relative entropy (RE) from information theory can generate inferences based solely on the similarity of probability distributions. The entropy of the distribution of a random variable X is defined as -Σ_x p(x) log₂ p(x), over all values x which X may assume. The RE (Kullback-Leibler divergence) between two distributions p(X), q(X) over X is Σ_x p(x) log₂ [p(x)/q(x)]. The RE is a kind of 'distance' between p and q, equaling zero when p = q and increasing as p and q become more different. Minimum-error MAP and likelihood-ratio decision rules have RE equivalents: minimum-error decisions are obtained with functions of the differences between the REs of the compared distributions. One applied result is that the contour ideally separating two regions is the one that maximizes the relative entropy of the two regions' intensities. A program was developed that automatically contours the outlines of patients in stereotactic headframes, a situation most often requiring manual drawing. The relative entropy of intensities inside the contour (patient) versus outside (background) was maximized by conjugate gradient descent over the space of parameters of a deformable contour, yielding the computed segmentation of a patient from headframe backgrounds. This program is particularly useful for preparing images for multimodal image fusion. Relative entropy and allied measures of distribution similarity provide automated contouring criteria that do not depend on statistical models of image data. This approach should have wide utility in medical image segmentation applications. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine.
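
    The RE-maximizing separation criterion can be sketched in one dimension, replacing the deformable-contour search with a simple threshold search over intensity histograms; the smoothing and bin choices are assumptions of this sketch.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p||q) in bits."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / np.maximum(q[mask], eps))).sum())

def best_threshold(values, thresholds, bin_edges):
    """Pick the intensity threshold maximizing the RE between the two
    regions it induces - a 1D stand-in for the paper's contour search."""
    best_t, best_re = None, -np.inf
    for t in thresholds:
        inside, outside = values[values <= t], values[values > t]
        if inside.size == 0 or outside.size == 0:
            continue
        hp, _ = np.histogram(inside, bins=bin_edges)
        hq, _ = np.histogram(outside, bins=bin_edges)
        re = relative_entropy(hp + 1, hq + 1)  # +1 smoothing avoids zeros
        if re > best_re:
            best_t, best_re = t, re
    return best_t
```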

  3. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. (paper)
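
    The landmark-mapping idea (a local small-volume matching search) can be sketched in 2D with normalized cross-correlation; the published LDIR uses gradient matching on 3D CT/CBCT volumes with GPU parallelism, so this is only a simplified analogue with assumed patch and search sizes.

```python
import numpy as np

def match_landmark(fixed, moving, point, patch=5, search=7):
    """Map one landmark from the planning image to the daily image.

    Takes a patch around `point` in `fixed`, searches a window in
    `moving` for the best normalized-cross-correlation match, and
    returns the matched coordinates.
    """
    r = patch // 2
    y0, x0 = point
    ref = fixed[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    ref = ref - ref.mean()
    best, best_pt = -np.inf, point
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if (y - r < 0 or x - r < 0 or y + r + 1 > moving.shape[0]
                    or x + r + 1 > moving.shape[1]):
                continue  # candidate patch would leave the image
            cand = moving[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            cand = cand - cand.mean()
            denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum())
            if denom == 0:
                continue
            ncc = (ref * cand).sum() / denom
            if ncc > best:
                best, best_pt = ncc, (y, x)
    return best_pt
```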

  4. Automated landmark-guided deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
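    The landmark-mapping idea — searching a small local volume around each landmark for the best correspondence in the other image — can be illustrated with a toy 2-D block-matching sketch. This uses exhaustive sum-of-squared-differences (SSD) search rather than the paper's gradient-matching engine, and the images, patch size, and search radius are invented for illustration:

```python
import numpy as np

def match_landmark(fixed, moving, point, patch=5, search=10):
    # Find the offset in `moving` that best matches the patch around
    # `point` in `fixed`, by exhaustive search minimising SSD.
    y, x = point
    ref = fixed[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[y + dy - patch:y + dy + patch + 1,
                          x + dx - patch:x + dx + patch + 1]
            if cand.shape != ref.shape:
                continue  # search window ran off the image
            ssd = float(np.sum((cand - ref) ** 2))
            if ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))  # known shift
print(match_landmark(fixed, moving, (32, 32)))       # (3, -2)
```

    In the LDIR pipeline, offsets found this way anchor landmark pairs that then act as stabilizing control points for the subsequent Demons deformable registration.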

  5. Automated Processing of Zebrafish Imaging Data: A Survey

    Science.gov (United States)

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  6. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
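    A per-pixel scale profile of the kind described — morphological residuals computed at increasing structuring-element sizes — might be sketched as follows. This is an isotropic simplification (the orientation dimension of the scale-orientation profile is dropped), with SciPy assumed and a random test image:

```python
import numpy as np
from scipy import ndimage

def morphological_profile(image, scales=(1, 2, 3)):
    # For each scale s, record what grayscale opening removes (bright
    # structures smaller than the element) and what closing fills
    # (dark structures smaller than the element).
    layers = []
    for s in scales:
        size = 2 * s + 1
        opened = ndimage.grey_opening(image, size=(size, size))
        closed = ndimage.grey_closing(image, size=(size, size))
        layers.append(image - opened)
        layers.append(closed - image)
    return np.stack(layers, axis=0)

rng = np.random.default_rng(2)
img = rng.random((32, 32))
prof = morphological_profile(img)
print(prof.shape)  # (6, 32, 32): two residuals per scale
```

    Pixels whose profiles respond strongly at characteristic scales make distinctive landmark candidates for the dissimilarity-based chip extraction described above.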

  7. Automated analysis of complex data

    Science.gov (United States)

    St. Amant, Robert; Cohen, Paul R.

    1994-01-01

    We have examined some of the issues involved in automating exploratory data analysis, in particular the tradeoff between control and opportunism. We have proposed an opportunistic planning solution for this tradeoff, and we have implemented a prototype, Igor, to test the approach. Our experience in developing Igor was surprisingly smooth. In contrast to earlier versions that relied on rule representation, it was straightforward to increment Igor's knowledge base without causing the search space to explode. The planning representation appears to be both general and powerful, with high level strategic knowledge provided by goals and plans, and the hooks for domain-specific knowledge are provided by monitors and focusing heuristics.

  8. Tools for automating the imaging of zebrafish larvae.

    Science.gov (United States)

    Pulak, Rock

    2016-03-01

    The VAST BioImager system is a set of tools developed for zebrafish researchers who require the collection of images from a large number of 2-7 dpf zebrafish larvae. The VAST BioImager automates larval handling, positioning and orientation tasks. Color images at about 10 μm resolution are collected from the on-board camera of the system. If images of greater resolution and detail are required, the system is mounted on an upright microscope, such as a confocal or fluorescence microscope, to utilize their capabilities. The system loads a larva, positions it in view of the camera, determines its orientation using pattern-recognition analysis, and then positions it more precisely to a user-defined orientation for optimal imaging of any desired tissue or organ system. Multiple images of the same larva can be collected. The researcher identifies the specific part of each larva and the desired orientation and position, and an experiment defining these settings and a series of steps can be saved and repeated for imaging of subsequent larvae. The system captures images, then ejects the larva and loads another from a bulk reservoir, from a well of a 96-well plate using the LP Sampler, or from individually targeted larvae in a Petri dish or other container using the VAST Pipettor. The alternative manual protocols for handling larvae for image collection are tedious and time consuming. The VAST BioImager automates these steps to allow for greater throughput of assays and screens requiring high-content image collection of zebrafish larvae, such as might be used in drug discovery and toxicology studies. Copyright © 2015 The Author. Published by Elsevier Inc. All rights reserved.

  9. Monochromatic blue light entrains diel activity cycles in the Norway lobster, Nephrops norvegicus (L.), as measured by automated video-image analysis

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2009-12-01

    Full Text Available There is growing interest in developing automated, non-invasive techniques for long-lasting, laboratory-based monitoring of behaviour in organisms from deep-water continental margins which are of ecological and commercial importance. We monitored the burrow emergence rhythms of the Norway lobster, Nephrops norvegicus, which included: (a) characterising the regulation of behavioural activity outside the burrow under monochromatic blue light-darkness (LD) cycles of 0.1 lx, recreating slope photic conditions (i.e. 200-300 m depth), and under constant darkness (DD), which is necessary for the study of the circadian system; (b) testing the performance of a newly designed digital video-image analysis system for tracking locomotor activity. We used infrared USB web cameras and customised software (in Matlab 7.1) to acquire and process digital frames of eight animals at a rate of one frame per minute under consecutive photoperiod stages of nine days each: LD, DD, and LD (the latter subdivided into two stages, LD1 and LD2, for analysis purposes). The automated analysis allowed the production of time series of locomotor activity based on movements of the animals' centroids. Data were studied with periodogram, waveform, and Fourier analyses. For the first time, we report robust diurnal burrow emergence rhythms during the LD period, which became weak in DD. Our results fit with field data accounting for midday peaks in catches at slope depths. The comparison of the present locomotor pattern with those recorded at different light intensities clarifies the regulation of the clock of N. norvegicus at different depths.
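    The periodogram step — finding the dominant period in a one-sample-per-minute activity series — can be sketched with a classical Fourier periodogram on synthetic data. The series and its parameters below are invented for illustration (the study's analyses also included waveform and other periodogram methods):

```python
import numpy as np

# Synthetic locomotor-activity series: 1 sample/min over 9 days with a
# 24 h (1440 min) diel rhythm plus noise, mimicking one photoperiod stage.
rng = np.random.default_rng(1)
minutes = np.arange(9 * 24 * 60)
activity = (1.0 + np.sin(2 * np.pi * minutes / 1440)
            + 0.3 * rng.standard_normal(minutes.size))

# Classical Fourier periodogram: spectral power per frequency bin
power = np.abs(np.fft.rfft(activity - activity.mean())) ** 2
freqs = np.fft.rfftfreq(minutes.size, d=1.0)  # cycles per minute
dominant_period_h = 1.0 / freqs[np.argmax(power)] / 60.0
print(round(dominant_period_h, 1))  # 24.0
```

    A robust diurnal rhythm shows up as a sharp power peak at the 24 h period; under DD the peak weakens and broadens, as reported above.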

  10. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation.

  11. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Havlůj, F.; Hejzlar, J.; Vočka, R.

    2013-01-01

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces the user's task to defining fuel pin types, enrichments, assembly maps and operational parameters, all through a user-friendly GUI. The second part of the reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit, large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable html format. Using this set of tools for reload safety analysis simplifies

  12. Semi-Automated Classification of Gray Scale Aerial Photographs using Geographic Object Based Image Analysis (GEOBIA) Technique

    Science.gov (United States)

    Harb Rabia, Ahmed; Terribile, Fabio

    2013-04-01

    Aerial photography is an important source of high resolution remotely sensed data. Before 1970, aerial photographs were the only remote sensing data source for land use and land cover classification. Using these old aerial photographs improve the final output of land use and land cover change detection. However, classic techniques of aerial photographs classification like manual interpretation or screen digitization require great experience, long processing time and vast effort. A new technique needs to be developed in order to reduce processing time and effort and to give better results. Geographic object based image analysis (GEOBIA) is a newly developed area of Geographic Information Science and remote sensing in which automatic segmentation of images into objects of similar spectral, temporal and spatial characteristics is undertaken. Unlike pixel-based technique, GEOBIA deals with the object properties such as texture, square fit, roundness and many other properties that can improve classification results. GEOBIA technique can be divided into two main steps; segmentation and classification. Segmentation process is grouping adjacent pixels into objects of similar spectral and spatial characteristics. Classification process is assigning classes to the generated objects based on the characteristics of the individual objects. This study aimed to use GEOBIA technique to develop a novel approach for land use and land cover classification of aerial photographs that saves time and effort and gives improved results. Aerial photographs from 1954 of Valle Telesina in Italy were used in this study. Images were rectified and georeferenced in Arcmap using topographic maps. Images were then processed in eCognition software to generate land use and land cover map of 1954. A decision tree rule set was developed in eCognition to classify images and finally nine classes of general land use and land cover in the study area were recognized (forest, trees stripes, agricultural

  13. Automated analysis of whole skeletal muscle for muscular atrophy detection of ALS in whole-body CT images: preliminary study

    Science.gov (United States)

    Kamiya, Naoki; Ieda, Kosuke; Zhou, Xiangrong; Yamada, Megumi; Kato, Hiroki; Muramatsu, Chisako; Hara, Takeshi; Miyoshi, Toshiharu; Inuzuka, Takashi; Matsuo, Masayuki; Fujita, Hiroshi

    2017-03-01

    Amyotrophic lateral sclerosis (ALS) causes functional disorders such as difficulty in breathing and swallowing through the atrophy of voluntary muscles. ALS in its early stages is difficult to diagnose because of the difficulty in differentiating it from other muscular diseases. In addition, image inspection methods for an aggressive diagnosis of ALS have not yet been established. The purpose of this study is to develop an automatic analysis system of the whole skeletal muscle to support the early differential diagnosis of ALS using whole-body CT images. In this study, areas of muscular atrophy in patients, including ALS patients, are automatically identified after recognizing and segmenting the whole skeletal muscle in the following preliminary steps. First, the skeleton is identified by its gray-value information. Second, the initial area of the body cavity is recognized by deformation of the thoracic cavity based on the anatomically segmented skeleton. Third, the abdominal cavity boundary is recognized using ABM for precise recognition of the body cavity. The body cavity is precisely recognized by a non-rigid registration method based on the reference points of the abdominal cavity boundary. Fourth, the whole skeletal muscle is recognized by excluding the skeleton, the body cavity, and the subcutaneous fat. Additionally, the areas of muscular atrophy, including those in ALS patients, are automatically identified by comparison of muscle mass. The experiments were carried out on ten cases with abnormality of the skeletal muscle. Global recognition and segmentation of the whole skeletal muscle were well realized in eight cases, and the areas of muscular atrophy, including those in ALS patients, were well identified in the lower limbs. These results demonstrate the basic technology for detecting muscle atrophy, including that caused by ALS. In the future, it will be necessary to consider methods to differentiate other kinds of muscular atrophy as well as the clinical application of this detection method for early ALS

  14. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, a system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described. [fr]

  15. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Full Text Available Abstract Background In this paper, we present and validate a way to measure automatically the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading-edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

  16. Analysis of new bone, cartilage, and fibrosis tissue in healing murine allografts using whole slide imaging and a new automated histomorphometric algorithm

    OpenAIRE

    Zhang, Longze; Chang, Martin; Beck, Christopher A; Schwarz, Edward M; Boyce, Brendan F

    2016-01-01

    Histomorphometric analysis of histologic sections of normal and diseased bone samples, such as healing allografts and fractures, is widely used in bone research. However, the utility of traditional semi-automated methods is limited because they are labor-intensive and can have high interobserver variability depending upon the parameters being assessed, and primary data cannot be re-analyzed automatically. Automated histomorphometry has long been recognized as a solution for these issues, and ...

  17. Growth Rate Determination through Automated TEM Image Analysis : Crystallization Studies of Doped SbTe Phase-Change Thin Films

    NARCIS (Netherlands)

    Oosthoek, Jasper L. M.; Kooi, Bart J.; De Hosson, Jeff T. M.; Wolters, Rob A. M.; Gravesteijn, Dirk J.; Attenborough, Karen

    A computer-controlled procedure is outlined here that first determines the position of the amorphous-crystalline interface in an image. Subsequently, from a time series of these images, the velocity of the crystal growth front is quantified. The procedure presented here can be useful for a wide

  18. Growth rate determination through automated TEM image analysis: crystallization studies of doped SbTe phase-change thin films

    NARCIS (Netherlands)

    Oosthoek, Jasper L.M.; Kooi, B.J.; de Hosson, Jeff T.M.; Wolters, Robertus A.M.; Gravesteijn, Dirk J; Gravesteijn, Dirk J.; Attenborough, Karen

    2010-01-01

    A computer-controlled procedure is outlined here that first determines the position of the amorphous-crystalline interface in an image. Subsequently, from a time series of these images, the velocity of the crystal growth front is quantified. The procedure presented here can be useful for a wide

  19. Global, Persistent, Real-time Multi-sensor Automated Satellite Image Analysis and Crop Forecasting in Commercial Cloud

    Science.gov (United States)

    Brumby, S. P.; Warren, M. S.; Keisler, R.; Chartrand, R.; Skillman, S.; Franco, E.; Kontgis, C.; Moody, D.; Kelton, T.; Mathis, M.

    2016-12-01

    Cloud computing, combined with recent advances in machine learning for computer vision, is enabling understanding of the world at a scale and at a level of space and time granularity never before feasible. Multi-decadal Earth remote sensing datasets at the petabyte scale (8×10^15 bits) are now available in commercial cloud, and new satellite constellations will generate daily global coverage at a few meters per pixel. Public and commercial satellite observations now provide a wide range of sensor modalities, from traditional visible/infrared to dual-polarity synthetic aperture radar (SAR). This provides the opportunity to build a continuously updated map of the world supporting the academic community and decision-makers in government, finance and industry. We report on work demonstrating country-scale agricultural forecasting and global-scale land cover/land use mapping using a range of public and commercial satellite imagery. We describe processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work combining this imagery with time-series SAR collected by ESA Sentinel 1. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. We apply remote sensing science and machine learning algorithms to detect and classify agricultural crops and then estimate crop yields and detect threats to food security (e.g., flooding, drought).
The software platform and analysis methodology also support monitoring water resources, forests and other general

  20. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
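    The core of the spine-based coordinate system — a polynomial model of the curve through the vertebral column, later sampled densely for reformation — can be sketched as follows. The centroid coordinates and polynomial degree below are invented for illustration; in the paper, the optimal polynomial parameters come from an image-analysis optimization framework:

```python
import numpy as np

# Hypothetical vertebral-column centreline: centroids (x, y) at axial
# slice positions z (mm), with gentle sagittal curvature and coronal drift.
z = np.linspace(0.0, 400.0, 20)
x = 5e-5 * (z - 200.0) ** 2
y = 0.02 * z

# Fit per-axis polynomial models of the spine curve
deg = 3
cx, cy = np.polyfit(z, x, deg), np.polyfit(z, y, deg)

# Reformation then samples the volume along the fitted curve
z_dense = np.linspace(z[0], z[-1], 200)
curve = np.stack([np.polyval(cx, z_dense), np.polyval(cy, z_dense), z_dense],
                 axis=1)
print(curve.shape)  # (200, 3): points along the spine curve
```

    Resampling image planes perpendicular to this curve (and counter-rotating by the modeled vertebral rotation) yields the curved planar reformation described above.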

  1. A methodology to ensure and improve accuracy of Ki67 labelling index estimation by automated digital image analysis in breast cancer tissue.

    Science.gov (United States)

    Laurinavicius, Arvydas; Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Dasevicius, Darius; Elie, Nicolas; Iqbal, Yasir; Bor, Catherine

    2014-01-01

    Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. The criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. Bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA, performed by adjusting the algorithm settings, were carried out: first, by subjective DIA quality assessment (DIA-1), and second, to compensate for the established bias (DIA-2). A visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. ANOVA revealed a significant underestimation bias relative to Ki67-Count, in particular for the clinically relevant lower interval of Ki67-Count values. The proposed methodology enables accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers.

  2. Automation for System Safety Analysis

    Science.gov (United States)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  3. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The elements affecting the 3D reconstruction are discussed and the overall result of the analysis is summarised with respect to the prototype imaging platform.
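    The silhouette-based (visual hull) idea — keep only the volume consistent with every silhouette — can be shown in a deliberately simplified 2-D, two-view version. Real reconstructions back-project through calibrated camera models at many angles; here the "views" are just axis-aligned projections of an invented rectangular object:

```python
import numpy as np

# Start with a full occupancy grid and carve away cells that fall
# outside any silhouette.
grid = np.ones((50, 50), dtype=bool)

# Hidden 2-D object and its silhouettes from two orthogonal views
obj = np.zeros_like(grid)
obj[15:35, 20:40] = True
sil_rows = obj.any(axis=1)   # side-view silhouette
sil_cols = obj.any(axis=0)   # top-view silhouette

# Visual hull: intersection of the back-projected silhouettes
hull = grid & sil_rows[:, None] & sil_cols[None, :]
print(hull.sum())  # 400 cells (20 x 20) survive carving
```

    With only two views the hull is a bounding box of the object; adding views from more projection angles, as in the robotic-arm setup, tightens the hull toward the true shape.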

  4. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    Science.gov (United States)

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. The purposes of the study were therefore to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR) images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and the time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference of 188 s (220 vs. 408 s). Semi-automated volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
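    Ellipsoid formulas of this kind multiply the three orthogonal kidney dimensions by a constant factor. A minimal sketch — the abstract does not state the modified factor, so the classic π/6 (≈ 0.524) is used here as a placeholder, and the example dimensions are invented:

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, depth_cm, factor=math.pi / 6):
    # V = factor * L * W * D; 1 cm^3 == 1 ml.
    # factor defaults to the classic pi/6; the paper's modified factor
    # is not given in the abstract.
    return factor * length_cm * width_cm * depth_cm

# Example: an 11 x 5 x 4 cm kidney
print(round(ellipsoid_volume_ml(11, 5, 4), 1))  # 115.2
```

    In practice the factor is what gets "modified": it is tuned so that the formula's output matches segmentation-based reference volumes for the imaging protocol at hand.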

  5. Automated mapping of the intertidal beach from video images

    NARCIS (Netherlands)

    Uunk, L.; Uunk, L.; Wijnberg, Kathelijne Mariken; Morelissen, R.; Morelissen, R.

    2010-01-01

    This paper presents a fully automated procedure to derive the intertidal beach bathymetry on a daily basis from video images of low-sloping beaches that are characterised by the intermittent emergence of intertidal bars. Bathymetry data are obtained by automated and repeated mapping of shorelines

  6. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development community. The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, and 3D vision to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries.

  7. Automated feature extraction and classification from image sources

    Science.gov (United States)

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  8. Distribution system analysis and automation

    CERN Document Server

    Gers, Juan

    2013-01-01

    A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems which, combined with automation, underpin the emerging concept of the "smart grid". The book pairs its theoretical concepts with real-world applications and MATLAB exercises.

  9. Automated vasculature extraction from placenta images

    Science.gov (United States)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  10. Automated exploration of the radio plasma imager data

    Science.gov (United States)

    Galkin, Ivan; Reinisch, Bodo; Grinstein, Georges; Khmyrov, Grigori; Kozlov, Alexander; Huang, Xueqin; Fung, Shing

    2004-12-01

    As research instruments with large information capacities become a reality, automated systems for intelligent data analysis become a necessity. Scientific archives containing huge volumes of data preclude manual manipulation or intervention and require automated exploration and mining that can at least preclassify information in categories. The large data set from the radio plasma imager (RPI) instrument on board the IMAGE satellite shows a critical need for such exploration in order to identify and archive features of interest in the volumes of visual information. In this research we have developed such a preclassifier through a model of preattentive vision capable of detecting and extracting traces of echoes from the RPI plasmagrams. The overall design of our model complies with Marr's paradigm of vision, where elements of increasing perceptual strength are built bottom up under the Gestalt constraints of good continuation and smoothness. The specifics of the RPI data, however, demanded extension of this paradigm to achieve greater robustness for signature analysis. Our preattentive model now employs a feedback neural network that refines alignment of the oriented edge elements (edgels) detected in the plasmagram image by subjecting them to collective global-scale optimization. The level of interaction between the oriented edgels is determined by their distance and mutual orientation in accordance with the Yen and Finkel model of the striate cortex that encompasses findings in psychophysical studies of human vision. The developed models have been implemented in an operational system "CORPRAL" (Cognitive Online RPI Plasmagram Ranking Algorithm) that currently scans daily submissions of the RPI plasmagrams for the presence of echo traces. Qualifying plasmagrams are tagged in the mission database, making them available for a variety of queries. We discuss CORPRAL performance and its impact on scientific analysis of RPI data.

  11. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    International Nuclear Information System (INIS)

    Karaçalı, Bilge; Vamvakidou, Alexandra P; Tözeren, Aydın

    2007-01-01

    Three-dimensional in vitro culture of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR, accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad and PR-stained cross section images. Proposed image analysis methods offer standardized high throughput profiling of molecular anatomy of tumoroids based on both membrane and nuclei markers that is suitable to rapid large scale investigations of anti-cancer compounds for drug development
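    The k-means pixel-classification step can be illustrated with a toy one-dimensional Lloyd's iteration. This is a sketch only: the study clustered stained-histology pixels into five categories, presumably on multi-channel colour values (in practice one would use a library implementation), whereas this illustration clusters scalar intensities into two.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny Lloyd's k-means on scalar pixel intensities."""
    # Spread initial centroids across the observed intensity range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / max(k - 1, 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: nearest centroid by absolute distance.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Two well-separated intensity populations, k=2 for illustration.
pixels = [10, 12, 11, 200, 205, 198]
centroids, labels = kmeans_1d(pixels, k=2)
```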

  12. Automated Detection of Optic Disc in Fundus Images

    Science.gov (United States)

    Burman, R.; Almazroa, A.; Raahemifar, K.; Lakshminarayanan, V.

    Optic disc (OD) localization is an important preprocessing step in the automated detection of glaucoma in fundus images. An Interval Type-II fuzzy entropy-based thresholding scheme, combined with Differential Evolution (DE), is applied to determine the location of the OD in right- or left-eye retinal fundus images. The algorithm, when applied to 460 fundus images from the MESSIDOR dataset, shows a success rate of 99.07% for 217 normal images and 95.47% for 243 pathological images. The mean computational time is 1.709 s for normal images and 1.753 s for pathological images. These results are important for automated detection of glaucoma and for telemedicine purposes.

  13. Towards Automated Annotation of Benthic Survey Images: Variability of Human Experts and Operational Modes of Automation.

    Directory of Open Access Journals (Sweden)

    Oscar Beijbom

    Full Text Available Global climate change and other anthropogenic stressors have heightened the need to rapidly characterize ecological changes in marine benthic communities across large scales. Digital photography enables rapid collection of survey images to meet this need, but the subsequent image annotation is typically a time-consuming, manual task. We investigated the feasibility of using automated point-annotation to expedite cover estimation of the 17 dominant benthic categories from survey images captured at four Pacific coral reefs. Inter- and intra-annotator variability among six human experts was quantified and compared to semi- and fully-automated annotation methods, which are made available at coralnet.ucsd.edu. Our results indicate high expert agreement for identification of coral genera, but lower agreement for algal functional groups, in particular between turf algae and crustose coralline algae. This indicates the need for unequivocal definitions of algal groups, careful training of multiple annotators, and enhanced imaging technology. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. Furthermore, fully-automated annotation yielded rapid, unbiased cover estimates but with increased variance. These results show that automated annotation can increase spatial coverage and decrease time and financial outlay for image-based reef surveys.
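    The point-annotation cover estimate itself is simple arithmetic: each annotated image point carries one benthic category label, and cover is the per-category fraction of points. A minimal sketch (the function and sample labels are hypothetical, not taken from the study's data):

```python
from collections import Counter

def cover_estimates(point_labels):
    """Percent-cover estimate from point annotations: the fraction of
    annotated points falling in each benthic category."""
    counts = Counter(point_labels)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# Ten annotated points on a hypothetical quadrat image.
labels = ["coral"] * 4 + ["turf"] * 3 + ["CCA"] * 2 + ["sand"]
cover = cover_estimates(labels)
```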

  14. Automated analysis of slitless spectra. II. Quasars

    International Nuclear Information System (INIS)

    Edwards, G.; Beauchemin, M.; Borra, F.

    1988-01-01

    Automated software has been developed to process slitless spectra. The software, described in a previous paper, automatically separates stars from extended objects and quasars from stars. This paper describes the quasar search techniques and discusses the results. The performance of the software is compared and calibrated with a plate taken in a region of SA 57 that has been extensively surveyed by others using a variety of techniques: the proposed automated software performs very well. It is found that an eye search of the same plate is less complete than the automated search: surveys that rely on eye searches suffer from incompleteness starting at least a magnitude brighter than the plate limit. It is shown how the complete automated analysis of a plate and computer simulations are used to calibrate and understand the characteristics of the present data. 20 references

  15. Automated diabetic retinopathy imaging in Indian eyes: A pilot study

    Directory of Open Access Journals (Sweden)

    Rupak Roy

    2014-01-01

    Full Text Available Aim: To evaluate the efficacy of an automated retinal image grading system in diabetic retinopathy (DR) screening. Materials and Methods: Color fundus images of patients in a DR screening project were analyzed for the purpose of the study. For each eye, two sets of images were acquired, one centered on the disc and the other centered on the macula. All images were processed by automated DR screening software (Retmarker). The results were compared to an ophthalmologist's grading of the same set of photographs. Results: 5780 images of 1445 patients were analyzed. Patients were screened into two categories: DR or no DR. Image quality was high, medium and low in 71 (4.91%), 1117 (77.30%) and 257 (17.78%) patients respectively. Specificity and sensitivity for detecting DR in the high, medium and low quality groups were (0.59, 0.91), (0.11, 0.95) and (0.93, 0.14) respectively. Conclusion: The automated retinal image screening system for DR had a high sensitivity in high and medium quality images. Automated DR grading software holds promise for future screening programs.
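    The reported (specificity, sensitivity) pairs follow from standard confusion-matrix arithmetic. A minimal sketch, using hypothetical counts chosen to reproduce the high-quality group's (0.59, 0.91) rather than the study's unreported raw numbers:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 100 eyes with DR, 100 without.
sens, spec = sensitivity_specificity(tp=91, fn=9, tn=59, fp=41)
```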

  16. Automated identification of animal species in camera trap images

    NARCIS (Netherlands)

    Yu, X.; Wang, J.; Kays, R.; Jansen, P.A.; Wang, T.; Huang, T.

    2013-01-01

    Image sensors are increasingly being used in biodiversity monitoring, with each study generating many thousands or millions of pictures. Efficiently identifying the species captured by each image is a critical challenge for the advancement of this field. Here, we present an automated species

  17. On Automating and Standardising Corpus Callosum Analysis in Brain MRI

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl

    2005-01-01

    Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with focus on measurement standardisation. The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements.

  18. An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images.

    Science.gov (United States)

    Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-08-01

    Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide training images; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas was compared between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician. The rate of detection of visible areas was equivalent for the supervised learning program and for our automated self-learning program. The visible areas automatically identified by the self-learning program correlated with the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.

  19. Automated Registration Of Images From Multiple Sensors

    Science.gov (United States)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors are registered automatically with each other and, where possible, on a geographic coordinate grid. A simulated image of the terrain viewed by a sensor is computed from ancillary data, the viewing geometry, and a mathematical model of the physics of imaging. In the proposed registration algorithm, simulated and actual sensor images are matched by an area-correlation technique.
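    Area correlation can be sketched as a brute-force normalized cross-correlation search: slide a patch over the image and keep the offset with the highest correlation. A pure-Python illustration on toy intensity grids (real registration works on full simulated/actual image pairs and subpixel refinement, which this sketch omits):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length flat patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(image, template):
    """Slide `template` over `image`; return (row, col, score) of best NCC."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    flat_t = [v for r in template for v in r]
    best = (0, 0, -2.0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            score = ncc(patch, flat_t)
            if score > best[2]:
                best = (r, c, score)
    return best

image = [[0, 0, 0, 0],
         [0, 9, 7, 0],
         [0, 8, 6, 0],
         [0, 0, 0, 0]]
template = [[9, 7],
            [8, 6]]
row, col, score = best_match(image, template)
```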

  20. Automated region definition for cardiac nitrogen-13-ammonia PET imaging.

    Science.gov (United States)

    Muzik, O; Beanlands, R; Wolfe, E; Hutchins, G D; Schwaiger, M

    1993-02-01

    In combination with PET, the tracer 13N-ammonia can be employed for the noninvasive quantification of myocardial perfusion at rest and after pharmacological stress. The purpose of this study was to develop an analysis method for the quantification of regional myocardial blood flow in the clinical setting. The algorithm includes correction for patient motion, an automated definition of multiple regions and display of absolute flows in polar map format. The effects of partial volume and blood to tissue cross-contamination were accounted for by optimizing the radial position of regions to meet fundamental assumptions of the kinetic model. In order to correct for motion artifacts, the myocardial displacement was manually determined based on edge-enhanced images. The obtained results exhibit the capability of the presented algorithm to noninvasively assess regional myocardial perfusion in the clinical environment.

  1. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
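    The object-linking step described at the end can be sketched as greedy nearest-neighbour track building: a feature in the next slice continues a track when it falls within a threshold radius of the track's last feature, and otherwise starts a new track. The names and data below are illustrative, not the method's actual implementation:

```python
def link_features(slices, radius):
    """Greedily link 2-D feature centroids across successive slices.

    `slices` is a list of lists of (x, y) points, one list per slice.
    A point continues the track whose last point lies within `radius`;
    unmatched points start new single-point tracks.
    """
    tracks = []
    for points in slices:
        unmatched = list(points)
        for track in tracks:
            last = track[-1]
            # Candidate continuations: unmatched points within the radius.
            cands = [p for p in unmatched
                     if (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2
                     <= radius ** 2]
            if cands:
                p = min(cands, key=lambda q: (q[0] - last[0]) ** 2
                        + (q[1] - last[1]) ** 2)
                track.append(p)
                unmatched.remove(p)
        tracks.extend([p] for p in unmatched)
    return tracks

# Two pipe-like features traced through three slices.
slices = [[(0.0, 0.0), (10.0, 10.0)],
          [(0.5, 0.2), (10.2, 9.8)],
          [(1.0, 0.5)]]
tracks = link_features(slices, radius=1.0)
```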

  2. Automated Technology for Verification and Analysis

    DEFF Research Database (Denmark)

    This volume contains the papers presented at the 7th International Symposium on Automated Technology for Verification and Analysis, held during October 13-16 in Macao SAR, China. The primary objective of the ATVA conferences remains the same: to exchange and promote the latest advances of state-of-the-art research on theoretical and practical aspects of automated analysis, verification, and synthesis. Among 74 research papers and 10 tool papers submitted to ATVA 2009, the Program Committee accepted 23 as regular papers and 3 as tool papers. In all, 33 experts from 17 countries worked hard to make sure...

  3. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  4. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Automated image enhancement using power law transformations

    Indian Academy of Sciences (India)

    Automatically enhancing contrast of an image has been a challenging task since the digital image can represent variety of scene types. Trifonov et al (2001) performed automatic contrast enhancement by automatically determining the measure of central tendency of the brightness histogram of an image and shifting and ...
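    The power-law transformation named in the title is the standard point operation s = c · r^γ on normalized intensities, where γ < 1 brightens dark regions and γ > 1 darkens bright ones; automatic enhancement then amounts to choosing γ (and c) from the brightness histogram. A minimal sketch of the transform itself (the choice of γ here is arbitrary, not the paper's automatic selection):

```python
def power_law(pixels, gamma, c=1.0, max_val=255):
    """Power-law (gamma) transform s = c * r**gamma on 8-bit pixels."""
    out = []
    for p in pixels:
        r = p / max_val                     # normalize to [0, 1]
        s = c * (r ** gamma)                # apply the power law
        out.append(round(min(max(s, 0.0), 1.0) * max_val))
    return out

# gamma < 1 lifts the dark end of the range.
dark = [0, 16, 64, 128, 255]
brightened = power_law(dark, gamma=0.5)
```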

  6. Automated analysis of damages for radiation in plastics surfaces

    International Nuclear Information System (INIS)

    Andrade, C.; Camacho M, E.; Tavera, L.; Balcazar, M.

    1990-02-01

    Radiation damage was analysed in a polymer, acrylic, characterised by the optical properties of its polished surfaces, its uniformity and its chemical resistance; it withstands temperatures of up to 150 degrees centigrade and weighs approximately half as much as glass. The objective of this work is the development of a method that automatically analyses the surface damage induced by radiation in plastic materials by means of an image analyser. (Author)

  7. Automated image registration for FDOPA PET studies

    International Nuclear Information System (INIS)

    Kang-Ping Lin; Sung-Cheng Huang; Dan-Chu Yu; Melega, W.; Barrio, J.R.; Phelps, M.E.

    1996-01-01

    In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC) were implemented and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for the resliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods for correcting subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in spatial distribution of FDOPA due to the drug intervention. (author)
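    The first three criteria, and the bidirectional averaging the study found markedly more robust, reduce to short formulas. A sketch on flattened pixel lists (SDPR and SSC are omitted, and `bidirectional` assumes the caller supplies both reslicing directions; function names are illustrative):

```python
def sad(a, b):
    """Sum of absolute differences (lower is better)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def msd(a, b):
    """Mean square difference (lower is better)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cc(a, b):
    """Cross-correlation coefficient (higher is better)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def bidirectional(criterion, resliced_1, img_2, resliced_2, img_1):
    """Average a criterion over both reslicing directions."""
    return 0.5 * (criterion(resliced_1, img_2) + criterion(resliced_2, img_1))

# Toy flattened images.
a = [1, 2, 3, 4]
b = [2, 2, 4, 4]
scores = (sad(a, b), msd(a, b), cc(a, b), bidirectional(sad, a, b, b, a))
```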

  8. FULLY AUTOMATED IMAGE ORIENTATION IN THE ABSENCE OF TARGETS

    Directory of Open Access Journals (Sweden)

    C. Stamatopoulos

    2012-07-01

    Full Text Available Automated close-range photogrammetric network orientation has traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. Over the past decade, automated orientation via feature-based matching (FBM) techniques has attracted renewed research attention in both the photogrammetry and computer vision (CV) communities. This is largely due to advances made towards the goal of automated relative orientation of multi-image networks covering untargeted (markerless) objects. There are now a number of CV-based algorithms, with accompanying open-source software, that can achieve multi-image orientation within narrow-baseline networks. From a photogrammetric standpoint, the results are typically disappointing as the metric integrity of the resulting models is generally poor, or even unknown, while the number of outliers within the image matching and triangulation is large, and generally too large to allow relative orientation (RO) via the commonly used coplanarity equations. On the other hand, there are few examples within the photogrammetric research field of automated markerless camera calibration to metric tolerances, and these too are restricted to narrow-baseline, low-convergence imaging geometry. The objective addressed in this paper is markerless automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. By wide-baseline we imply convergent multi-image configurations with convergence angles of up to around 90°. An associated aim is provision of a fast, fully automated process, which can be performed without user intervention. For this purpose, various algorithms require optimisation to allow parallel processing utilising multiple PC cores and graphics processing units (GPUs).

  9. Computer-automated neutron activation analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Garcia, S.R.

    1983-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. 5 references

  10. Automated quantification of budding Saccharomyces cerevisiae using a novel image cytometry method.

    Science.gov (United States)

    Laverty, Daniel J; Kury, Alexandria L; Kuksin, Dmitry; Pirani, Alnoor; Flanagan, Kevin; Chan, Leo Li-Ying

    2013-06-01

    The measurements of concentration, viability, and budding percentages of Saccharomyces cerevisiae are performed on a routine basis in the brewing and biofuel industries. Generation of these parameters is of great importance in a manufacturing setting, where they can aid in the estimation of product quality, quantity, and fermentation time of the manufacturing process. Specifically, budding percentages can be used to estimate the reproduction rate of yeast populations, which directly correlates with metabolism of polysaccharides and bioethanol production, and can be monitored to maximize production of bioethanol during fermentation. The traditional method involves manual counting using a hemacytometer, but this is time-consuming and prone to human error. In this study, we developed a novel automated method for the quantification of yeast budding percentages using Cellometer image cytometry. The automated method utilizes a dual-fluorescent nucleic acid dye to specifically stain live cells for imaging analysis of unique morphological characteristics of budding yeast. In addition, cell cycle analysis is performed as an alternative method for budding analysis. We were able to show comparable yeast budding percentages between manual and automated counting, as well as cell cycle analysis. The automated image cytometry method is used to analyze and characterize corn mash samples directly from fermenters during standard fermentation. Since concentration, viability, and budding percentages can be obtained simultaneously, the automated method can be integrated into the fermentation quality assurance protocol, which may improve the quality and efficiency of beer and bioethanol production processes.

  11. Automated Micro-Object Detection for Mobile Diagnostics Using Lens-Free Imaging Technology

    Directory of Open Access Journals (Sweden)

    Mohendra Roy

    2016-05-01

    Full Text Available Lens-free imaging technology has been extensively used recently for microparticle and biological cell analysis because of its high throughput, low cost, and simple and compact arrangement. However, this technology still lacks a dedicated and automated detection system. In this paper, we describe a custom-developed automated micro-object detection method for a lens-free imaging system. In our previous work (Roy et al.), we developed a lens-free imaging system using low-cost components. This system was used to generate and capture the diffraction patterns of micro-objects, and a global threshold was used to locate the diffraction patterns. In this work we used the same setup to develop an improved automated detection and analysis algorithm based on adaptive thresholding and clustering of signals. For this purpose, images from the lens-free system were used to understand the features and characteristics of the diffraction patterns of several types of samples. On the basis of this information, we custom-developed an automated algorithm for the lens-free imaging system. Next, all the lens-free images were processed using this custom-developed automated algorithm. The performance of this approach was evaluated by comparing the counting results with standard optical microscope results. We evaluated the counting results for polystyrene microbeads, red blood cells, and HepG2, HeLa, and MCF7 cells. The comparison shows good agreement between the systems, with a correlation coefficient of 0.91 and a linearity slope of 0.877. We also evaluated the automated size profiles of the microparticle samples. This Wi-Fi-enabled lens-free imaging system, along with the dedicated software, possesses great potential for telemedicine applications in resource-limited settings.
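    The adaptive-thresholding step can be sketched in one dimension: a sample counts as part of a diffraction pattern when it exceeds the mean of its local neighbourhood by an offset. This is a generic local-mean threshold, not the paper's exact algorithm, and the real system operates on 2-D images before clustering the detections:

```python
def adaptive_threshold(signal, window=3, offset=0):
    """Local-mean adaptive threshold on a 1-D signal.

    A sample is marked 1 (foreground) when it exceeds the mean of its
    `window`-wide neighbourhood by `offset`, else 0 (background).
    """
    half = window // 2
    out = []
    for i, v in enumerate(signal):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        local_mean = sum(signal[lo:hi]) / (hi - lo)
        out.append(1 if v > local_mean + offset else 0)
    return out

# Two diffraction-like peaks on a flat background.
row = [10, 10, 50, 10, 10, 60, 10]
mask = adaptive_threshold(row, window=3, offset=5)
```

    Unlike a single global threshold, the local mean lets peaks of different heights (50 and 60 here) both be detected against their own backgrounds.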

  12. Comparison of the automated evaluation of phantom mama in digital and digitalized images

    International Nuclear Information System (INIS)

    Santana, Priscila do Carmo

    2011-01-01

    Mammography is an essential tool for diagnosis and early detection of breast cancer, provided it is delivered as a very good quality service. The process of evaluating the quality of radiographic images in general, and of mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. This work compares an automated methodology for the evaluation of digital and digitized images of the Phantom Mama phantom. By applying digital image processing (DIP) techniques, it was possible to determine geometrical and radiometric parameters of the evaluated images. The evaluated parameters include circular details of low contrast, contrast ratio, spatial resolution, tumor masses, optical density and background in Phantom Mama scanned and digitized images. The results for both types of images were evaluated. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for reducing or eliminating subjectivity in both types of images, although the Phantom Mama presents insufficient parameters for spatial resolution evaluation. (author)

  13. Automated designation of tie-points for image-to-image coregistration.

    Science.gov (United States)

    R.E. Kennedy; W.B. Cohen

    2003-01-01

    Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...
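Area-based tie-point techniques like the one described typically score candidate matches with normalized cross-correlation (NCC); whether Kennedy and Cohen use NCC specifically is an assumption here. A minimal exhaustive NCC search for the matching point in a second image might look like:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def find_tie_point(ref, target, y, x, size=5, search=3):
    """Locate the point in `target` matching the patch around (y, x) in
    `ref` by exhaustive NCC search within +/- `search` pixels."""
    h = size // 2
    patch = ref[y - h:y + h + 1, x - h:x + h + 1]
    best, best_score = (y, x), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = target[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
            score = ncc(patch, cand)
            if score > best_score:
                best, best_score = (y + dy, x + dx), score
    return best, best_score

rng = np.random.default_rng(0)
ref = rng.random((20, 20))
target = np.roll(ref, (2, 1), axis=(0, 1))   # target shifted by (2, 1)
(ty, tx), score = find_tie_point(ref, target, 10, 10)
print(ty, tx, round(score, 3))  # 12 11 1.0
```

A production system would add the robustness checks the abstract alludes to, e.g. rejecting matches whose best score is not clearly above the second-best.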

  14. Systems Analysis as a Prelude to Library Automation

    Science.gov (United States)

    Carter, Ruth C.

    1973-01-01

    Systems analysis, as a prelude to library automation, is an inevitable commonplace fact of life in libraries. Maturation of library automation and the systems analysis which precedes its implementation is observed in this article. (55 references) (Author/TW)

  15. Automated hybridization/imaging device for fluorescent multiplex DNA sequencing

    Science.gov (United States)

    Weiss, Robert B.; Kimball, Alvin W.; Gesteland, Raymond F.; Ferguson, F. Mark; Dunn, Diane M.; Di Sera, Leonard J.; Cherry, Joshua L.

    1995-01-01

    A method is disclosed for automated multiplex sequencing of DNA with an integrated automated imaging hybridization chamber system. This system comprises a hybridization chamber device for mounting a membrane containing size-fractionated multiplex sequencing reaction products, apparatus for fluid delivery to the chamber device, imaging apparatus for light delivery to the membrane and image recording of fluorescence emanating from the membrane while in the chamber device, and programmable controller apparatus for controlling operation of the system. The multiplex reaction products are hybridized with a probe, then an enzyme (such as alkaline phosphatase) is bound to a binding moiety on the probe, and a fluorogenic substrate (such as a benzothiazole derivative) is introduced into the chamber device by the fluid delivery apparatus. The enzyme converts the fluorogenic substrate into a fluorescent product which, when illuminated in the chamber device with a beam of light from the imaging apparatus, excites fluorescence of the fluorescent product to produce a pattern of hybridization. The pattern of hybridization is imaged by a CCD camera component of the imaging apparatus to obtain a series of digital signals. These signals are converted by the controller apparatus into a string of nucleotides corresponding to the nucleotide sequence, read by an automated sequence reader. The method and apparatus are also applicable to other membrane-based applications such as colony and plaque hybridization and Southern, Northern, and Western blots.

  16. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    Science.gov (United States)

    Wollman, Adam J. M.; Miller, Helen; Foster, Simon; Leake, Mark C.

    2016-10-01

    Staphylococcus aureus is an important pathogen, giving rise to antimicrobial resistance in cell strains such as Methicillin Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond time scale sampled images of live pathogens at a detection precision of single molecules. We demonstrate this approach using a fluorescent reporter GFP fused to the protein EzrA that localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphologically complex structures of fluorescently labelled proteins present in clusters of other types of cells.

  17. An Automated Method for Semantic Classification of Regions in Coastal Images

    NARCIS (Netherlands)

    Hoonhout, B.M.; Radermacher, M.; Baart, F.; Van der Maaten, L.J.P.

    2015-01-01

    Large, long-term coastal imagery datasets are nowadays a low-cost source of information for various coastal research disciplines. However, the applicability of many existing algorithms for coastal image analysis is limited for these large datasets due to a lack of automation and robustness.

  18. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  19. Automated force volume image processing for biological samples.

    Directory of Open Access Journals (Sweden)

    Pavel Polyakov

    2011-04-01

    Full Text Available Atomic force microscopy (AFM) has now become a powerful technique for investigating, on a molecular level, surface forces, nanomechanical properties of deformable particles, biomolecular interactions, kinetics, and dynamic processes. This paper specifically focuses on the analysis of AFM force curves collected on biological systems, in particular, bacteria. The goal is to provide fully automated tools to achieve theoretical interpretation of force curves on the basis of adequate, available physical models. In this respect, we propose two algorithms, one for the processing of approach force curves and another for the quantitative analysis of retraction force curves. In the former, electrostatic interactions prior to contact between the AFM probe and the bacterium are accounted for, and mechanical interactions operating after contact are described in terms of a Hertz-Hooke formalism. Retraction force curves are analyzed on the basis of the Freely Jointed Chain model. For both algorithms, the quantitative reconstruction of force curves is based on the robust detection of critical points (jumps, changes of slope or changes of curvature) which mark the transitions between the various relevant interactions taking place between the AFM tip and the studied sample during approach and retraction. Once the key regions of separation distance and indentation are detected, the physical parameters describing the relevant interactions operating in these regions are extracted by a regression procedure fitting experiments to theory. The flexibility, accuracy and strength of the algorithms are illustrated with the processing of two force-volume images, which collect a large set of approach and retraction curves measured on a single biological surface. For each force-volume image, several maps are generated, representing the spatial distribution of the searched physical parameters as estimated for each pixel of the force-volume image.
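The Hertzian part of such a fit can be sketched very simply. For a spherical tip of radius R, the Hertz model gives F = (4/3) E* sqrt(R) d^(3/2), which is linear in d^(3/2), so E* follows from a zero-intercept least-squares fit (the paper's actual Hertz-Hooke regression is more elaborate; this is only the textbook core):

```python
import numpy as np

def fit_hertz(indentation, force, radius):
    """Estimate the effective modulus E* from the post-contact part of an
    approach curve via a zero-intercept least-squares fit of F vs d^(3/2)
    (spherical-tip Hertz model: F = (4/3) * E* * sqrt(R) * d^(3/2))."""
    x = indentation ** 1.5
    slope = (x * force).sum() / (x * x).sum()
    return slope / ((4.0 / 3.0) * np.sqrt(radius))

# Synthetic approach curve with a known modulus.
E_true = 2.0e6                      # Pa
R = 20e-9                           # tip radius, m
delta = np.linspace(0, 50e-9, 100)  # indentation depth, m
F = (4.0 / 3.0) * E_true * np.sqrt(R) * delta ** 1.5
print(abs(fit_hertz(delta, F, R) - E_true) / E_true < 1e-9)  # True
```

The harder part, as the abstract emphasizes, is robustly locating the contact point that separates the electrostatic and mechanical regimes before this fit is applied.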

  20. Volumetric measurements of pulmonary nodules: variability in automated analysis tools

    Science.gov (United States)

    Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot

    2007-03-01

    Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications on management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive as well as ANOVA and t-test analysis. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (lung imaging database consortium) study.
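The study's comparison across vendors rests on a one-way ANOVA over the per-tool measurements. As a reminder of what that statistic computes, here is a minimal sketch with made-up (hypothetical) diameter values for three unnamed tools:

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate([np.asarray(g, float) for g in groups]).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical max-diameter measurements (mm) of one nodule by three tools.
tool_a = [6.1, 6.3, 6.2, 6.4]
tool_b = [7.0, 7.2, 7.1, 7.3]
tool_c = [6.1, 6.2, 6.3, 6.2]
F = one_way_anova_F([tool_a, tool_b, tool_c])
print(F > 10)  # True: a large F indicates the tool means differ markedly
```

A significant F for diameters but not for volumes is exactly the pattern the study reports.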

  1. Feasibility of estimation of brain volume and 2-deoxy-2-(18)F-fluoro-D-glucose metabolism using a novel automated image analysis method: application in Alzheimer's disease.

    Science.gov (United States)

    Musiek, Erik S; Saboury, Babak; Mishra, Shipra; Chen, Yufen; Reddin, Janet S; Newberg, Andrew B; Udupa, Jayaram K; Detre, John A; Hofheinz, Frank; Torigian, Drew; Alavi, Abass

    2012-01-01

    The development of clinically applicable quantitative methods for the analysis of brain fluorine-18 fluorodeoxyglucose positron emission tomography ((18)F-FDG-PET) images is a major area of research in many neurologic diseases, particularly Alzheimer's disease (AD). Region of interest visualization, evaluation, and image registration (ROVER) is a novel commercially available software package which provides automated partial-volume-corrected measures of volume and glucose uptake from (18)F-FDG PET data. We performed a pilot study of ROVER analysis of brain (18)F-FDG PET images for the first time in a small cohort of patients with AD and controls. Brain (18)F-FDG-PET and volumetric magnetic resonance imaging (MRI) were performed on 14 AD patients and 18 age-matched controls. Images were subjected to ROVER analysis and to voxel-based analysis using SPM5. Volumes by ROVER were 35% lower than MRI volumes in AD patients (as hypometabolic regions were excluded in ROVER-derived volume measurement), while average ROVER- and MRI-derived cortical volumes were nearly identical in the control population. ROVER-derived whole brain volumes and whole brain metabolic volumetric products (MVP) were significantly lower in AD and accurately distinguished AD patients from controls (areas under the receiver operating characteristic (ROC) curves of 0.89 and 0.86, respectively). This diagnostic accuracy was similar to that of voxel-based analyses. ROVER analysis of (18)F-FDG-PET images provides a unique index of metabolically active brain volume and, as a proof of concept, can accurately distinguish between AD patients and controls. In conclusion, our findings suggest that ROVER may serve as a useful quantitative adjunct to visual or regional assessment and aid analysis of whole-brain metabolism in AD and other neurologic and psychiatric diseases.

  2. Automated whole animal bio-imaging assay for human cancer dissemination.

    Directory of Open Access Journals (Sweden)

    Veerander P S Ghotra

    Full Text Available A quantitative bio-imaging platform is developed for analysis of human cancer dissemination in a short-term vertebrate xenotransplantation assay. Six days after implantation of cancer cells in zebrafish embryos, automated imaging in 96 well plates coupled to image analysis algorithms quantifies spreading throughout the host. Findings in this model correlate with behavior in long-term rodent xenograft models for panels of poorly- versus highly malignant cell lines derived from breast, colorectal, and prostate cancer. In addition, cancer cells with scattered mesenchymal characteristics show higher dissemination capacity than cell types with epithelial appearance. Moreover, RNA interference establishes the metastasis-suppressor role for E-cadherin in this model. This automated quantitative whole animal bio-imaging assay can serve as a first-line in vivo screening step in the anti-cancer drug target discovery pipeline.

  3. Automated Pointing of Cardiac Imaging Catheters.

    Science.gov (United States)

    Loschak, Paul M; Brattain, Laura J; Howe, Robert D

    2013-12-31

    Intracardiac echocardiography (ICE) catheters enable high-quality ultrasound imaging within the heart, but their use in guiding procedures is limited due to the difficulty of manually pointing them at structures of interest. This paper presents the design and testing of a catheter steering model for robotic control of commercial ICE catheters. The four actuated degrees of freedom (4-DOF) are two catheter handle knobs to produce bi-directional bending in combination with rotation and translation of the handle. An extra degree of freedom in the system allows the imaging plane (dependent on orientation) to be directed at an object of interest. A closed form solution for forward and inverse kinematics enables control of the catheter tip position and the imaging plane orientation. The proposed algorithms were validated with a robotic test bed using electromagnetic sensor tracking of the catheter tip. The ability to automatically acquire imaging targets in the heart may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissue structures that are not visible using standard fluoroscopic guidance. Although the system has been developed and tested for manipulating ICE catheters, the methods described here are applicable to any long thin tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.

  4. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  5. Image analysis for DNA sequencing

    International Nuclear Information System (INIS)

    Palaniappan, K.; Huang, T.S.

    1991-01-01

    This paper reports that there is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA in efforts such as the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
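The band-location step described above reduces to peak detection in a one-dimensional lane profile. A deliberately simple stand-in (the paper's waveform analysis is certainly more sophisticated) is local-maximum detection above a height threshold:

```python
import numpy as np

def find_bands(profile, min_height):
    """Return indices of local maxima above min_height in a 1-D lane
    profile - a simple stand-in for autoradiograph band detection."""
    p = np.asarray(profile, dtype=float)
    idx = []
    for i in range(1, len(p) - 1):
        if p[i] >= min_height and p[i] > p[i - 1] and p[i] >= p[i + 1]:
            idx.append(i)
    return idx

# Synthetic profile: three well-separated Gaussian bands.
x = np.arange(100)
profile = sum(np.exp(-0.5 * ((x - c) / 2.0) ** 2) for c in (20, 50, 80))
print(find_bands(profile, 0.5))  # [20, 50, 80]
```

Real autoradiograph profiles require the adaptive background subtraction mentioned in the abstract before such a detector becomes reliable.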

  6. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  7. Automated information retrieval system for radioactivation analysis

    International Nuclear Information System (INIS)

    Lambrev, V.G.; Bochkov, P.E.; Gorokhov, S.A.; Nekrasov, V.V.; Tolstikova, L.I.

    1981-01-01

    An automated information retrieval system for radioactivation analysis has been developed. An ES-1022 computer and problem-oriented software, ''The description information search system'', were used for the purpose. The main aspects and sources of the system's information fund and the characteristics of its information retrieval language are reported, and examples of question-answer dialogue are given. Two modes can be used: selective information distribution and retrospective search [ru

  8. Automated Program Analysis for Cybersecurity (APAC)

    Science.gov (United States)

    2016-07-14

    Automated Program Analysis for Cybersecurity (APAC). Five Directions, Inc. Final technical report, July 2016. Contract number FA8750-14-C-0050; program element 61101E. Author: William Arbaugh.

  9. Automated processing of webcam images for phenological classification.

    Directory of Open Access Journals (Sweden)

    Ludwig Bothmann

    Full Text Available Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural motif, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows determination of the dates of phenological change points, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams. In particular, this precludes applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13988 webcams from the AMOS database. All developed methods are implemented in the
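The semi-supervised approach described above has two ingredients: a per-pixel percentage-greenness time series, and selection of pixels whose series correlates with a prototype pixel. A minimal sketch (the correlation threshold and the greenness definition G/(R+G+B) are common conventions, assumed here rather than taken from the paper) is:

```python
import numpy as np

def percentage_greenness(rgb):
    """Per-pixel greenness G / (R + G + B) for an image stack (T, H, W, 3)."""
    rgb = rgb.astype(float)
    return rgb[..., 1] / rgb.sum(axis=-1)

def select_roi(green_ts, prototype_ts, r_min=0.9):
    """Semi-supervised ROI: keep pixels whose greenness time series
    correlates strongly with a prototype pixel's series."""
    T = green_ts.shape[0]
    g = green_ts.reshape(T, -1)
    g = g - g.mean(axis=0)
    p = prototype_ts - prototype_ts.mean()
    r = (g * p[:, None]).sum(0) / (np.sqrt((g ** 2).sum(0) * (p ** 2).sum())
                                   + 1e-12)
    return (r >= r_min).reshape(green_ts.shape[1:])

# Synthetic year: top-half pixels green up seasonally, bottom half do not.
t = np.arange(12)
season = 0.3 + 0.2 * np.sin(2 * np.pi * t / 12)
stack = np.full((12, 4, 4, 3), 1.0 / 3.0)     # neutral grey everywhere
stack[:, :2, :, 1] = season[:, None, None]    # seasonal green channel on top
green = percentage_greenness(stack)
roi = select_roi(green, green[:, 0, 0])       # prototype: a top-half pixel
print(roi[:2].all(), roi[2:].any())  # True False
```

The unsupervised alternative in the paper replaces the prototype correlation with clustering on singular-value-decomposition scores of the same pixel-by-time matrix.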

  10. Semi-automated retinal vessel analysis in nonmydriatic fundus photography.

    Science.gov (United States)

    Schuster, Alexander Karl-Georg; Fischer, Joachim Ernst; Vossmerbaeumer, Urs

    2014-02-01

    Funduscopic assessment of the retinal vessels may be used to assess the health status of the microcirculation and as a component in the evaluation of cardiovascular risk factors. Typically, the evaluation is restricted to morphological appreciation without strict quantification. Our purpose was to develop and validate a software tool for semi-automated quantitative analysis of the retinal vasculature in nonmydriatic fundus photography. MATLAB software was used to develop a semi-automated image recognition and analysis tool for the determination of the arterial-venous (A/V) ratio in the central vessel equivalent on 45° digital fundus photographs. Validity and reproducibility of the results were ascertained using nonmydriatic photographs of 50 eyes from 25 subjects recorded with a 3D OCT device (Topcon Corp.). Two hundred and thirty-three eyes of 121 healthy subjects were evaluated to define normative values. A software tool was developed using image thresholds for vessel recognition and vessel width calculation in a semi-automated three-step procedure: vessel recognition on the photograph and artery/vein designation, width measurement, and calculation of central retinal vessel equivalents. The mean vessel recognition rate was 78%, the vessel class designation rate 75%, and reproducibility between 0.78 and 0.91. The mean A/V ratio was 0.84. Application to a healthy normative cohort showed high congruence with previously published manual methods. Processing time per image was one minute. Quantitative geometrical assessment of the retinal vasculature may be performed in a semi-automated manner using dedicated software tools. Yielding reproducible numerical data within a short time, this may add value to mere morphological estimates in the clinical evaluation of fundus photographs. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
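The width-measurement step can be illustrated with a toy cross-sectional intensity profile: vessels appear as dark valleys against the fundus background, and one simple (hypothetical, not the paper's) width estimate counts the samples darker than the half-depth level. The A/V ratio is then just the ratio of the resulting arterial and venous equivalents:

```python
import numpy as np

def vessel_width_halfdepth(profile, background):
    """Vessel width from a cross-sectional intensity profile: the number
    of samples darker than halfway between background and profile minimum.
    A simplified stand-in for the paper's width calculation."""
    profile = np.asarray(profile, dtype=float)
    half = (background + profile.min()) / 2.0
    return int((profile < half).sum())

def av_ratio(artery_widths, vein_widths):
    """Simplified arterio-venous ratio: mean arterial / mean venous width."""
    return np.mean(artery_widths) / np.mean(vein_widths)

bg = 200.0
artery = [200, 200, 160, 90, 80, 90, 160, 200, 200]     # narrow vessel
vein = [200, 180, 120, 70, 60, 60, 70, 120, 180, 200]   # wider, darker
wa = vessel_width_halfdepth(artery, bg)
wv = vessel_width_halfdepth(vein, bg)
print(wa, wv, round(av_ratio([wa], [wv]), 2))  # 3 6 0.5
```

In the actual tool, widths from several arteries and veins are combined into central retinal vessel equivalents before the ratio is formed.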

  11. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    Science.gov (United States)

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-11-23

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation.
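The Laplacian-of-Gaussian filter named in the abstract is a standard blob detector: it responds strongly where the image contains approximately Gaussian-shaped structures of matching scale. A minimal sketch (kernel size, sigma, and the naive convolution are illustrative choices, not NIA's implementation):

```python
import numpy as np

def log_kernel(sigma, size):
    """Laplacian-of-Gaussian kernel for blob detection."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()        # zero mean: no response to flat regions

def convolve2d(img, k):
    """Naive valid-mode 2-D correlation (the LoG kernel is symmetric)."""
    kh, kw = k.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

# A bright blob on a noisy background: the LoG response peaks at its centre.
rng = np.random.default_rng(1)
img = 0.05 * rng.random((21, 21))
yy, xx = np.mgrid[:21, :21]
img += np.exp(-(((yy - 10) ** 2 + (xx - 10) ** 2) / (2 * 2.0 ** 2)))
resp = -convolve2d(img, log_kernel(2.0, 9))  # negate: bright blobs -> maxima
peak = np.unravel_index(resp.argmax(), resp.shape)
print(peak)  # blob centre in valid-mode coordinates, near (6, 6)
```

NIA then links such detections through graphical models (Hidden Markov Model, Fully Connected Chain Model) to trace neurites, which is what lets it reject high-intensity non-neuronal pixels.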

  12. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

    Full Text Available Computed tomographic (CT images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
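The voxelwise comparison described above amounts to computing, for each voxel of the spatially normalized patient scan, a z score against the control group and flagging extreme values. A minimal sketch (the z threshold of 3 is an illustrative choice, not the paper's parameter):

```python
import numpy as np

def lesion_map(patient, controls, z_thresh=3.0):
    """Voxelwise comparison of one normalized patient image against a
    control group: flag voxels with |z| above the threshold."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1) + 1e-6   # guard against zero variance
    z = (patient - mu) / sd
    return np.abs(z) > z_thresh, z

# Synthetic single-slice example: 20 control "scans" and a patient with a
# hypo-intense (infarct-like) region.
rng = np.random.default_rng(2)
controls = 100 + rng.normal(0, 2, size=(20, 16, 16))
patient = 100 + rng.normal(0, 2, size=(16, 16))
patient[4:8, 4:8] -= 30                        # hypo-intense lesion
mask, z = lesion_map(patient, controls)
print(mask[4:8, 4:8].all())  # True: lesion voxels are flagged
```

The sign of z further distinguishes the two cases in the abstract: strongly negative values suggest infarct, strongly positive values hemorrhage.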

  13. Automated analysis of brachial ultrasound time series

    Science.gov (United States)

    Liang, Weidong; Browning, Roger L.; Lauer, Ronald M.; Sonka, Milan

    1998-07-01

    Atherosclerosis begins in childhood with the accumulation of lipid in the intima of arteries to form fatty streaks, and advances through adult life, when occlusive vascular disease may result in coronary heart disease, stroke and peripheral vascular disease. Non-invasive B-mode ultrasound has been found useful in studying risk factors in the symptom-free population. A large amount of data is acquired from continuous imaging of the vessels in a large study population. A high-quality brachial vessel diameter measurement method is necessary so that accurate diameters can be measured consistently in all frames of a sequence and across different observers. Though a human expert has the advantage over automated computer methods in recognizing noise during diameter measurement, manual measurement suffers from inter- and intra-observer variability. It is also time-consuming. An automated measurement method is presented in this paper which utilizes quality assurance approaches to adapt to specific image features and to recognize and minimize the effect of noise. Experimental results showed the method's potential for clinical usage in epidemiological studies.

  14. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  15. An automated and simple method for brain MR image extraction

    Directory of Open Access Journals (Sweden)

    Zhu Zixin

    2011-09-01

    Full Text Available Abstract Background The extraction of brain tissue from magnetic resonance head images is an important image processing step for the analysis of neuroimage data. The authors have developed an automated and simple brain extraction method using an improved geometric active contour model. Methods The method uses an improved geometric active contour model which not only solves the boundary leakage problem but is also less sensitive to intensity inhomogeneity. The method defines the initial function as a binary level set function to improve computational efficiency. The method is applied both to our data and to Internet brain MR data provided by the Internet Brain Segmentation Repository. Results The results obtained from our method are compared with manual segmentation results using multiple indices. In addition, the method is compared to two popular methods, Brain Extraction Tool and Model-based Level Set. Conclusions The proposed method can provide automated and accurate brain extraction results with high efficiency.

  16. Internet of things and automation of imaging: beyond representationalism

    Directory of Open Access Journals (Sweden)

    2016-09-01

    Full Text Available There is no doubt that the production of digital imagery calls for a major update of the theoretical apparatus: what until now was perceived solely or primarily as a stable representation of the world gives way to the image understood in terms of “the continuous actualization of networked data” or the “networked terminal.” In this article I argue that analysis of this new visual environment should not be limited to the procedures of data processing. What also invites serious investigation is the reliance of contemporary media ecology on wireless communication, which according to Adrian Mackenzie functions as “prepositions (‘at,’ ‘in,’ ‘with,’ ‘by,’ ‘between,’ ‘near,’ etc.) in the grammar of contemporary media.” This seems especially important in the case of the imagery accompanying some instances of the internet of things, where a considerable part of the networked imagery is produced in a fully automated and machinic way. One example is a crowdsourced air pollution monitoring platform consisting of networked sensors transmitting signals and data which are then visualized as graphs and maps through the IoT service provider, Xively.

  17. Image mosaicing for automated pipe scanning

    International Nuclear Information System (INIS)

    Summan, Rahul; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Pierce, Gareth; Forrester, Cailean; Bolton, Gary

    2015-01-01

    Remote visual inspection (RVI) is critical for the inspection of the interior condition of pipelines, particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time-intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature-based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and an encoder, respectively. Such a model affords a global view of the data, thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe, thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed, while the latter demonstrates the efficacy of the technique in practice.

  18. Image mosaicing for automated pipe scanning

    Energy Technology Data Exchange (ETDEWEB)

    Summan, Rahul, E-mail: rahul.summan@strath.ac.uk; Dobie, Gordon, E-mail: rahul.summan@strath.ac.uk; Guarato, Francesco, E-mail: rahul.summan@strath.ac.uk; MacLeod, Charles, E-mail: rahul.summan@strath.ac.uk; Marshall, Stephen, E-mail: rahul.summan@strath.ac.uk; Pierce, Gareth [Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Forrester, Cailean [Inspectahire Instrument Company Ltd, Units 10 -12 Whitemyres Business Centre, Whitemyres Avenue, Aberdeen, AB16 6HQ (United Kingdom); Bolton, Gary [National Nuclear Laboratory, Chadwick House, Warrington Road, Birchwood Park, Warrington, WA3 6AE (United Kingdom)

    2015-03-31

    Remote visual inspection (RVI) is critical for the inspection of the interior condition of pipelines, particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time-intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature-based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and an encoder, respectively. Such a model affords a global view of the data, thus permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe, thus providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed, while the latter demonstrates the efficacy of the technique in practice.
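    The paper's pipeline is feature-based structure from motion; as a minimal illustration of image-to-image registration, a building block of mosaicing rather than the authors' method, the circular translation between two overlapping frames can be estimated with phase correlation. The frame data and shift below are synthetic.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (row, col) shift s such that a == roll(b, s)."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    size = np.array(a.shape)
    peak[peak > size // 2] -= size[peak > size // 2]  # wrap to negative shifts
    return tuple(int(v) for v in peak)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))
dy, dx = phase_correlation(shifted, frame)
```

Phase correlation needs no detectable features, which is one reason inertial and encoder estimates are valuable when feature-based methods struggle inside featureless pipe bores.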

  19. Automated morphological analysis of bone marrow cells in microscopic images for diagnosis of leukemia: nucleus-plasma separation and cell classification using a hierarchical tree model of hematopoiesis

    Science.gov (United States)

    Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian

    2016-03-01

    The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using bright-field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason a computer-assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells into 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier, more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could apply such an approach for the pre-classification of bone marrow cells and thereby shorten the examination time.
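    The idea of a hierarchical tree classifier that returns "a classification together with an explanation" can be sketched as a descent through a class hierarchy. The tree below is a deliberately tiny, hypothetical stand-in for the 16-class hematopoiesis model; the feature names, thresholds, and class labels are invented, and a real system would use trained per-node classifiers.

```python
def classify(node, x):
    """Descend the hierarchy; return the leaf class plus the decision path."""
    path = []
    while isinstance(node, dict):
        branch = node["rule"](x)
        path.append(branch)          # the path doubles as the explanation
        node = node["children"][branch]
    return node, path

# Toy, hypothetical stand-in for the hematopoiesis tree.
# Feature vector x = (nucleus_to_cell_ratio, has_granules).
tree = {
    "rule": lambda x: "myeloid" if x[1] else "lymphoid",
    "children": {
        "myeloid": {
            "rule": lambda x: "blast" if x[0] > 0.8 else "mature",
            "children": {"blast": "myeloblast", "mature": "neutrophil"},
        },
        "lymphoid": "lymphocyte",
    },
}

label, path = classify(tree, (0.9, True))   # a large-nucleus, granulated cell
```

The returned path (here, myeloid then blast) is exactly the kind of explanation the abstract attributes to classification trees.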

  20. Automated 3-D Detection of Dendritic Spines from In Vivo Two-Photon Image Stacks.

    Science.gov (United States)

    Singh, P K; Hernandez-Herrera, P; Labate, D; Papadakis, M

    2017-10-01

    Despite the significant advances in the development of automated image analysis algorithms for the detection and extraction of neuronal structures, current software tools still have numerous limitations when it comes to the detection and analysis of dendritic spines. The problem is especially challenging in in vivo imaging, where the difficulty of extracting morphometric properties of spines is compounded by the lower image resolution and contrast levels native to two-photon laser microscopy. To address this challenge, we introduce a new computational framework for the automated detection and quantitative analysis of dendritic spines in in vivo multi-photon imaging. This framework includes: (i) a novel preprocessing algorithm that enhances spines so that they are included in the binarized volume produced during the segmentation of foreground from background; (ii) the mathematical foundation of this algorithm; and (iii) an algorithm for detecting spine locations relative to the centerline trace and separating them from the branches to which they are attached. This framework enables the computation of a wide range of geometric features, such as spine length, spatial distribution and spine volume, in a high-throughput fashion. We illustrate our approach for the automated extraction of dendritic spine features in time-series multi-photon images of layer 5 cortical excitatory neurons from the mouse visual cortex.
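    Step (iii), detecting spine locations relative to the centerline trace, can be illustrated by flagging candidate points whose distance to the traced centerline exceeds the dendrite radius. This is a simplified 2D sketch, not the paper's algorithm, and all coordinates and the radius are invented.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def spine_candidates(points, centerline, radius):
    """Keep points farther from the centerline trace than the dendrite radius."""
    out = []
    for p in points:
        d = min(point_to_segment(p, centerline[i], centerline[i + 1])
                for i in range(len(centerline) - 1))
        if d > radius:
            out.append((tuple(p), d))   # candidate spine tip and its offset
    return out

centerline = np.array([[0.0, 0.0], [10.0, 0.0]])   # traced dendrite axis
points = np.array([[5.0, 0.5], [5.0, 3.0], [9.0, -2.5]])
cands = spine_candidates(points, centerline, radius=1.0)
```

The offsets returned here are the raw material for geometric features such as spine length.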

  1. Automating risk analysis of software design models.

    Science.gov (United States)

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increasing number of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling, two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  2. Automating Risk Analysis of Software Design Models

    Directory of Open Access Journals (Sweden)

    Maxime Frydman

    2014-01-01

    Full Text Available The growth of the internet and networked systems has exposed software to an increasing number of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling, two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.
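    The identification-tree idea can be sketched as rules applied to every flow of a data flow diagram, each rule carrying a suggested mitigation. The miniature DFD, the single rule, and the threat and mitigation names below are hypothetical illustrations, not taken from AutSEC.

```python
# Hypothetical miniature data flow diagram: (source, sink, flow properties).
dfd = [
    ("browser", "web_app", {"channel": "http", "carries_credentials": True}),
    ("web_app", "db", {"channel": "tls", "carries_credentials": False}),
]

# Each rule plays the role of one identification-tree leaf: a condition on a
# flow, the threat it identifies, and the mitigation it advises.
rules = [
    {
        "threat": "credential sniffing",
        "applies": lambda f: f["carries_credentials"] and f["channel"] != "tls",
        "mitigation": "enable TLS on the flow",
    },
]

def analyze(dfd, rules):
    """Match every rule against every flow; collect threat/mitigation findings."""
    findings = []
    for src, dst, flow in dfd:
        for r in rules:
            if r["applies"](flow):
                findings.append((src, dst, r["threat"], r["mitigation"]))
    return findings

findings = analyze(dfd, rules)
```

A real identification tree would organize such conditions hierarchically and weigh mitigation cost, but the rule-over-flows traversal is the core mechanism.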

  3. Full second order chromatographic/spectrometric data matrices for automated sample identification and component analysis by non-data-reducing image analysis

    DEFF Research Database (Denmark)

    Nielsen, Niles-Peter Vest; Smedsgaard, Jørn; Frisvad, Jens Christian

    1999-01-01

    A data analysis method is proposed for identification and for confirmation of classification schemes, based on single- or multiple-wavelength chromatographic profiles. The proposed method works directly on the chromatographic data without data reduction procedures such as peak area or retention...... index calculation. Chromatographic matrices from analysis of previously identified samples are used for generating a reference chromatogram for each class, and unidentified samples are compared with all reference chromatograms by calculating a resemblance measure for each reference. Once the method...... is configured, subsequent sample identification is automatic. As an example of a further development, it is shown how the method allows identification of characteristic sample components by local similarity calculations, thus finding common components within a given class as well as component differences between...
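    The scheme described, one reference chromatogram per class plus a resemblance measure computed on full profiles, can be sketched as follows. Pearson correlation is used here as one plausible resemblance measure (the abstract does not specify the measure), and the single-peak chromatograms are synthetic.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)   # retention-time axis

def peak(center, width=0.02):
    """Synthetic Gaussian elution peak over the retention-time axis."""
    return np.exp(-((t - center) / width) ** 2)

rng = np.random.default_rng(7)
train = {
    "A": [peak(0.3) + 0.01 * rng.normal(size=t.size) for _ in range(3)],
    "B": [peak(0.7) + 0.01 * rng.normal(size=t.size) for _ in range(3)],
}

# One reference chromatogram per class: the mean of its training profiles,
# with no peak-area or retention-index reduction.
refs = {cls: np.mean(profiles, axis=0) for cls, profiles in train.items()}

def identify(profile, refs):
    """Assign the class whose reference chromatogram resembles the profile most."""
    scores = {cls: np.corrcoef(profile, ref)[0, 1] for cls, ref in refs.items()}
    return max(scores, key=scores.get), scores

label, scores = identify(peak(0.3) + 0.01 * rng.normal(size=t.size), refs)
```

Working on the full profile, as here, is what lets the method later localize class-characteristic components by computing the same similarity over local windows.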

  4. Automated imaging dark adaptometer for investigating hereditary retinal degenerations

    Science.gov (United States)

    Azevedo, Dario F. G.; Cideciyan, Artur V.; Regunath, Gopalakrishnan; Jacobson, Samuel G.

    1995-05-01

    We designed and built an automated imaging dark adaptometer (AIDA) to increase accuracy, reliability, versatility and speed of dark adaptation testing in patients with hereditary retinal degenerations. AIDA increases test accuracy by imaging the ocular fundus for precise positioning of bleaching and stimulus lights. It improves test reliability by permitting continuous monitoring of patient fixation. Software control of stimulus presentation provides broad testing versatility without sacrificing speed. AIDA promises to facilitate the measurement of dark adaptation in studies of the pathophysiology of retinal degenerations and in future treatment trials of these diseases.

  5. Integrating two spectral imaging systems in an automated mineralogy application

    CSIR Research Space (South Africa)

    Harris, D

    2009-11-01

    Full Text Available A system for the automated analysis and sorting of mineral samples has been developed to assist in the concentration of heavy mineral samples in the diamond exploration process. These samples consist of irregularly shaped mineral grains ranging from...

  6. Advanced biomedical image analysis

    CERN Document Server

    Haidekker, Mark A

    2010-01-01

    "This book covers the four major areas of image processing: Image enhancement and restoration, image segmentation, image quantification and classification, and image visualization. Image registration, storage, and compression are also covered. The text focuses on recently developed image processing and analysis operators and covers topical research"--Provided by publisher.

  7. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    International Nuclear Information System (INIS)

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E.

    2014-01-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of −0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
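    The CNR measurement being automated here can be sketched with one common definition, (mean_ROI − mean_background) / std_background; the abstract does not state which variant the authors used, and the simulated image, ROI positions, and contrast value below are invented.

```python
import numpy as np

def cnr(image, roi, bg):
    """Contrast-to-noise ratio of an insert ROI against a background ROI,
    using the common definition (mean_roi - mean_bg) / std_bg."""
    r, b = image[roi], image[bg]
    return (r.mean() - b.mean()) / b.std()

rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (64, 64))   # uniform phantom background + noise
img[20:30, 20:30] += 25.0                # simulated contrast-detail insert
roi = (slice(20, 30), slice(20, 30))
bg = (slice(40, 60), slice(40, 60))
value = cnr(img, roi, bg)
```

With insert contrast 25 and noise sigma 5, the measured CNR lands near 5; the registration step in the paper serves to place these ROIs automatically on every unit's image.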

  8. Automating Trend Analysis for Spacecraft Constellations

    Science.gov (United States)

    Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor)

    2001-01-01

    Spacecraft trend analysis is a vital mission operations function performed by satellite controllers and engineers, who perform detailed analyses of engineering telemetry data to diagnose subsystem faults and to detect trends that may potentially lead to degraded subsystem performance or failure in the future. It is this latter function that is of greatest importance, for careful trending can often predict or detect events that may lead to a spacecraft's entry into safe-hold. Early prediction and detection of such events could result in the avoidance of, or rapid return to service from, spacecraft safing, which not only results in reduced recovery costs but also in a higher overall level of service for the satellite system. Contemporary spacecraft trending activities are manually intensive and are primarily performed diagnostically after a fault occurs, rather than proactively to predict its occurrence. They also tend to rely on information systems and software that are outdated when compared to current technologies. When coupled with the fact that flight operations teams often have limited resources, proactive trending opportunities are limited, and detailed trend analysis is often reserved for critical responses to safe holds or other on-orbit events such as maneuvers. While the contemporary trend analysis approach has sufficed for current single-spacecraft operations, it will be unfeasible for NASA's planned and proposed space science constellations. Missions such as the Dynamics, Reconnection and Configuration Observatory (DRACO), for example, are planning to launch as many as 100 'nanospacecraft' to form a homogeneous constellation. A simple extrapolation of resources and manpower based on single-spacecraft operations suggests that trending for such a large spacecraft fleet will be unmanageable, unwieldy, and cost-prohibitive. It is therefore imperative that an approach to automating the spacecraft trend analysis function be studied, developed, and applied to
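    The proactive trending advocated above can be sketched as fitting a least-squares slope to a telemetry channel and extrapolating it against an operating limit. The channel, limit, and horizon below are invented for illustration; a fleet-scale system would run such checks across every channel of every spacecraft.

```python
import numpy as np

def trend_slope(t, y):
    """Least-squares slope of a telemetry channel (units per unit time)."""
    A = np.vstack([t, np.ones_like(t)]).T
    slope, _ = np.linalg.lstsq(A, y, rcond=None)[0]
    return slope

def flag_degradation(t, y, limit, horizon):
    """Proactive check: extrapolate the fitted trend and flag the channel if it
    would cross `limit` within `horizon` time units."""
    projected = y[-1] + trend_slope(t, y) * horizon
    return projected > limit, projected

t = np.arange(100.0)
temp = 20.0 + 0.05 * t   # slowly rising subsystem temperature (synthetic)
alarm, proj = flag_degradation(t, temp, limit=28.0, horizon=100.0)
```

Raising the flag before the limit is crossed is precisely the early-warning behavior that distinguishes proactive trending from after-the-fault diagnosis.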

  9. Automated Steel Cleanliness Analysis Tool (ASCAT)

    Energy Technology Data Exchange (ETDEWEB)

    Gary Casuccio (RJ Lee Group); Michael Potter (RJ Lee Group); Fred Schwerer (RJ Lee Group); Dr. Richard J. Fruehan (Carnegie Mellon University); Dr. Scott Story (US Steel)

    2005-12-30

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCAT™) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized “intelligent” software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steel making process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, are crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment

  10. Automated Steel Cleanliness Analysis Tool (ASCAT)

    International Nuclear Information System (INIS)

    Gary Casuccio; Michael Potter; Fred Schwerer; Richard J. Fruehan; Dr. Scott Story

    2005-01-01

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCAT™) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized “intelligent” software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steel making process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, are crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment/steel cleanliness; slab, billet

  11. Automated reasoning applications to design analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Given the necessary relationships and definitions of design functions and components, validation of system incarnation (the physical product of design) and sneak function analysis can be achieved via automated reasoners. The relationships and definitions must define the design specification and incarnation functionally. For the design specification, the hierarchical functional representation is based on physics and engineering principles and bounded by design objectives and constraints. The relationships and definitions of the design incarnation are manifested as element functional definitions, state relationship to functions, functional relationship to direction, element connectivity, and functional hierarchical configuration

  12. Automated quantification and analysis of mandibular asymmetry

    DEFF Research Database (Denmark)

    Darvann, T. A.; Hermann, N. V.; Larsen, P.

    2010-01-01

    We present an automated method of spatially detailed 3D asymmetry quantification in mandibles extracted from CT and apply it to a population of infants with unilateral coronal synostosis (UCS). An atlas-based method employing non-rigid registration of surfaces is used for determining deformation ...... after mirroring the mandible across the MSP. A principal components analysis of asymmetry characterizes the major types of asymmetry in the population, and successfully separates the asymmetric UCS mandibles from a number of less asymmetric mandibles from a control population.
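    The principal components analysis step can be sketched on synthetic per-vertex asymmetry magnitudes: one dominant asymmetry mode separates a strongly asymmetric (UCS-like) group from a control-like group. All data below are simulated; in the paper these rows would come from non-rigid surface registration after mirroring across the MSP.

```python
import numpy as np

rng = np.random.default_rng(0)
mode = np.sin(np.linspace(0, np.pi, 50))   # one dominant asymmetry pattern
# Per-mandible severity: 10 UCS-like cases, 10 control-like cases (synthetic).
severity = np.concatenate([rng.normal(3.0, 0.5, 10), rng.normal(0.0, 0.2, 10)])
X = np.outer(severity, mode) + rng.normal(0.0, 0.05, (20, 50))

Xc = X - X.mean(axis=0)                    # center per-vertex asymmetries
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                           # each mandible's score on mode 1
explained = S[0] ** 2 / np.sum(S ** 2)     # variance fraction of mode 1
```

The first principal component captures nearly all the variance, and its scores cleanly separate the two groups, mirroring the separation reported in the abstract.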

  13. Automated metabolic gas analysis systems: a review.

    Science.gov (United States)

    Macfarlane, D J

    2001-01-01

    The use of automated metabolic gas analysis systems or metabolic measurement carts (MMC) in exercise studies is common throughout the industrialised world. They have become essential tools for diagnosing many hospital patients, especially those with cardiorespiratory disease. Moreover, the measurement of maximal oxygen uptake (VO2max) is routine for many athletes in fitness laboratories and has become a de facto standard in spite of its limitations. The development of metabolic carts has also facilitated the noninvasive determination of the lactate threshold and cardiac output, respiratory gas exchange kinetics, as well as studies of outdoor activities via small portable systems that often use telemetry. Although the fundamental principles behind the measurement of oxygen uptake (VO2) and carbon dioxide production (VCO2) have not changed, the techniques used have, and indeed, some have almost turned through a full circle. Early scientists often employed a manual Douglas bag method together with separate chemical analyses, but the need for faster and more efficient techniques fuelled the development of semi- and fully automated systems by private and commercial institutions. Yet, recently some scientists are returning to the traditional Douglas bag or Tissot-spirometer methods, or are using less complex automated systems to not only save capital costs, but also to have greater control over the measurement process. Over the last 40 years, a considerable number of automated systems have been developed, with over a dozen commercial manufacturers producing in excess of 20 different automated systems. The validity and reliability of all these different systems is not well known, with relatively few independent studies having been published in this area. For comparative studies to be possible and to facilitate greater consistency of measurements in test-retest or longitudinal studies of individuals, further knowledge about the performance characteristics of these
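    The core computation these systems automate, oxygen uptake from expired ventilation and gas fractions, can be sketched with the standard Haldane transformation (nitrogen is treated as inert to recover inspired volume). This is a textbook formulation, not any particular cart's algorithm; STPD correction and device calibration are omitted, and the example gas fractions are illustrative.

```python
def vo2_haldane(ve, fio2, fico2, feo2, feco2):
    """Oxygen uptake (L/min) from expired ventilation VE (L/min) and inspired/
    expired gas fractions, via the Haldane transformation:
        VI  = VE * (1 - FEO2 - FECO2) / (1 - FIO2 - FICO2)
        VO2 = VI * FIO2 - VE * FEO2
    """
    vi = ve * (1.0 - feo2 - feco2) / (1.0 - fio2 - fico2)
    return vi * fio2 - ve * feo2

# Heavy-exercise example: VE = 100 L/min, room air inspired.
vo2 = vo2_haldane(ve=100.0, fio2=0.2093, fico2=0.0004, feo2=0.17, feco2=0.04)
```

For these illustrative fractions the computed VO2 is roughly 3.9 L/min, a plausible value for a fit adult near VO2max.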

  14. Usefulness of automated biopsy guns in image-guided biopsy

    International Nuclear Information System (INIS)

    Lee, Jung Hyung; Rhee, Chang Soo; Lee, Sung Moon; Kim, Hong; Woo, Sung Ku; Suh, Soo Jhi

    1994-01-01

    To evaluate the usefulness of automated biopsy guns in image-guided biopsy of the lung, liver, pancreas and other organs, 160 biopsies at various anatomic sites were performed using automated biopsy devices: 95 under ultrasonographic (US) guidance and 65 under computed tomographic (CT) guidance. We retrospectively analyzed the histologic results and complications. Specimens were adequate for histopathologic diagnosis in 143 of the 160 patients (89.4%): diagnostic tissue was obtained in 130 (81.3%) and suggestive tissue in 13 (8.1%), while non-diagnostic tissue was obtained in 14 (8.7%) and inadequate tissue in only 3 (1.9%). There was no statistically significant difference between US-guided and CT-guided percutaneous biopsy. No significant complication occurred; we experienced mild complications in only 5 patients: 2 hematuria and 2 hematochezia after transrectal prostatic biopsy, and 1 minimal pneumothorax after CT-guided percutaneous lung biopsy, all of which resolved spontaneously. Image-guided biopsy using the automated biopsy gun was a simple, safe and accurate method of obtaining adequate specimens for histopathologic diagnosis.

  15. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images

    International Nuclear Information System (INIS)

    Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza

    2014-01-01

    Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using the MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless, we recommend repetition of this study using other techniques, such as intensity-based methods.
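    The evaluation metric used above, the per-landmark Euclidean distance between manual and automated coordinates, can be sketched directly; the coordinates below are invented, and units are assumed to be millimetres.

```python
import numpy as np

def landmark_errors(manual, auto):
    """Per-landmark Euclidean distance (mm) between manual (gold standard)
    and automated coordinates, given as (n_landmarks, 3) arrays."""
    return np.linalg.norm(manual - auto, axis=1)

manual = np.array([[10.0, 20.0, 30.0],
                   [15.0, 25.0, 35.0]])   # expert-placed landmarks (invented)
auto   = np.array([[11.0, 20.0, 30.0],
                   [15.0, 25.0, 38.0]])   # automated detections (invented)

errs = landmark_errors(manual, auto)
within_4mm = (errs < 4.0).mean()          # fraction meeting the <4 mm criterion
```

Averaging `errs` over repeated expert placements is what yields the per-landmark mean errors reported in the study.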

  16. Fully automated registration of vibrational microspectroscopic images in histologically stained tissue sections.

    Science.gov (United States)

    Yang, Chen; Niedieker, Daniel; Grosserüschkamp, Frederik; Horn, Melanie; Tannapfel, Andrea; Kallenbach-Thieltges, Angela; Gerwert, Klaus; Mosig, Axel

    2015-11-25

    In recent years, hyperspectral microscopy techniques such as infrared or Raman microscopy have been applied successfully for diagnostic purposes. In many of the corresponding studies, it is common practice to measure one and the same sample under different types of microscopes. Any joint analysis of the two image modalities requires overlaying the images, so that identical positions in the sample are located at the same coordinate in both images. This step, commonly referred to as image registration, has typically been performed manually in the absence of established automated computational registration tools. We propose a registration algorithm that addresses this registration problem, and demonstrate the robustness of our approach in different constellations of microscopes. First, we deal with subregion registration of Fourier Transform Infrared (FTIR) microscopic images in whole-slide histopathological staining images. Second, we register FTIR imaged cores of tissue microarrays in their histopathologically stained counterparts, and finally perform registration of Coherent anti-Stokes Raman spectroscopic (CARS) images within histopathological staining images. Our validation involves a large variety of samples obtained from colon, bladder, and lung tissue on three different types of microscopes, and demonstrates that our proposed method works fully automatically and is highly robust in different constellations of microscopes involving diverse types of tissue samples.

  17. An Automated Platform for High-Resolution Tissue Imaging Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.; Thomas, Mathew; Carson, James P.; Laskin, Julia

    2012-10-02

    An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe, by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High-resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, the high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed a spatial distribution of the abundant brain metabolite creatine in the white and gray matter that is consistent with literature data obtained using magnetic resonance spectroscopy.

  18. [Automation of chemical analysis in enology].

    Science.gov (United States)

    Dubernet, M

    1978-01-01

    Automated assays were introduced into oenology laboratories only recently. The first research on automating routine manual analyses was carried out by the I.N.R.A. Station of Dijon between 1969 and 1972. Further work followed, and in 1974 the first automatic analysers appeared in working laboratories. In all cases the continuous-flow method was used. The first assays to be implemented were volatile acidity, residual sugars and total SO2, at a throughput of 30 samples per hour. An original method for free SO2 was then proposed. At present, about a dozen laboratories in France use these assays. Automating the ethanol assay, which is very important in oenology, is particularly difficult; a new method using a thermometric analyser is being tested. Research on many other assays, such as tartaric, malic and lactic acids, glucose, fructose and glycerol, has been performed, especially by the I.N.R.A. Station in Narbonne, but these assays are not yet routine and at present no laboratory applies them. The cost and depreciation of the equipment, the change from traditional to automated methods, and the level of skill required of operators are now well understood. The reproducibility and accuracy of continuous-flow automatic assays allow sufficiently large laboratories to perform the growing number of analyses required for wine quality control.

  19. Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images

    Science.gov (United States)

    Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.

    2012-02-01

    Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting, and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
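The key property of normalized gradient fields, intensity insensitivity, admits a compact illustration: gradients are scaled to unit length so that only edge directions, not intensity levels, drive the match. A minimal 2-D numpy sketch on synthetic data (not the authors' 3-D implementation; names and values are hypothetical):

```python
import numpy as np

def ngf(image, eps=1e-2):
    """Normalized gradient field: gradients scaled toward unit length.

    `eps` suppresses noise gradients; only edge directions survive.
    """
    gy, gx = np.gradient(image.astype(float))
    norm = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / norm, gy / norm

def ngf_score(template, patch, eps=1e-2):
    """Sum of squared inner products of the two fields; intensity-scale invariant."""
    tx, ty = ngf(template, eps)
    px, py = ngf(patch, eps)
    return np.sum((tx * px + ty * py) ** 2)

def detect(image, template, eps=1e-2):
    """Exhaustive search for the template position maximizing the NGF score."""
    th, tw = template.shape
    best, best_pos = -1.0, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ngf_score(template, image[i:i+th, j:j+tw], eps)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos

# Demo: a bright disc is found despite a 3x intensity change in the image.
yy, xx = np.mgrid[:15, :15]
template = ((xx - 7)**2 + (yy - 7)**2 < 25).astype(float)
image = np.zeros((40, 40))
image[10:25, 18:33] = 3.0 * template    # intensity-scaled copy of the template
print(detect(image, template))
```

A production implementation would evaluate the correlation in the Fourier domain rather than by exhaustive sliding, but the similarity measure is the same.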

  20. Automated tracking of the vascular tree on DSA images

    International Nuclear Information System (INIS)

    Alperin, N.; Hoffmann, K.R.; Doi, K.

    1990-01-01

    Determination of the vascular tree structure is important for reconstruction of the three-dimensional vascular tree from biplane images, for assessment of the significance of a lesion, and for planning treatment for arteriovenous malformation. To automate these analyses, the authors of this paper are developing a method to determine the vascular tree structure from digital subtraction angiography (DSA) images. The authors have previously described a vessel tracking method based on the double-square-box technique. To improve the tracking accuracy, they have developed and integrated with the previous method a connectivity test and a guided-sector-search technique. The connectivity test, based on region growing techniques, eliminates tracking across nonvessel regions. The guided-sector-search method incorporates information from a larger area of the image to guide the search for the next tracking point.

  1. Automated analysis of small animal PET studies through deformable registration to an atlas

    International Nuclear Information System (INIS)

    Gutierrez, Daniel F.; Zaidi, Habib

    2012-01-01

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to bring relevant anatomical regions of rodent CT images from combined PET/CT studies into correspondence with the CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake was performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is
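The two registration-accuracy metrics named in the abstract, the Dice coefficient and the Hausdorff distance, have simple closed forms. A minimal numpy sketch on toy 2-D masks rather than mouse CT segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (brute force)."""
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Demo: two 4x4 squares shifted by one pixel.
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), bool); b[3:7, 2:6] = True
print(dice(a, b))                 # 12 overlapping pixels out of 16 + 16
pa, pb = np.argwhere(a), np.argwhere(b)
print(hausdorff(pa, pb))          # worst-case nearest-neighbour gap
```

For full 3-D volumes the brute-force distance matrix becomes large; libraries typically use distance transforms instead, but the definition is unchanged.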

  2. Automated determination of size and morphology information from soot transmission electron microscope (TEM)-generated images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Cheng; Chan, Qing N., E-mail: qing.chan@unsw.edu.au; Zhang, Renlin; Kook, Sanghoon; Hawkes, Evatt R.; Yeoh, Guan H. [UNSW, School of Mechanical and Manufacturing Engineering (Australia); Medwell, Paul R. [The University of Adelaide, Centre for Energy Technology (Australia)

    2016-05-15

    The thermophoretic sampling of particulates from hot media, coupled with transmission electron microscope (TEM) imaging, is a combined approach that is widely used to derive morphological information. The identification and the measurement of the particulates, however, can be complex when the TEM images are of low contrast, noisy, and have non-uniform background signal level. The image processing method can also be challenging and time consuming, when the samples collected have large variability in shape and size, or have some degree of overlapping. In this work, a three-stage image processing sequence is presented to facilitate time-efficient automated identification and measurement of particulates from the TEM grids. The proposed processing sequence is first applied to soot samples that were thermophoretically sampled from a laminar non-premixed ethylene-air flame. The parameter values that are required to be set to facilitate the automated process are identified, and sensitivity of the results to these parameters is assessed. The same analysis process is also applied to soot samples that were acquired from an externally irradiated laminar non-premixed ethylene-air flame, which have different geometrical characteristics, to assess the morphological dependence of the proposed image processing sequence. Using the optimized parameter values, statistical assessments of the automated results reveal that the largest discrepancies that are associated with the estimated values of primary particle diameter, fractal dimension, and prefactor values of the aggregates for the tested cases, are approximately 3, 1, and 10 %, respectively, when compared with the manual measurements.

  3. Automated determination of size and morphology information from soot transmission electron microscope (TEM)-generated images

    International Nuclear Information System (INIS)

    Wang, Cheng; Chan, Qing N.; Zhang, Renlin; Kook, Sanghoon; Hawkes, Evatt R.; Yeoh, Guan H.; Medwell, Paul R.

    2016-01-01

    The thermophoretic sampling of particulates from hot media, coupled with transmission electron microscope (TEM) imaging, is a combined approach that is widely used to derive morphological information. The identification and the measurement of the particulates, however, can be complex when the TEM images are of low contrast, noisy, and have non-uniform background signal level. The image processing method can also be challenging and time consuming, when the samples collected have large variability in shape and size, or have some degree of overlapping. In this work, a three-stage image processing sequence is presented to facilitate time-efficient automated identification and measurement of particulates from the TEM grids. The proposed processing sequence is first applied to soot samples that were thermophoretically sampled from a laminar non-premixed ethylene-air flame. The parameter values that are required to be set to facilitate the automated process are identified, and sensitivity of the results to these parameters is assessed. The same analysis process is also applied to soot samples that were acquired from an externally irradiated laminar non-premixed ethylene-air flame, which have different geometrical characteristics, to assess the morphological dependence of the proposed image processing sequence. Using the optimized parameter values, statistical assessments of the automated results reveal that the largest discrepancies that are associated with the estimated values of primary particle diameter, fractal dimension, and prefactor values of the aggregates for the tested cases, are approximately 3, 1, and 10 %, respectively, when compared with the manual measurements.

  4. An investigation of automated activation analysis

    International Nuclear Information System (INIS)

    Kuykendall, William E. Jr.; Wainerdi, Richard E.

    1962-01-01

    A study has been made of the possibility of applying computer techniques to the resolution of data from the complex gamma-ray spectra obtained in non-destructive activation analysis. The primary objective has been to use computer data-handling techniques to allow the existing analytical method to be used for rapid, routine, sensitive and economical elemental analyses. The necessary conditions for the satisfactory application of automated activation analysis have been evaluated and a computer programme has been completed which will process the data from samples containing a large number of different elements. To illustrate the speed of the handling sequence, the data from a sample containing four component elements can be processed in a matter of minutes, with the speed of processing limited primarily by the speed of the output printer. (author)
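In its simplest idealized form, the resolution of a composite gamma-ray spectrum into elemental contributions is a linear least-squares problem: the measured spectrum is modeled as a weighted sum of known single-element reference spectra. A toy numpy sketch with made-up spectra (not the 1962 programme itself):

```python
import numpy as np

# Hypothetical reference spectra for three elements over 8 energy channels.
ref = np.array([
    [9, 4, 1, 0, 0, 0, 0, 0],   # element A: low-energy peak
    [0, 1, 5, 9, 5, 1, 0, 0],   # element B: mid-energy peak
    [0, 0, 0, 0, 1, 4, 9, 4],   # element C: high-energy peak
], dtype=float)

true_amounts = np.array([2.0, 0.5, 1.5])
measured = true_amounts @ ref          # idealized mixed spectrum, no noise

# Resolve the mixture: least-squares solve of ref.T @ x = measured.
amounts, *_ = np.linalg.lstsq(ref.T, measured, rcond=None)
print(np.round(amounts, 6))
```

With real counting statistics one would weight the channels by their Poisson variance and constrain the amounts to be non-negative, but the linear-unmixing core is the same.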

  5. Automated Identification of Fiducial Points on 3D Torso Images

    Directory of Open Access Journals (Sweden)

    Manas M. Kawale

    2013-01-01

    Full Text Available Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D coordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship.

  6. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity because a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  7. Specdata: Automated Analysis Software for Broadband Spectra

    Science.gov (United States)

    Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.

    2017-06-01

    With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying multi-component mixtures that might result, for example, with the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open source, interactive tool which is designed to simplify and greatly accelerate the spectral analysis and discovery. Our software tool combines both automated and manual components that free the user from computation, while giving him/her considerable flexibility to assign, manipulate, interpret and export their analysis. The automated - and key - component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features, and subsequently assigns them to known molecules within an in-house database (Pickett .cat files, list of frequencies...), or those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, the control is then handed over to the user who can choose to accept, decline or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Exporting a full report of the analysis, or a peak file in which assigned lines are removed are among several options. A user may also save their progress to continue at another time. Additional features of SPECdata help the user to maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
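The database query step described here, assigning experimental peaks to the nearest catalogued transition, can be sketched as a simple tolerance search. The mini-catalog below is illustrative only; in SPECdata the entries would come from Pickett .cat files or Splatalogue queries, as the abstract notes:

```python
def assign_peaks(peaks_mhz, catalog, tol_mhz=0.1):
    """Assign each experimental peak to the nearest catalog line within tolerance.

    `catalog` maps species name -> list of rest frequencies (MHz). Returns a
    list of (peak, species-or-None) pairs; unmatched peaks are candidates for
    new chemical species.
    """
    assignments = []
    for peak in peaks_mhz:
        best = (None, tol_mhz)
        for species, freqs in catalog.items():
            for f in freqs:
                if abs(peak - f) < best[1]:
                    best = (species, abs(peak - f))
        assignments.append((peak, best[0]))
    return assignments

# Hypothetical mini-catalog with two well-known rotational lines.
catalog = {"OCS": [12162.98, 24325.92], "HC3N": [9098.33, 18196.31]}
peaks = [9098.35, 12162.95, 15000.00]
assignments = assign_peaks(peaks, catalog)
print(assignments)
```

For the tens of thousands of lines in a real catalog, the inner loop would be replaced by a sorted-array bisection or an interval index, but the assignment rule is the same.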

  8. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  9. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    Science.gov (United States)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could bring numerous advantages for the reconstruction of past environments: larger data sets become practical, and objectivity and fine-resolution sampling are gained. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes, probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. The efficiency of these variables was again tested using a leave-one-out classification procedure.
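The chain coding and difference-code steps described above can be illustrated compactly: Freeman 8-direction coding of an ordered boundary, with the perimeter and a rotation-invariant difference code derived from it. A sketch on a toy four-pixel square, not SEM data:

```python
import numpy as np

# Freeman 8-direction codes: index = code, value = (dy, dx).
MOVES = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(boundary):
    """Freeman chain code of an ordered, 8-connected closed boundary."""
    codes = []
    for (y0, x0), (y1, x1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(MOVES.index((y1 - y0, x1 - x0)))
    return codes

def perimeter(codes):
    """Even codes are unit steps, odd codes diagonal steps."""
    return sum(1.0 if c % 2 == 0 else np.sqrt(2) for c in codes)

def difference_code(codes):
    """Rotation-invariant first differences, as used for the echinate subtypes."""
    return [(b - a) % 8 for a, b in zip(codes, codes[1:] + codes[:1])]

# Demo: the four boundary pixels of a 2x2 square, traced clockwise.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
codes = chain_code(square)
print(codes, perimeter(codes), difference_code(codes))
```

Geometric variables such as compactness (perimeter squared over area) follow directly from these quantities.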

  10. Management issues in automated audit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, K.A.; Hochberg, J.G.; Wilhelmy, S.K.; McClary, J.F.; Christoph, G.G.

    1994-03-01

    This paper discusses management issues associated with the design and implementation of an automated audit analysis system that we use to detect security events. It gives the viewpoint of a team directly responsible for developing and managing such a system. We use Los Alamos National Laboratory's Network Anomaly Detection and Intrusion Reporter (NADIR) as a case in point. We examine issues encountered at Los Alamos, detail our solutions to them, and where appropriate suggest general solutions. After providing an introduction to NADIR, we explore four general management issues: cost-benefit questions, privacy considerations, legal issues, and system integrity. Our experiences are of general interest both to security professionals and to anyone who may wish to implement a similar system. While NADIR investigates security events, the methods used and the management issues are potentially applicable to a broad range of complex systems. These include those used to audit credit card transactions, medical care payments, and procurement systems.

  11. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, 4. and providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F 2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  12. Automated and effective content-based image retrieval for digital mammography.

    Science.gov (United States)

    Singh, Vibhav Prakash; Srivastava, Subodh; Srivastava, Rajeev

    2018-01-01

    Nowadays, huge numbers of mammograms are generated in hospitals for the diagnosis of breast cancer. Content-based image retrieval (CBIR) can contribute to more reliable diagnosis by classifying the query mammograms and retrieving similar mammograms already annotated by diagnostic descriptions and treatment results. Since labels, artifacts, and pectoral muscles present in mammograms can bias the retrieval procedures, automated detection and exclusion of these image noise patterns and/or non-breast regions is an essential pre-processing step. In this study, an efficient and automated CBIR system for mammograms was developed and tested. First, the pre-processing steps, including automatic labelling-artifact suppression, automatic pectoral muscle removal, and image enhancement using the adaptive median filter, were applied. Next, pre-processed images were segmented using a co-occurrence-thresholds-based seeded region growing algorithm. Furthermore, a set of image features, including shape, histogram-based statistical, Gabor, wavelet, and Gray Level Co-occurrence Matrix (GLCM) features, was computed from the segmented region. In order to select the optimal features, a minimum redundancy maximum relevance (mRMR) feature selection method was then applied. Finally, similar images were retrieved using the Euclidean distance similarity measure. The comparative experiments conducted with reference to the benchmark mammographic images analysis society (MIAS) database confirmed the effectiveness of the proposed work, with an average precision of 72% and 61.30% for the normal and abnormal classes of mammograms, respectively.
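The final retrieval step, ranking database mammograms by Euclidean distance between feature vectors, is straightforward to sketch. The feature values below are hypothetical placeholders for the shape/Gabor/wavelet/GLCM descriptors the abstract describes:

```python
import numpy as np

def retrieve(query_vec, database, k=3):
    """Rank database entries by Euclidean distance to the query feature vector.

    `database` is a list of (image_id, feature_vector) pairs; the k nearest
    IDs are returned, nearest first.
    """
    scored = [(np.linalg.norm(np.asarray(vec) - query_vec), image_id)
              for image_id, vec in database]
    scored.sort()
    return [image_id for _, image_id in scored[:k]]

# Hypothetical 3-dimensional feature vectors for three MIAS-style image IDs.
database = [
    ("mdb001", [0.10, 0.80, 0.30]),
    ("mdb002", [0.90, 0.20, 0.70]),
    ("mdb003", [0.12, 0.75, 0.33]),
]
query = np.array([0.11, 0.78, 0.31])
result = retrieve(query, database, k=2)
print(result)
```

In practice the features selected by mRMR would first be normalized to a common scale so that no single descriptor dominates the distance.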

  13. Automated otolith image classification with multiple views: an evaluation on Sciaenidae.

    Science.gov (United States)

    Wong, J Y; Chu, C; Chong, V C; Dhillon, S K; Loh, K H

    2016-08-01

    Combined multiple 2D views (proximal, anterior and ventral aspects) of the sagittal otolith are proposed here as a method to capture shape information for fish classification. Classification performance of single view compared with combined 2D views show improved classification accuracy of the latter, for nine species of Sciaenidae. The effects of shape description methods (shape indices, Procrustes analysis and elliptical Fourier analysis) on classification performance were evaluated. Procrustes analysis and elliptical Fourier analysis perform better than shape indices when single view is considered, but all perform equally well with combined views. A generic content-based image retrieval (CBIR) system that ranks dissimilarity (Procrustes distance) of otolith images was built to search query images without the need for detailed information of side (left or right), aspect (proximal or distal) and direction (positive or negative) of the otolith. Methods for the development of this automated classification system are discussed. © 2016 The Fisheries Society of the British Isles.
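The Procrustes distance used here for ranking dissimilarity can be sketched directly: center, scale-normalize and rotationally align two landmark configurations, then measure the residual. A minimal numpy sketch with hypothetical eight-landmark outlines, not real otolith data; allowing reflection in the fit conveniently makes the comparison side-agnostic, in the spirit of the abstract's search without side information:

```python
import numpy as np

def procrustes_distance(shape1, shape2):
    """Full Procrustes distance between two landmark configurations.

    Translation, scale and rotation (including reflection) are factored out;
    the residual is the shape dissimilarity used for ranking.
    """
    a = shape1 - shape1.mean(axis=0)
    b = shape2 - shape2.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # Orthogonal Procrustes: optimal alignment via SVD of the cross-covariance.
    s = np.linalg.svd(a.T @ b, compute_uv=False)
    return np.sqrt(max(0.0, 1.0 - s.sum() ** 2))

# Hypothetical eight-landmark outline; the second copy is rotated, scaled and
# translated, so its Procrustes distance to the first should be ~0.
shape1 = np.array([[0, 1], [0.7, 0.7], [1, 0], [0.7, -0.7],
                   [0, -1], [-0.7, -0.7], [-1, 0], [-0.7, 0.7]], float)
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape2 = 2.5 * shape1 @ R.T + [4.0, -1.0]
shape3 = shape1.copy()
shape3[0] = [0.0, 1.5]       # one displaced landmark: a genuinely different shape
print(procrustes_distance(shape1, shape2), procrustes_distance(shape1, shape3))
```

A CBIR system like the one described would compute this distance between the query outline and every catalogued outline, then return the lowest-distance matches.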

  14. Automated analysis of invadopodia dynamics in live cells

    Directory of Open Access Journals (Sweden)

    Matthew E. Berginski

    2014-07-01

    Full Text Available Multiple cell types form specialized protein complexes that are used by the cell to actively degrade the surrounding extracellular matrix. These structures are called podosomes or invadopodia and collectively referred to as invadosomes. Due to their potential importance in both healthy physiology as well as in pathological conditions such as cancer, the characterization of these structures has been of increasing interest. Following early descriptions of invadopodia, assays were developed which labelled the matrix underneath metastatic cancer cells allowing for the assessment of invadopodia activity in motile cells. However, characterization of invadopodia using these methods has traditionally been done manually with time-consuming and potentially biased quantification methods, limiting the number of experiments and the quantity of data that can be analysed. We have developed a system to automate the segmentation, tracking and quantification of invadopodia in time-lapse fluorescence image sets at both the single invadopodia level and whole cell level. We rigorously tested the ability of the method to detect changes in invadopodia formation and dynamics through the use of well-characterized small molecule inhibitors, with known effects on invadopodia. Our results demonstrate the ability of this analysis method to quantify changes in invadopodia formation from live cell imaging data in a high throughput, automated manner.

  15. Automated seed detection and three-dimensional reconstruction. I. Seed localization from fluoroscopic images or radiographs

    International Nuclear Information System (INIS)

    Tubic, Dragan; Zaccarin, Andre; Pouliot, Jean; Beaulieu, Luc

    2001-01-01

    An automated procedure for the detection of the position and the orientation of radioactive seeds on fluoroscopic images or scanned radiographs is presented. The extracted positions of seed centers and the orientations are used for three-dimensional reconstruction of permanent prostate implants. The extraction procedure requires several steps: correction of image intensifier distortions, normalization, background removal, automatic threshold selection, thresholding, and finally, moment analysis and classification of the connected components. The algorithm was tested on 75 fluoroscopic images. The results show that, on average, 92% of the seeds are detected automatically. The orientation is found with an error smaller than 5 deg. for 75% of the seeds. The orientation of overlapping seeds (10%) should be considered as an estimate at best. The image processing procedure can also be used for seed or catheter detection in CT images, with minor modifications
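The moment analysis mentioned for estimating seed orientation has a standard closed form: the principal-axis angle follows from the second-order central moments of the thresholded blob. A minimal numpy sketch on synthetic masks, not fluoroscopic data:

```python
import numpy as np

def orientation_deg(mask):
    """Principal-axis orientation of a binary blob from second-order central moments.

    Angle is measured in image coordinates (y grows downward), in degrees.
    """
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x**2).mean(), (y**2).mean(), (x*y).mean()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

# Demo: a thin horizontal "seed" 9 pixels long, and the same seed on a diagonal.
horizontal = np.zeros((15, 15), bool)
horizontal[7, 3:12] = True
diagonal = np.zeros((15, 15), bool)
for i in range(9):
    diagonal[3 + i, 3 + i] = True
print(orientation_deg(horizontal), orientation_deg(diagonal))
```

The same moments give an elongation measure (ratio of principal eigenvalues), which is one way a classifier can separate single seeds from overlapping clusters.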

  16. Clinical validation of semi-automated software for volumetric and dynamic contrast enhancement analysis of soft tissue venous malformations on magnetic resonance imaging examination

    Energy Technology Data Exchange (ETDEWEB)

    Caty, Veronique [Hopital Maisonneuve-Rosemont, Universite de Montreal, Department of Radiology, Montreal, QC (Canada); Kauffmann, Claude; Giroux, Marie-France; Oliva, Vincent; Therasse, Eric [Centre Hospitalier de l' Universite de Montreal (CHUM), Universite de Montreal and Research Centre, CHUM (CRCHUM), Department of Radiology, Montreal, QC (Canada); Dubois, Josee [Centre Hospitalier Universitaire Sainte-Justine et Universite de Montreal, Department of Radiology, Montreal, QC (Canada); Mansour, Asmaa [Institut de Cardiologie de Montreal, Heart Institute Coordinating Centre, Montreal, QC (Canada); Piche, Nicolas [Object Research System, Montreal, QC (Canada); Soulez, Gilles [Centre Hospitalier de l' Universite de Montreal (CHUM), Universite de Montreal and Research Centre, CHUM (CRCHUM), Department of Radiology, Montreal, QC (Canada); CHUM - Hopital Notre-Dame, Department of Radiology, Montreal, Quebec (Canada)

    2014-02-15

    To evaluate venous malformation (VM) volume and contrast-enhancement analysis on magnetic resonance imaging (MRI) compared with diameter evaluation. Baseline MRI was undertaken in 44 patients, 20 of whom were followed by MRI after sclerotherapy. All patients underwent short-tau inversion recovery (STIR) acquisitions and dynamic contrast assessment. VM diameters in three orthogonal directions were measured to obtain the largest and mean diameters. Volumetric reconstruction of VM was generated from two orthogonal STIR sequences and fused with acquisitions after contrast medium injection. Reproducibility (intraclass correlation coefficients [ICCs]) of diameter and volume measurements was estimated. VM size variations in diameter and volume after sclerotherapy and contrast enhancement before sclerotherapy were compared in patients with clinical success or failure. Inter-observer ICCs were similar for diameter and volume measurements at baseline and follow-up (range 0.87-0.99). Higher percentages of size reduction after sclerotherapy were observed with volume (32.6 ± 30.7 %) than with diameter measurements (14.4 ± 21.4 %; P = 0.037). Contrast enhancement values were estimated at 65.3 ± 27.5 % and 84 ± 13 % in patients with clinical failure and success respectively (P = 0.056). Venous malformation volume was as reproducible as diameter measurement and more sensitive in detecting therapeutic responses. Patients with better clinical outcome tend to have stronger malformation enhancement. (orig.)
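
    Inter-observer reproducibility of this kind is typically summarized with an intraclass correlation coefficient. A minimal sketch, assuming the common two-way random, absolute-agreement form ICC(2,1) and entirely hypothetical volume measurements:

```python
import numpy as np

def icc2_1(X):
    # Two-way random, absolute-agreement ICC (single measures) for an
    # n-subjects x k-raters matrix, from the standard ANOVA mean squares.
    n, k = X.shape
    mean_r = X.mean(axis=1)   # per-subject means
    mean_c = X.mean(axis=0)   # per-rater means
    gm = X.mean()
    msr = k * ((mean_r - gm) ** 2).sum() / (n - 1)
    msc = n * ((mean_c - gm) ** 2).sum() / (k - 1)
    mse = ((X - mean_r[:, None] - mean_c[None, :] + gm) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical VM volume measurements (mL) by two observers for six patients.
vols = np.array([[12.1, 12.4],
                 [33.0, 31.8],
                 [ 8.7,  9.0],
                 [54.2, 55.1],
                 [21.5, 20.9],
                 [40.3, 41.0]])
icc = icc2_1(vols)   # near 1: observers agree closely relative to patient spread
```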

  17. ASteCA: Automated Stellar Cluster Analysis

    Science.gov (United States)

    Perren, G. I.; Vázquez, R. A.; Piatti, A. E.

    2015-04-01

    We present the Automated Stellar Cluster Analysis package (ASteCA), a suite of tools designed to fully automate the standard tests applied to stellar clusters to determine their basic parameters. The set of functions included in the code make use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing, through a statistical estimator, its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process, based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm, is also present, allowing ASteCA to provide accurate estimates of a cluster's metallicity, age, extinction and distance along with their uncertainties. To validate the code we applied it to a large set of over 400 synthetic MASSCLEAN clusters with varying degrees of field star contamination, as well as a smaller set of 20 observed Milky Way open clusters (Berkeley 7, Bochum 11, Czernik 26, Czernik 30, Haffner 11, Haffner 19, NGC 133, NGC 2236, NGC 2264, NGC 2324, NGC 2421, NGC 2627, NGC 6231, NGC 6383, NGC 6705, Ruprecht 1, Tombaugh 1, Trumpler 1, Trumpler 5 and Trumpler 14) studied in the literature. The results show that ASteCA is able to recover cluster parameters with acceptable precision even for clusters affected by substantial field star contamination. ASteCA is written in Python and is made available as open source code that can be downloaded ready to use from its official site.
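
    The genetic-algorithm fitting step can be illustrated on a toy problem. The sketch below recovers a single parameter of a model curve using selection, blend crossover and Gaussian mutation; the model, population size and rates are invented and far simpler than ASteCA's actual isochrone fitting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for isochrone fitting: recover the parameter p of a model
# curve y = p * x**2 from noisy "observations" with a tiny genetic algorithm.
x_obs = np.linspace(0.1, 2.0, 30)
p_true = 1.7
y_obs = p_true * x_obs ** 2 + rng.normal(0.0, 0.02, x_obs.size)

def fitness(p):
    # Negative sum of squared residuals: higher is better.
    return -np.sum((p * x_obs ** 2 - y_obs) ** 2)

pop = rng.uniform(0.0, 5.0, size=40)                 # initial population
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]          # selection: keep the best half
    mates = rng.permutation(parents)
    children = 0.5 * (parents + mates)               # crossover: blend random pairs
    children = children + rng.normal(0.0, 0.05, children.size)  # mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```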

  18. Semi-Automated Analysis of Diaphragmatic Motion with Dynamic Magnetic Resonance Imaging in Healthy Controls and Non-Ambulant Subjects with Duchenne Muscular Dystrophy

    Directory of Open Access Journals (Sweden)

    Courtney A. Bishop

    2018-01-01

    Subjects with Duchenne Muscular Dystrophy (DMD) suffer from progressive muscle damage leading to diaphragmatic weakness that ultimately requires ventilation. Emerging treatments have generated interest in better characterizing the natural history of respiratory impairment in DMD and responses to therapy. Dynamic (cine) Magnetic Resonance Imaging (MRI) may provide a more sensitive measure of diaphragm function in DMD than the commonly used spirometry. This study presents an analysis pipeline for measuring parameters of diaphragmatic motion from dynamic MRI and its application to investigate MRI measures of respiratory function in both healthy controls and non-ambulant DMD boys. We scanned 13 non-ambulant DMD boys and 10 age-matched healthy male volunteers at baseline, with a subset (n = 10, 10, 8) of the DMD subjects also assessed 3, 6, and 12 months later. Spirometry-derived metrics including forced vital capacity were recorded. The MRI-derived measures included the lung cross-sectional area (CSA), the anterior, central, and posterior lung lengths in the sagittal imaging plane, and the diaphragm length over the time-course of the dynamic MRI. Regression analyses demonstrated strong linear correlations between lung CSA and the length measures over the respiratory cycle, with a reduction of these correlations in DMD, and diaphragmatic motions that contribute less efficiently to changing lung capacity in DMD. MRI measures of pulmonary function were reduced in DMD, controlling for height differences between the groups: at maximal inhalation, the maximum CSA and the total distance of motion of the diaphragm were 45% and 37% smaller. MRI measures of pulmonary function were correlated with spirometry data and showed relationships with disease progression surrogates of age and months non-ambulatory, suggesting that they provide clinically meaningful information. Changes in the MRI measures over 12 months were consistent with weakening of
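
    The regression analysis between lung CSA and the length measures amounts to correlating paired time courses. A minimal sketch with hypothetical values over one respiratory cycle:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient of two paired series.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc * xc).sum() * (yc * yc).sum()))

# Hypothetical dynamic-MRI time courses over one respiratory cycle:
# lung cross-sectional area (cm^2) and posterior lung length (cm).
csa    = [180, 195, 215, 240, 255, 240, 215, 195, 182]
length = [20.1, 21.0, 22.4, 24.0, 25.1, 24.2, 22.5, 21.1, 20.2]
r = pearson_r(csa, length)   # near 1: length tracks CSA through the cycle
```

    A weaker correlation, as reported for the DMD group, would indicate diaphragmatic motion contributing less efficiently to the change in lung capacity.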

  19. Automated and unsupervised detection of malarial parasites in microscopic images

    Directory of Open Access Journals (Sweden)

    Purwar Yashasvi

    2011-12-01

    Abstract Background Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria, of which manual microscopy is considered to be the gold standard. However, due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost and one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. Method A method based on digital image processing of Giemsa-stained thin smear images is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts: enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification, with the main advantage being its ability to carry out the diagnosis in an unsupervised manner and yet have high sensitivity, thus reducing cases of false negatives. Results The image-based method was tested on more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of the method, it requires minimal human intervention, thus speeding up the whole process of diagnosis. Overall sensitivity to capture cases of malaria is 100% and specificity ranges from 50-88% for all species of malaria parasites. Conclusion The image-based screening method will speed up the whole process of diagnosis and is advantageous over laboratory procedures that are prone to errors and where pathological expertise is minimal. Further this method
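
    The reported screening performance reduces to standard sensitivity and specificity calculations over slide-level labels. A small sketch with toy labels (the counts are invented, not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    # 1 = malaria positive, 0 = negative, at the slide level.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: a screening-oriented method keeps sensitivity at 1.0
# while false positives lower the specificity.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, pred)
```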

  20. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high-risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more

  1. Automated extraction of chemical structure information from digital raster images

    Science.gov (United States)

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader: a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research

  2. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader: a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  3. Development of a methodology for automated assessment of the quality of digitized images in mammography

    International Nuclear Information System (INIS)

    Santana, Priscila do Carmo

    2010-01-01

    The process of evaluating the quality of radiographic images in general, and mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. The purpose of this study is to develop a computational methodology to automate the process of assessing the quality of mammography images through digital image processing (DIP) techniques, using an existing image processing environment (ImageJ). With the application of DIP techniques it was possible to extract geometric and radiometric characteristics of the evaluated images. The evaluated parameters include spatial resolution, high-contrast detail, low-contrast threshold, linear detail of low contrast, tumor masses, contrast ratio and background optical density. The results obtained by this method were compared with the results of the visual evaluations performed by the Health Surveillance of Minas Gerais. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for reducing or eliminating the subjectivity present in the visual assessment methodology currently in use. (author)
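
    Of the evaluated parameters, contrast ratio is straightforward to derive from region-of-interest means. A sketch on a synthetic phantom patch, using one common definition of contrast (the record does not specify which definition the methodology uses):

```python
import numpy as np

# Synthetic phantom patch: uniform background with a low-contrast disc insert.
img = np.full((100, 100), 120.0)
yy, xx = np.ogrid[:100, :100]
disc = (yy - 50) ** 2 + (xx - 50) ** 2 <= 15 ** 2
img[disc] = 132.0

# Contrast ratio from object and background region-of-interest means.
roi_obj = img[disc].mean()
roi_bg = img[~disc].mean()
contrast_ratio = (roi_obj - roi_bg) / roi_bg
```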

  4. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high-resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view: 183 light microscope images containing 255 diatom particles were examined; of these, 216 diatom valves and valve fragments were processed, with 170 properly analyzed and focused upon by the software. Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting that the software has an approximately five-fold advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
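
    The reported figures can be checked directly; the roughly five-fold advantage follows from the particle throughput of each approach:

```python
# Counts and times reported in the proof-of-concept study.
manual_particles, manual_seconds = 255, 400
auto_particles, auto_seconds = 216, 68
properly_analyzed = 170

manual_rate = manual_particles / manual_seconds    # particles per second
auto_rate = auto_particles / auto_seconds
speedup = auto_rate / manual_rate                  # about five-fold

segmented_fraction = properly_analyzed / manual_particles   # about 0.67
```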

  5. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    Science.gov (United States)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90 ± 9% sensitivity and 85 ± 9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a "generalization" of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.
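
    The classification step, linear discriminant analysis on selected band intensities, can be sketched for the two-class case. The band values below are synthetic Gaussian stand-ins, not Raman data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic intensities at two discriminative "bands" for two classes.
normal = rng.normal([1.0, 2.0], 0.3, size=(60, 2))
bcc    = rng.normal([2.0, 3.2], 0.3, size=(60, 2))

def fisher_lda(a, b):
    # Fisher discriminant direction and a midpoint threshold for two classes.
    sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)  # within-class scatter
    w = np.linalg.solve(sw, b.mean(axis=0) - a.mean(axis=0))
    thr = 0.5 * (a.mean(axis=0) + b.mean(axis=0)) @ w
    return w, thr

w, thr = fisher_lda(normal, bcc)
scores = np.vstack([normal, bcc]) @ w
labels = np.r_[np.zeros(60, dtype=bool), np.ones(60, dtype=bool)]
accuracy = float(((scores > thr) == labels).mean())
```

    Projecting each spectrum onto the discriminant and thresholding pixel-by-pixel is what turns the classifier into a tumor map.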

  6. Construction of a pathological risk model of occult lymph node metastases for prognostication by semi-automated image analysis of tumor budding in early-stage oral squamous cell carcinoma.

    Science.gov (United States)

    Pedersen, Nicklas Juel; Jensen, David Hebbelstrup; Lelkaitis, Giedrius; Kiss, Katalin; Charabi, Birgitte; Specht, Lena; von Buchwald, Christian

    2017-03-14

    It is challenging to identify at diagnosis those patients with early oral squamous cell carcinoma (OSCC) who have a poor prognosis and those who have a high risk of harboring occult lymph node metastases. The aim of this study was to develop a standardized and objective digital scoring method to evaluate the predictive value of tumor budding. We developed a semi-automated image-analysis algorithm, Digital Tumor Bud Count (DTBC), to evaluate tumor budding. The algorithm was tested in 222 consecutive patients with early-stage OSCC and the major endpoints were overall survival (OS) and progression-free survival (PFS). We subsequently constructed and cross-validated a binary logistic regression model and evaluated its clinical utility by decision curve analysis. A high DTBC was an independent predictor of both poor OS and PFS in a multivariate Cox regression model. The logistic regression model was able to identify patients with occult lymph node metastases with an area under the curve (AUC) of 0.83 (95% CI: 0.78-0.89, P <0.001) and a 10-fold cross-validated AUC of 0.79. Compared to other known histopathological risk factors, the DTBC had a higher diagnostic accuracy. The proposed, novel risk model could be used as a guide to identify patients who would benefit from an up-front neck dissection.
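
    The reported AUC can be computed without an explicit ROC sweep via the rank-sum identity. A sketch with hypothetical scores and labels:

```python
import numpy as np

def auc(scores, labels):
    # Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    # the fraction of (positive, negative) pairs ranked correctly.
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (pos.size * neg.size))

# Hypothetical bud-count scores; label 1 = occult nodal metastasis present.
scores = [3, 8, 12, 5, 15, 9, 2, 11]
labels = [0, 1, 1, 0, 1, 0, 0, 1]
a = auc(scores, labels)   # 15 of 16 pairs ranked correctly
```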

  7. Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images

    Science.gov (United States)

    Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel

    2016-02-01

    Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
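
    A single-frequency Gabor analysis of striated images can be sketched as follows: a complex Gabor kernel responds strongly where the local stripe period matches its tuning and weakly otherwise. The stripe period, kernel size and tuning below are invented, not the paper's settings:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=21):
    # Complex single-frequency Gabor kernel: Gaussian envelope times a
    # plane-wave carrier with frequency `freq` (cycles/pixel) at angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.exp(2j * np.pi * freq * xr)

# Synthetic sarcomere-like stripes with a 5-pixel period; the magnitude
# response is strong at the matching frequency and weak off-frequency.
img = np.cos(2 * np.pi * 0.2 * np.arange(64))[None, :] * np.ones((64, 1))
patch = img[21:42, 21:42]
resp_match = abs(np.sum(patch * gabor_kernel(0.2, 0.0, 4.0)))
resp_off = abs(np.sum(patch * gabor_kernel(0.05, 0.0, 4.0)))
```

    Mapping such magnitude responses across the whole image, then normalizing, is the spirit of the per-pixel disorder map described above.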

  8. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Digital stereo-photogrammetry allows automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point-search algorithm, identical points found across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo model can be computed automatically. With the help of 3D reference points or distances at the object, or a defined camera-base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be carried out automatically using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images.
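
    The ICP fitting of partial point clouds repeatedly solves a rigid-alignment subproblem. A sketch of that core step (Kabsch/SVD) with known correspondences on a synthetic cloud; full ICP would re-estimate correspondences each iteration:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t mapping src onto dst
    # (Kabsch/SVD): the alignment step at the core of each ICP iteration.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(1)
cloud = rng.uniform(-1.0, 1.0, size=(50, 3))    # one partial point cloud
a = np.radians(20.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([0.3, -0.1, 0.8])

R, t = best_rigid_transform(cloud, moved)
err = np.linalg.norm(cloud @ R.T + t - moved, axis=1).max()
```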

  9. Semi-automated International Cartilage Repair Society scoring of equine articular cartilage lesions in optical coherence tomography images.

    Science.gov (United States)

    Te Moller, N C R; Pitkänen, M; Sarin, J K; Väänänen, S; Liukkonen, J; Afara, I O; Puhakka, P H; Brommer, H; Niemelä, T; Tulamo, R-M; Argüelles Capilla, D; Töyräs, J

    2017-07-01

    Arthroscopic optical coherence tomography (OCT) is a promising tool for the detailed evaluation of articular cartilage injuries. However, OCT-based articular cartilage scoring still relies on the operator's visual estimation. To test the hypothesis that semi-automated International Cartilage Repair Society (ICRS) scoring of chondral lesions seen in OCT images could enhance intra- and interobserver agreement of scoring and its accuracy. Validation study using equine cadaver tissue. Osteochondral samples (n = 99) were prepared from 18 equine metacarpophalangeal joints and imaged using OCT. Custom-made software was developed for semi-automated ICRS scoring of cartilage lesions on OCT images. Scoring was performed visually and semi-automatically by five observers, and levels of inter- and intraobserver agreement were calculated. Subsequently, OCT-based scores were compared with ICRS scores based on light microscopy images of the histological sections of matching locations (n = 82). When semi-automated scoring of the OCT images was performed by multiple observers, mean levels of intraobserver and interobserver agreement were higher than those achieved with visual OCT scoring (83% vs. 77% and 74% vs. 33%, respectively). Histology-based scores from matching regions of interest agreed better with visual OCT-based scoring than with semi-automated OCT scoring; however, the accuracy of the software was improved by optimising the threshold combinations used to determine the ICRS score. Images were obtained from cadavers. Semi-automated scoring software improved the reproducibility of ICRS scoring of chondral lesions in OCT images and made scoring less observer-dependent. The image analysis and segmentation techniques adopted in this study warrant further optimisation to achieve better accuracy with semi-automated ICRS scoring. In addition, studies on in vivo applications are required. © 2016 EVJ Ltd.

  10. Automated MRI Volumetric Analysis in Patients with Rasmussen Syndrome.

    Science.gov (United States)

    Wang, Z I; Krishnan, B; Shattuck, D W; Leahy, R M; Moosa, A N V; Wyllie, E; Burgess, R C; Al-Sharif, N B; Joshi, A A; Alexopoulos, A V; Mosher, J C; Udayasankar, U; Jones, S E

    2016-12-01

    Rasmussen syndrome, also known as Rasmussen encephalitis, is typically associated with volume loss of the affected hemisphere of the brain. Our aim was to apply automated quantitative volumetric MR imaging analyses to patients diagnosed with Rasmussen encephalitis, to determine the predictive value of lobar volumetric measures and to assess regional atrophy differences as well as monitor disease progression by using these measures. Nineteen patients (42 scans) with diagnosed Rasmussen encephalitis were studied. We used 2 control groups: one with 42 age- and sex-matched healthy subjects and the other with 42 epileptic patients without Rasmussen encephalitis with the same disease duration as patients with Rasmussen encephalitis. Volumetric analysis was performed on T1-weighted images by using BrainSuite. Ratios of volumes from the affected hemisphere divided by those from the unaffected hemisphere were used as input to a logistic regression classifier, which was trained to discriminate patients from controls. Using the classifier, we compared the predictive accuracy of all the volumetric measures. These ratios were used to further assess regional atrophy differences and correlate with epilepsy duration. Interhemispheric and frontal lobe ratios had the best prediction accuracy for separating patients with Rasmussen encephalitis from healthy controls and patient controls without Rasmussen encephalitis. The insula showed significantly more atrophy compared with all the other cortical regions. Patients with longitudinal scans showed progressive volume loss in the affected hemisphere. Atrophy of the frontal lobe and insula correlated significantly with epilepsy duration. Automated quantitative volumetric analysis provides accurate separation of patients with Rasmussen encephalitis from healthy controls and epileptic patients without Rasmussen encephalitis, and thus may assist the diagnosis of Rasmussen encephalitis. Volumetric analysis could also be included as part of

  11. Automated low-contrast pattern recognition algorithm for magnetic resonance image quality assessment.

    Science.gov (United States)

    Ehman, Morgan O; Bao, Zhonghao; Stiving, Scott O; Kasam, Mallik; Lanners, Dianna; Peterson, Teresa; Jonsgaard, Renee; Carter, Rickey; McGee, Kiaran P

    2017-08-01

    Low contrast (LC) detectability is a common test criterion for diagnostic radiologic quality control (QC) programs. Automation of this test is desirable in order to reduce human variability and to speed up analysis. However, automation is challenging due to the complexity of the human visual perception system and the ability to create algorithms that mimic this response. This paper describes the development and testing of an automated LC detection algorithm for use in the analysis of magnetic resonance (MR) images of the American College of Radiology (ACR) QC phantom. The detection algorithm includes fuzzy logic decision processes and various edge detection methods to quantify LC detectability. Algorithm performance was first evaluated using a single LC phantom MR image with the addition of incremental zero mean Gaussian noise resulting in a total of 200 images. A c-statistic was calculated to determine the role of CNR to indicate when the algorithm would detect ten spokes. To evaluate inter-rater agreement between experienced observers and the algorithm, a blinded observer study was performed on 196 LC phantom images acquired from nine clinical MR scanners. The nine scanners included two MR manufacturers and two field strengths (1.5 T, 3.0 T). Inter-rater and algorithm-rater agreement was quantified using Krippendorff's alpha. For the Gaussian noise added data, CNR ranged from 0.519 to 11.7 with CNR being considered an excellent discriminator of algorithm performance (c-statistic = 0.9777). Reviewer scoring of the clinical phantom data resulted in an inter-rater agreement of 0.673 with the agreement between observers and algorithm equal to 0.652, both of which indicate significant agreement. This study demonstrates that the detection of LC test patterns for MR imaging QC programs can be successfully developed and that their response can model the human visual detection system of expert MR QC readers. © 2017 American Association of Physicists in Medicine.
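
    The CNR used as a discriminator above is computed from object and background regions of interest. A sketch on synthetic noisy patches (all values invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic low-contrast test patch: background and a slightly brighter
# object region, both with additive Gaussian noise.
bg = rng.normal(100.0, 5.0, size=(40, 40))
obj = rng.normal(110.0, 5.0, size=(10, 10))

cnr = (obj.mean() - bg.mean()) / bg.std()   # roughly 2 for these settings
```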

  12. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    Science.gov (United States)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
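
    Proper motion estimation from a linear component trajectory reduces to fitting separation against epoch. A sketch with hypothetical epochs and radial separations:

```python
import numpy as np

# Hypothetical radial separations (mas) of one jet component at six epochs (yr).
epochs = np.array([2000.1, 2001.3, 2002.2, 2003.0, 2004.4, 2005.1])
sep = np.array([0.52, 0.71, 0.86, 0.99, 1.21, 1.33])

# Least-squares linear fit: the slope is the proper motion in mas/yr.
A = np.vstack([epochs - epochs[0], np.ones_like(epochs)]).T
(mu, sep0), *_ = np.linalg.lstsq(A, sep, rcond=None)
```

    Detecting whether a trajectory is better described by a non-linear path, as the STRIP algorithm does, would compare such fits across model orders.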

  13. Automated and connected vehicle implications and analysis.

    Science.gov (United States)

    2017-05-01

    Automated and connected vehicles (ACV) and, in particular, autonomous vehicles have captured the interest of the public, industry and transportation authorities. ACVs can significantly reduce accidents, fuel consumption, pollution and the costs o...

  14. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. Copyright © 2015 Elsevier B.V. All rights reserved.
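    The abstract does not spell out its overlap-ratio formula; a common choice for comparing corresponding tissue volumes is the Jaccard index, sketched here on toy binary masks:

```python
import numpy as np

def overlap_ratio(a, b):
    """Overlap ratio (Jaccard index) of two binary masks: |A ∩ B| / |A ∪ B|."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Toy "volumes": two overlapping 10x10x10 blocks inside a 20^3 grid.
a = np.zeros((20, 20, 20), dtype=bool); a[0:10, 0:10, 0:10] = True
b = np.zeros((20, 20, 20), dtype=bool); b[5:15, 0:10, 0:10] = True
print(round(overlap_ratio(a, b), 4))  # 0.3333
```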

  15. An Automated Tracking Approach for Extraction of Retinal Vasculature in Fundus Images

    Directory of Open Access Journals (Sweden)

    Alireza Osareh

    2010-01-01

    Full Text Available Purpose: To present a novel automated method for tracking and detection of retinal blood vessels in fundus images. Methods: For every pixel in retinal images, a feature vector was computed utilizing multiscale analysis based on Gabor filters. To classify the pixels based on their extracted features as vascular or non-vascular, various classifiers including Quadratic Gaussian (QG), K-Nearest Neighbors (KNN), and Neural Networks (NN) were investigated. The accuracy of classifiers was evaluated using Receiver Operating Characteristic (ROC) curve analysis in addition to sensitivity and specificity measurements. We opted for an NN model due to its superior performance in classification of retinal pixels as vascular and non-vascular. Results: The proposed method achieved an overall accuracy of 96.9%, sensitivity of 96.8%, and specificity of 97.3% for identification of retinal blood vessels using a dataset of 40 images. The area under the ROC curve reached a value of 0.967. Conclusion: Automated tracking and identification of retinal blood vessels based on Gabor filters and neural network classifiers seems highly successful. Through a comprehensive optimization process of operational parameters, our proposed scheme does not require any user intervention and has consistent performance for both normal and abnormal images.
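    A per-pixel multiscale Gabor feature vector of the kind the Methods describe can be sketched as follows; the kernel sizes, frequencies and orientations here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real (even) Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier runs along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def pixel_features(image, row, col,
                   frequencies=(0.1, 0.2, 0.3),
                   thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Feature vector for one pixel: the filter response at every
    (frequency, orientation) pair, computed on the local patch."""
    feats = []
    for f in frequencies:
        for th in thetas:
            k = gabor_kernel(f, th)
            half = k.shape[0] // 2
            patch = image[row - half:row + half + 1, col - half:col + half + 1]
            feats.append(float((patch * k).sum()))
    return np.array(feats)

# Hypothetical image: a vertical sinusoidal grating.
img = np.cos(2 * np.pi * 0.2 * np.arange(64))[None, :].repeat(64, axis=0)
fv = pixel_features(img, 32, 32)
print(fv.shape)  # (12,) — one response per scale/orientation pair
```

    A classifier (NN in the paper) would then be trained on such vectors labelled vascular/non-vascular.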

  16. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    Science.gov (United States)

    Singh, Preetpal

    to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high resolution imagery for processing with MATLAB Image Processing toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.
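    The thesis's measurement procedure is not given in this excerpt; as a hedged sketch of how an edge-based thickness measurement might work on a single image column (scale and threshold values hypothetical):

```python
import numpy as np

def ice_thickness_cm(column, px_per_cm, grad_thresh=50):
    """Locate the two strongest brightness transitions along a vertical pixel
    column (e.g. air/ice and ice/water boundaries) and convert the pixel
    distance between them to centimetres using a known camera scale."""
    g = np.abs(np.diff(column.astype(float)))
    edges = np.flatnonzero(g > grad_thresh)
    if edges.size < 2:
        raise ValueError("fewer than two edges found")
    return (edges[-1] - edges[0]) / px_per_cm

# Hypothetical profile: dark water (20), bright ice (200) from pixel 100 to 250.
col = np.full(400, 20)
col[100:250] = 200
print(ice_thickness_cm(col, px_per_cm=10.0))  # 15.0
```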

  17. Automated result analysis in radiographic testing of NPPs' welded joints

    International Nuclear Information System (INIS)

    Skomorokhov, A.O.; Nakhabov, A.V.; Belousov, P.A.

    2009-01-01

    The article presents development results of algorithms for automated image interpretation of radiographic inspections of NPP welded joints. The developed algorithms are based on state-of-the-art pattern recognition methods. The paper covers automatic radiographic image segmentation, defect detection and defect parameter evaluation. Testing results of the developed algorithms on actual radiographic images of welded joints with significant variation of defect parameters are given [ru

  18. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Baofeng Li

    2009-01-01

    Full Text Available Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and parallelizing memory access. A proof-of-concept implementation of the architecture with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed with the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.
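    The correlation coefficient the architecture parallelizes is the usual normalized cross-correlation between image blocks; a plain software sketch of that similarity measure (the FPGA pipelining itself is not reproduced):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equally sized
    image blocks: the quantity a registration search maximizes over
    candidate offsets. Invariant to brightness offset and gain."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

ref = np.arange(64, dtype=float).reshape(8, 8)
shifted = ref + 10.0  # pure brightness offset
print(round(ncc(ref, shifted), 6))  # 1.0
```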

  19. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Li Baofeng

    2009-01-01

    Full Text Available Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and parallelizing memory access. A proof-of-concept implementation of the architecture with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed with the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  20. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.
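    SlideJ itself is an ImageJ plugin; the tile-wise strategy it enables — running a single-field algorithm over fixed-size pieces of a gigapixel image — can be illustrated with a simple Python generator (tile size and overlap values hypothetical):

```python
import numpy as np

def tiles(image, tile, overlap=0):
    """Yield (row, col, tile_view) covering a large image with fixed-size
    tiles so a single-field algorithm can be applied per tile.
    Edge tiles may be smaller than `tile`."""
    step = tile - overlap
    h, w = image.shape[:2]
    for r in range(0, h, step):
        for c in range(0, w, step):
            yield r, c, image[r:r + tile, c:c + tile]

# A mock 1000x1000 "slide" split into 256-pixel tiles.
slide = np.zeros((1000, 1000), dtype=np.uint8)
print(sum(1 for _ in tiles(slide, 256)))  # 16 — a 4 x 4 grid
```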

  1. Automated image classification applied to reconstituted human corneal epithelium for the early detection of toxic damage

    Science.gov (United States)

    Crosta, Giovanni Franco; Urani, Chiara; De Servi, Barbara; Meloni, Marisa

    2010-02-01

    For a long time acute eye irritation has been assessed by means of the Draize rabbit test, the limitations of which are well known. Alternative tests based on in vitro models have been proposed. This work focuses on the "reconstituted human corneal epithelium" (R-HCE), which resembles the corneal epithelium of the human eye by thickness, morphology and marker expression. Testing a substance on R-HCE involves a variety of methods. Here, quantitative morphological analysis is applied to optical microscope images of R-HCE cross sections resulting from exposure to benzalkonium chloride (BAK). The short-term objective, and the first result presented, is the analysis and classification of these images. Automated analysis relies on feature extraction by the spectrum-enhancement algorithm, which is made sensitive to anisotropic morphology, and classification based on principal components analysis. The winning strategy has been the separate analysis of the apical and basal layers, which carry morphological information of different types. R-HCE specimens have been ranked by gross damage. The onset of early damage has been detected and an R-HCE specimen exposed to a low BAK dose has been singled out from the negative and positive controls. These results provide a proof of principle for the automated classification of the specimens of interest on a purely morphological basis by means of the spectrum enhancement algorithm.
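    The classification stage rests on principal components analysis of the extracted feature vectors; a minimal sketch of PCA via SVD (the spectrum-enhancement features themselves are not reproduced; the data below are random placeholders):

```python
import numpy as np

def principal_components(features, n_components=2):
    """Project feature vectors (one row per specimen) onto their leading
    principal components via SVD of the centred data matrix."""
    X = features - features.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# Placeholder morphological features for 6 specimens, 4 features each.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
scores = principal_components(feats)
print(scores.shape)  # (6, 2) — each specimen reduced to two PC scores
```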

  2. Automated optics inspection analysis for NIF

    International Nuclear Information System (INIS)

    Kegelmeyer, Laura M.; Clark, Raelyn; Leach, Richard R.; McGuigan, David; Kamm, Victoria Miller; Potter, Daniel; Salmon, J. Thad; Senecal, Joshua; Conder, Alan; Nostrand, Mike; Whitman, Pamela K.

    2012-01-01

    The National Ignition Facility (NIF) is a high-energy laser facility comprised of 192 beamlines that house thousands of optics. These optics guide, amplify and tightly focus light onto a tiny target for fusion ignition research and high energy density physics experiments. The condition of these optics is key to the economic, efficient and maximally energetic performance of the laser. Our goal, and novel achievement, is to find on the optics any imperfections while they are tens of microns in size, track them through time to see if they grow and if so, remove the optic and repair the single site so the entire optic can then be re-installed for further use on the laser. This paper gives an overview of the image analysis used for detecting, measuring, and tracking sites of interest on an optic while it is installed on the beamline via in situ inspection and after it has been removed for maintenance. In this way, the condition of each optic is monitored throughout the optic's lifetime. This overview paper will summarize key algorithms and technical developments for custom image analysis and processing and highlight recent improvements. (Associated papers will include more details on these issues.) We will also discuss the use of OI Analysis for daily operation of the NIF laser and its extension to inspection of NIF targets.

  3. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim R.

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
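    Of the three segmentation approaches named, the temperature-thresholding step and the rangefinder calibration can be sketched as follows; this single-threshold version and all numeric values are simplifying assumptions, not the observatory's actual routine:

```python
import numpy as np

def lake_level_px(thermal, temp_thresh=300.0):
    """Segment the hot lake by temperature thresholding and return the
    image row of its upper margin (the lake surface)."""
    hot = thermal > temp_thresh
    rows = np.flatnonzero(hot.any(axis=1))
    if rows.size == 0:
        raise ValueError("no lake pixels above threshold")
    return int(rows[0])

def level_to_elevation(row, ref_row, ref_elev_m, m_per_px):
    """Convert a pixel row to elevation using a laser-rangefinder
    calibration point (lower image rows correspond to higher levels)."""
    return ref_elev_m + (ref_row - row) * m_per_px

frame = np.full((240, 320), 20.0)  # cool crater walls (deg C)
frame[150:, :] = 600.0             # lake surface from row 150 down
row = lake_level_px(frame)
print(row, level_to_elevation(row, ref_row=160, ref_elev_m=800.0, m_per_px=0.5))  # 150 805.0
```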

  4. Automated analysis of cell migration and nuclear envelope rupture in confined environments.

    Science.gov (United States)

    Elacqua, Joshua J; McGregor, Alexandra L; Lammerding, Jan

    2018-01-01

    Recent in vitro and in vivo studies have highlighted the importance of the cell nucleus in governing migration through confined environments. Microfluidic devices that mimic the narrow interstitial spaces of tissues have emerged as important tools to study cellular dynamics during confined migration, including the consequences of nuclear deformation and nuclear envelope rupture. However, while image acquisition can be automated on motorized microscopes, the analysis of the corresponding time-lapse sequences for nuclear transit through the pores and events such as nuclear envelope rupture currently requires manual analysis. In addition to being highly time-consuming, such manual analysis is susceptible to person-to-person variability. Studies that compare large numbers of cell types and conditions therefore require automated image analysis to achieve sufficiently high throughput. Here, we present an automated image analysis program to register microfluidic constrictions and perform image segmentation to detect individual cell nuclei. The MATLAB program tracks nuclear migration over time and records constriction-transit events, transit times, transit success rates, and nuclear envelope rupture. Such automation reduces the time required to analyze migration experiments from weeks to hours, and removes the variability that arises from different human analysts. Comparison with manual analysis confirmed that both constriction transit and nuclear envelope rupture were detected correctly and reliably, and the automated analysis results closely matched a manual analysis gold standard. Applying the program to specific biological examples, we demonstrate its ability to detect differences in nuclear transit time between cells with different levels of the nuclear envelope proteins lamin A/C, which govern nuclear deformability, and to detect an increase in nuclear envelope rupture duration in cells in which CHMP7, a protein involved in nuclear envelope repair, had been depleted.
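    The published program is in MATLAB; a simplified Python sketch of one of its measurements — detecting a constriction-transit event and its duration from a tracked nuclear centroid — with hypothetical coordinates:

```python
def transit_frames(x_positions, entry_x, exit_x):
    """Return (entry_frame, exit_frame) for a nucleus whose tracked centroid
    crosses a constriction spanning [entry_x, exit_x) along the device axis,
    or None if the transit is never completed."""
    entry = exit_ = None
    for i, x in enumerate(x_positions):
        if entry is None and x >= entry_x:
            entry = i
        if entry is not None and x >= exit_x:
            exit_ = i
            break
    return (entry, exit_) if exit_ is not None else None

# Hypothetical track, one centroid position (microns) per frame.
track = [0, 2, 5, 9, 10, 10.5, 11, 13, 18, 25]
res = transit_frames(track, entry_x=10, exit_x=15)
print(res)  # (4, 8): transit time of 4 frames
```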

  5. Functional MRI preprocessing in lesioned brains: manual versus automated region of interest analysis

    Directory of Open Access Journals (Sweden)

    Kathleen A Garrison

    2015-09-01

    Full Text Available Functional magnetic resonance imaging has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant's structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant's non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise but may provide a more accurate estimate of brain response. In this study, we directly compare commonly used automated and manual approaches to ROI analysis by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. We found a significant difference in task-related effect size and percent activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design.
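    One of the compared summary measures, percent activated voxels within an ROI, is straightforward to compute once a statistical map and ROI mask exist; a sketch with toy arrays (the z-threshold of 2.3 is a conventional choice, not necessarily the study's):

```python
import numpy as np

def percent_activated(stat_map, roi_mask, z_thresh=2.3):
    """Percentage of ROI voxels whose statistic exceeds the threshold."""
    roi_vals = stat_map[roi_mask]
    return 100.0 * (roi_vals > z_thresh).sum() / roi_vals.size

zmap = np.zeros((10, 10, 10))
roi = np.zeros((10, 10, 10), dtype=bool)
roi[2:6, 2:6, 2:6] = True   # a 64-voxel ROI
zmap[2:4, 2:6, 2:6] = 3.1   # half of the ROI "activates"
print(percent_activated(zmap, roi))  # 50.0
```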

  6. Automated ultrasound edge-tracking software comparable to established semi-automated reference software for carotid intima-media thickness analysis.

    Science.gov (United States)

    Shenouda, Ninette; Proudfoot, Nicole A; Currie, Katharine D; Timmons, Brian W; MacDonald, Maureen J

    2017-04-26

    Many commercial ultrasound systems are now including automated analysis packages for the determination of carotid intima-media thickness (cIMT); however, details regarding their algorithms and methodology are not published. Few studies have compared their accuracy and reliability with previously established automated software, and those that have were in asymptomatic adults. Therefore, this study compared cIMT measures from a fully automated ultrasound edge-tracking software (EchoPAC PC, Version 110.0.2; GE Medical Systems, Horten, Norway) to an established semi-automated reference software (Artery Measurement System (AMS) II, Version 1.141; Gothenburg, Sweden) in 30 healthy preschool children (ages 3-5 years) and 27 adults with coronary artery disease (CAD; ages 48-81 years). For both groups, Bland-Altman plots revealed good agreement with a negligible mean cIMT difference of -0·03 mm. Software differences were statistically, but not clinically, significant for preschool images (P = 0·001) and were not significant for CAD images (P = 0·09). Intra- and interoperator repeatability was high and comparable between software for preschool images (ICC, 0·90-0·96; CV, 1·3-2·5%), but slightly higher with the automated ultrasound than the semi-automated reference software for CAD images (ICC, 0·98-0·99; CV, 1·4-2·0% versus ICC, 0·84-0·89; CV, 5·6-6·8%). These findings suggest that the automated ultrasound software produces valid cIMT values in healthy preschool children and adults with CAD. Automated ultrasound software may be useful for ensuring consistency among multisite research initiatives or large cohort studies involving repeated cIMT measures, particularly in adults with documented CAD. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
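    The agreement analysis rests on Bland-Altman statistics: the mean paired difference (bias) and its 95% limits of agreement. A minimal sketch with made-up paired cIMT readings, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    methods measuring the same quantity on the same subjects."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired cIMT readings (mm) from two software packages.
auto = [0.42, 0.45, 0.40, 0.48, 0.44]
semi = [0.45, 0.47, 0.44, 0.50, 0.46]
bias, lo, hi = bland_altman(auto, semi)
print(round(bias, 3))  # -0.026
```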

  7. An Automated MR Image Segmentation System Using Multi-layer Perceptron Neural Network

    Directory of Open Access Journals (Sweden)

    Amiri S

    2013-12-01

    Full Text Available Background: Brain tissue segmentation for delineation of 3D anatomical structures from magnetic resonance (MR) images can be used for neuro-degenerative disorders, characterizing morphological differences between subjects based on volumetric analysis of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), but only if the obtained segmentation results are correct. Due to image artifacts such as noise, low contrast and intensity non-uniformity, there are some classification errors in the results of image segmentation. Objective: An automated algorithm based on multi-layer perceptron neural networks (MLPNN) is presented for segmenting MR images. The system identifies two tissues, WM and GM, in human brain 2D structural MR images. A given 2D image is processed to enhance image intensity and to remove extra cerebral tissue. Thereafter, each pixel of the image under study is represented using 13 features (8 statistical and 5 non-statistical features) and is classified using an MLPNN into one of three classes: WM, GM, or unknown. Results: The developed MR image segmentation algorithm was evaluated using 20 real images. Training using only one image, the system showed robust performance when tested using the remaining 19 images. The average Jaccard similarity index and Dice similarity metric were estimated to be 75.7% and 86.0% for GM, and 67.8% and 80.7% for WM, respectively. Conclusion: The obtained performances are encouraging and show that the presented method may assist with segmentation of 2D MR images especially where categorizing WM and GM is of interest.

  8. An Automated MR Image Segmentation System Using Multi-layer Perceptron Neural Network.

    Science.gov (United States)

    Amiri, S; Movahedi, M M; Kazemi, K; Parsaei, H

    2013-12-01

    Brain tissue segmentation for delineation of 3D anatomical structures from magnetic resonance (MR) images can be used for neuro-degenerative disorders, characterizing morphological differences between subjects based on volumetric analysis of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), but only if the obtained segmentation results are correct. Due to image artifacts such as noise, low contrast and intensity non-uniformity, there are some classification errors in the results of image segmentation. An automated algorithm based on multi-layer perceptron neural networks (MLPNN) is presented for segmenting MR images. The system identifies two tissues, WM and GM, in human brain 2D structural MR images. A given 2D image is processed to enhance image intensity and to remove extra cerebral tissue. Thereafter, each pixel of the image under study is represented using 13 features (8 statistical and 5 non-statistical features) and is classified using an MLPNN into one of three classes: WM, GM, or unknown. The developed MR image segmentation algorithm was evaluated using 20 real images. Training using only one image, the system showed robust performance when tested using the remaining 19 images. The average Jaccard similarity index and Dice similarity metric were estimated to be 75.7% and 86.0% for GM, and 67.8% and 80.7% for WM, respectively. The obtained performances are encouraging and show that the presented method may assist with segmentation of 2D MR images especially where categorizing WM and GM is of interest.
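    The two evaluation metrics reported are standard overlap measures on binary masks, related by D = 2J / (1 + J); a sketch on toy masks:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity index |A ∩ B| / |A ∪ B| for binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 1.0

def dice(a, b):
    """Dice similarity metric 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    return 2 * (a & b).sum() / total if total else 1.0

seg = np.zeros((8, 8), dtype=bool); seg[:, :4] = True   # algorithm's WM mask
ref = np.zeros((8, 8), dtype=bool); ref[:, 1:5] = True  # reference WM mask
print(round(jaccard(seg, ref), 3), round(dice(seg, ref), 3))  # 0.6 0.75
```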

  9. Automated haematology analysis to diagnose malaria

    NARCIS (Netherlands)

    Campuzano-Zuluaga, Germán; Hänscheid, Thomas; Grobusch, Martin P.

    2010-01-01

    For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO

  10. Automation of radionuclide analysis in nuclear industry

    International Nuclear Information System (INIS)

    Gostilo, V.; Sokolov, A.; Kuzmenko, V.; Kondratjev, V.

    2009-01-01

    Development results are presented for automated precision HPGe spectrometers and systems for radionuclide analyses in the nuclear industry and in environmental monitoring. An automated HPGe spectrometer for radionuclide monitoring of the coolant in the primary circuit of NPPs performs on-line technological monitoring of radionuclide specific activity in liquid and gaseous flows. An automated spectrometer based on a flow-through HPGe detector with a through channel controls the uniformity of uranium and/or plutonium distribution in fresh fuel elements transferred through the detector, and also provides on-line control of low-activity fluid and gas flows. An automated monitoring system for radionuclide volumetric activity in the outlet channels of NPPs monitors water reservoirs in regions of nuclear weapons testing and near nuclear storage sites, nuclear power plants and other nuclear energy facilities. An autonomous HPGe spectrometer for deep-water radionuclide monitoring registers gamma-emitting radionuclides at depths of up to 3000 m (radioactive waste storage sites, wrecks of nuclear-powered ships, lost nuclear charges, technological waste releases of the nuclear industry, etc.). (authors)

  11. Comparison of manual & automated analysis methods for corneal endothelial cell density measurements by specular microscopy.

    Science.gov (United States)

    Huang, Jianyan; Maram, Jyotsna; Tepelus, Tudor C; Modak, Cristina; Marion, Ken; Sadda, SriniVas R; Chopra, Vikas; Lee, Olivia L

    2017-08-07

    To determine the reliability of corneal endothelial cell density (ECD) obtained by automated specular microscopy versus that of validated manual methods, and the factors that predict such reliability. Sharp central images from 94 control and 106 glaucomatous eyes were captured with a Konan NSP-9900 specular microscope. All images were analyzed by trained graders using Konan CellChek Software, employing the fully- and semi-automated methods as well as the Center Method. Images with low cell count (input cell number <100) and/or guttata were compared using the Center and Flex-Center Methods. ECDs were compared and absolute error was used to assess variation. The effect on ECD of age, cell count, cell size, and cell size variation was evaluated. No significant difference was observed between the Center and Flex-Center Methods in corneas with guttata (p=0.48) or low ECD (p=0.11). No difference (p=0.32) was observed in ECD of normal controls <40 yrs old between the fully-automated method and the manual Center Method. However, in older controls and glaucomatous eyes, ECD was overestimated by the fully-automated method (p=0.034) and semi-automated method (p=0.025) as compared to the manual method. Our findings show that automated analysis significantly overestimates ECD in eyes with high polymegathism and/or large cell size, compared to the manual method. Therefore, we discourage reliance upon the fully-automated method alone to perform specular microscopy analysis, particularly if an accurate ECD value is imperative. Copyright © 2017. Published by Elsevier España, S.L.U.
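    Whatever the counting method, the quantity being compared reduces to cells per unit area; a trivial sketch of that conversion (the frame area below is hypothetical, and the commercial methods' cell-identification steps are not reproduced):

```python
def endothelial_cell_density(n_cells, frame_area_mm2):
    """ECD estimate: cells counted within an analysis frame divided by
    the frame area, in cells/mm^2."""
    return n_cells / frame_area_mm2

# Hypothetical frame: 120 cells counted in a 0.04 mm^2 analysis frame.
print(round(endothelial_cell_density(120, 0.04)))  # 3000 cells/mm^2
```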

  12. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability with complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images were obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved good reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight to pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure
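    The first step, converting RGB to YCbCr so that lighting mostly affects a single luminance channel, can be sketched with the standard ITU-R BT.601 full-range matrix; the paper's exact constants are not given in the abstract, and this sketch performs no clipping to the 8-bit range:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr. Luminance lands in Y, so a
    chrominance-based (Cb, Cr) skin model is less sensitive to lighting."""
    m = np.array([[ 0.299,     0.587,     0.114],
                  [-0.168736, -0.331264,  0.5],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb.astype(float) @ m.T
    ycc[..., 1:] += 128.0  # centre the chroma channels
    return ycc

px = np.array([[[255, 0, 0]]])  # pure red
y, cb, cr = rgb_to_ycbcr(px)[0, 0]
print(round(y, 2), round(cb, 2), round(cr, 1))
```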

  13. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand" requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  14. Optical Coherence Tomography in the UK Biobank Study - Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies.

    Directory of Open Access Journals (Sweden)

    Pearse A Keane

    Full Text Available To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.

  15. Optical Coherence Tomography in the UK Biobank Study - Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies.

    Science.gov (United States)

    Keane, Pearse A; Grossi, Carlota M; Foster, Paul J; Yang, Qi; Reisman, Charles A; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J

    2016-01-01

    To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed, with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.

  16. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  17. Automated analysis of intima-media thickness: analysis and performance of CARES 3.0.

    Science.gov (United States)

    Saba, Luca; Montisci, Roberto; Famiglietti, Luca; Tallapally, Niranjan; Acharya, U Rajendra; Molinari, Filippo; Sanfilippo, Roberto; Mallarini, Giorgio; Nicolaides, Andrew; Suri, Jasjit S

    2013-07-01

    In recent years, the use of computer-based techniques has been advocated to improve intima-media thickness (IMT) quantification and its reproducibility. The purpose of this study was to test the diagnostic performance of a new IMT automated algorithm, CARES 3.0, which is a patented class of IMT measurement systems called AtheroEdge (AtheroPoint, LLC, Roseville, CA). From 2 different institutions, we analyzed the carotid arteries of 250 patients. The automated CARES 3.0 algorithm was tested versus 2 other automated algorithms, 1 semiautomated algorithm, and a reader reference to assess the IMT measurements. Bland-Altman analysis, regression analysis, and the Student t test were performed. CARES 3.0 showed an IMT measurement bias ± SD of -0.022 ± 0.288 mm compared with the expert reader. The average IMT by CARES 3.0 was 0.852 ± 0.248 mm, and that of the reader was 0.872 ± 0.325 mm. In the Bland-Altman plots, the CARES 3.0 IMT measurements showed accurate values, with about 80% of the images having an IMT measurement bias ranging between -50% and +50%. These values were better than those of the previous CARES releases and the semiautomated algorithm. Regression analysis showed that, among all techniques, the best t value was between CARES 3.0 and the reader. We have developed an improved fully automated technique for carotid IMT measurement on longitudinal ultrasound images. This new version, called CARES 3.0, consists of a new heuristic for lumen-intima and media-adventitia detection, which showed high accuracy and reproducibility for IMT measurement.
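The bias ± SD figures quoted for CARES 3.0 come from a Bland-Altman comparison of paired IMT measurements. As a rough illustration only (the function name, the example values, and the 1.96·SD limits of agreement are my assumptions, not part of the AtheroEdge software), the computation can be sketched as:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias +/- SD and 95% limits of agreement between two
    paired measurement series (e.g. algorithm vs. expert-reader IMT, in mm)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()                 # systematic offset between the methods
    sd = diff.std(ddof=1)              # spread of the paired differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa

# hypothetical paired IMT readings: algorithm reads ~0.02 mm low on average
algo   = [0.85, 0.90, 0.78, 1.02]
reader = [0.87, 0.93, 0.80, 1.03]
bias, sd, loa = bland_altman(algo, reader)
```

A bias near zero with narrow limits of agreement indicates the automated measurement tracks the reader closely.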

  18. Automated Detection of Firearms and Knives in a CCTV Image

    Directory of Open Access Journals (Sweden)

    Michał Grega

    2016-01-01

    Full Text Available Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  19. Automated Detection of Firearms and Knives in a CCTV Image.

    Science.gov (United States)

    Grega, Michał; Matiolański, Andrzej; Guzik, Piotr; Leszczuk, Mikołaj

    2016-01-01

    Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.

  20. Prehospital digital photography and automated image transmission in an emergency medical service - an ancillary retrospective analysis of a prospective controlled trial.

    Science.gov (United States)

    Bergrath, Sebastian; Rossaint, Rolf; Lenssen, Niklas; Fitzner, Christina; Skorning, Max

    2013-01-16

    Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial. In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (jpeg, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of different ratings, a third investigator had final decision competence. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Overall 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered "limited quality" and 29 (9.2%) pictures were deemed "not useful" due to not/hardly identifiable content. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age - 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome - 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture transmission (26 vs. 22 min, p = 0.011), but ambulance

  1. Prehospital digital photography and automated image transmission in an emergency medical service – an ancillary retrospective analysis of a prospective controlled trial

    Directory of Open Access Journals (Sweden)

    Bergrath Sebastian

    2013-01-01

    Full Text Available Abstract Background Still picture transmission was performed using a telemedicine system in an Emergency Medical Service (EMS) during a prospective, controlled trial. In this ancillary, retrospective study the quality and content of the transmitted pictures and the possible influences of this application on prehospital time requirements were investigated. Methods A digital camera was used with a telemedicine system enabling encrypted audio and data transmission between an ambulance and a remotely located physician. By default, images were compressed (jpeg, 640 x 480 pixels). On occasion, this compression was deactivated (3648 x 2736 pixels). Two independent investigators assessed all transmitted pictures according to predefined criteria. In cases of different ratings, a third investigator had final decision competence. Patient characteristics and time intervals were extracted from the EMS protocol sheets and dispatch centre reports. Results Overall 314 pictures (mean 2.77 ± 2.42 pictures/mission) were transmitted during 113 missions (group 1). Pictures were not taken for 151 missions (group 2). Regarding picture quality, the content of 240 (76.4%) pictures was clearly identifiable; 45 (14.3%) pictures were considered “limited quality” and 29 (9.2%) pictures were deemed “not useful” due to not/hardly identifiable content. For pictures with file compression (n = 84 missions) and without (n = 17 missions), the content was clearly identifiable in 74% and 97% of the pictures, respectively (p = 0.003). Medical reports (n = 98, 32.8%), medication lists (n = 49, 16.4%) and 12-lead ECGs (n = 28, 9.4%) were most frequently photographed. The patient characteristics of group 1 vs. 2 were as follows: median age – 72.5 vs. 56.5 years, p = 0.001; frequency of acute coronary syndrome – 24/113 vs. 15/151, p = 0.014. The NACA scores and gender distribution were comparable. Median on-scene times were longer with picture

  2. Automation of the gamma method for the comparison of dosimetry images

    International Nuclear Information System (INIS)

    Moreno Reyes, J. C.; Macias Jaen, J.; Arrans Lara, R.

    2013-01-01

    The objective of this work was the development of the JJGAMMA analysis software, which enables the gamma comparison of dosimetry images to be performed systematically, minimizing specialist intervention and therefore the variability due to the observer. Both benefits allow the comparison of images to be carried out in practice with the required frequency and objectivity. (Author)
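For context, the gamma method combines a dose-difference criterion with a distance-to-agreement (DTA) criterion at each point. A toy 1-D sketch of a global gamma evaluation follows (my own illustration using the common 3%/3 mm criteria; this is not the JJGAMMA code):

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Global 1-D gamma index: for each reference point, the minimum over the
    evaluated profile of sqrt((dx/DTA)^2 + (dD/DD)^2). Gamma <= 1 passes."""
    x = np.arange(len(ref_dose)) * spacing_mm
    dd = dd_pct / 100.0 * np.max(ref_dose)      # global dose criterion
    gamma = np.empty(len(ref_dose))
    for i in range(len(ref_dose)):
        dist_term = ((x - x[i]) / dta_mm) ** 2
        dose_term = ((eval_dose - ref_dose[i]) / dd) ** 2
        gamma[i] = np.sqrt(np.min(dist_term + dose_term))
    return gamma

ref = np.array([0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0])
g = gamma_1d(ref, ref, spacing_mm=1.0)   # identical profiles -> gamma = 0
pass_rate = np.mean(g <= 1.0)            # fraction of points passing
```

Automating this per-pixel search over whole 2-D dose images is exactly the tedious, observer-sensitive task the abstract describes eliminating.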

  3. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 21(8):761–768, Aug. 1999. [84] N. Stamatopoulos, B. Gatos, and T. Georgiou. Automatic...2007. [85] N. Stamatopoulos, B. Gatos, and T. Georgiou. Page frame detection for double page document images. In Proc. of the 9th IAPR Int’l

  4. Power Analysis of an Automated Dynamic Cone Penetrometer

    Science.gov (United States)

    2015-09-01

    ARL-TR-7494 ● SEP 2015 ● US Army Research Laboratory. Power Analysis of an Automated Dynamic Cone Penetrometer, by C Wesley Tipton IV and Donald H Porschet, Sensors and Electron Devices Directorate, ARL

  5. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    Science.gov (United States)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
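The map comparison above reduces to comparing per-region pixel counts between two co-registered label maps. A minimal sketch of that comparison (function name and example maps are my own; not the IBIS implementation):

```python
import numpy as np

def compare_ccm_maps(map_a, map_b):
    """Per-region pixel-count agreement between two co-registered CCM maps,
    plus Pearson's correlation of the regional counts."""
    codes = np.union1d(np.unique(map_a), np.unique(map_b))
    counts_a = np.array([(map_a == c).sum() for c in codes], dtype=float)
    counts_b = np.array([(map_b == c).sum() for c in codes], dtype=float)
    # percent agreement per region: smaller count over larger count
    pct = 100.0 * np.minimum(counts_a, counts_b) / np.maximum(counts_a, counts_b)
    weights = counts_a / counts_a.sum()          # areal weighting
    r = np.corrcoef(counts_a, counts_b)[0, 1]    # Pearson's r of counts
    return float(np.average(pct, weights=weights)), float(r)

# identical maps agree perfectly (region codes 1, 2, 3 cover 4, 3, 2 pixels)
a = np.array([[1, 1, 1], [1, 2, 2], [2, 3, 3]])
mean_pct, r = compare_ccm_maps(a, a)
```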

  6. Automated GPR Rebar Analysis for Robotic Bridge Deck Evaluation.

    Science.gov (United States)

    Kaur, Parneet; Dana, Kristin J; Romero, Francisco A; Gucunski, Nenad

    2016-10-01

    Ground penetrating radar (GPR) is used to evaluate deterioration of reinforced concrete bridge decks based on measuring signal attenuation from embedded rebar. The existing methods for obtaining deterioration maps from GPR data often require manual interaction and offsite processing. In this paper, a novel algorithm is presented for automated rebar detection and analysis. We test the process with comprehensive measurements obtained using a novel state-of-the-art robotic bridge inspection system equipped with GPR sensors. The algorithm achieves robust performance by integrating machine learning classification using image-based gradient features and robust curve fitting of the rebar hyperbolic signature. The approach avoids edge detection, thresholding, and template matching that require manual tuning and are known to perform poorly in the presence of noise and outliers. The detected hyperbolic signatures of rebars within the bridge deck are used to generate deterioration maps of the bridge deck. The results of the rebar region detector are compared quantitatively with several methods of image-based classification and a significant performance advantage is demonstrated. High rates of accuracy are reported on real data that includes thousands of individual hyperbolic rebar signatures from three real bridge decks.
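The rebar's hyperbolic signature can be fit by noting that squared two-way travel time is linear in squared lateral offset from the apex. A simplified least-squares sketch (it assumes the apex position x0 is already known and uses a plain polynomial fit, not the paper's robust fitting procedure):

```python
import numpy as np

def fit_rebar_hyperbola(x, t, x0):
    """Least-squares fit of t(x)^2 = t0^2 + a*(x - x0)^2, the travel-time
    hyperbola of a point reflector with apex at lateral position x0.
    Returns the apex time t0 and curvature coefficient a."""
    u = (np.asarray(x, dtype=float) - x0) ** 2
    a, t0_sq = np.polyfit(u, np.asarray(t, dtype=float) ** 2, 1)
    return np.sqrt(t0_sq), a

# synthetic noise-free signature: apex time 10 ns, curvature 0.5, apex at 5 m
x = np.linspace(0.0, 10.0, 21)
t = np.sqrt(10.0 ** 2 + 0.5 * (x - 5.0) ** 2)
t0, a = fit_rebar_hyperbola(x, t, x0=5.0)
```

In practice the apex position would itself be estimated (e.g. by the machine-learning detector described above) before fitting.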

  7. Automated endoscopic navigation and advisory system from medical image

    Science.gov (United States)

    Kwoh, Chee K.; Khan, Gul N.; Gillies, Duncan F.

    1999-05-01

    , which is developed to obtain the relative depth of the colon surface in the image by assuming a point light source very close to the camera. If we assume the colon has a shape similar to a tube, then a reasonable approximation of the position of the center of the colon (lumen) will be a function of the direction in which the majority of the normal vectors of shape are pointing. The second layer is the control layer, and at this level a decision model must be built for the endoscope navigation and advisory system. The system that we built uses probabilistic network models to create a basic, artificial intelligence system for navigation in the colon. We have constructed the probabilistic networks from correlated objective data using the maximum weighted spanning tree algorithm. In the construction of a probabilistic network, it is always assumed that the variables starting from the same parent are conditionally independent. However, this may not hold and will give rise to incorrect inferences. In these cases, we proposed the creation of a hidden node to modify the network topology, which in effect models the dependency of correlated variables, to solve the problem. The conditional probability matrices linking the hidden node to its neighbors are determined using a gradient descent method that minimizes the objective cost function. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network to adjust the network parameters. With the above two-level approach, we have been able to build an automated endoscope navigation and advisory system successfully.

  8. Automated arteriole and venule classification using deep learning for retinal images from the UK Biobank cohort.

    Science.gov (United States)

    Welikala, R A; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2017-11-01

    The morphometric characteristics of the retinal vasculature are associated with future risk of many systemic and vascular diseases. However, analysis of data from large population based studies is needed to help resolve uncertainties in some of these associations. This requires automated systems that extract quantitative measures of vessel morphology from large numbers of retinal images. Associations between retinal vessel morphology and disease precursors/outcomes may be similar or opposing for arterioles and venules. Therefore, the accurate detection of the vessel type is an important element in such automated systems. This paper presents a deep learning approach for the automatic classification of arterioles and venules across the entire retinal image, including vessels located at the optic disc. This comprises a convolutional neural network whose architecture contains six learned layers: three convolutional and three fully-connected. Complex patterns are automatically learnt from the data, which avoids the use of hand-crafted features. The method is developed and evaluated using 835,914 centreline pixels derived from 100 retinal images selected from the 135,867 retinal images obtained at the UK Biobank (large population-based cohort study of middle aged and older adults) baseline examination. This is a challenging dataset in respect to image quality and hence arteriole/venule classification is required to be highly robust. The method achieves a significant increase in accuracy of 8.1% when compared to the baseline method, resulting in an arteriole/venule classification accuracy of 86.97% (per pixel basis) over the entire retinal image. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Microvascular glycocalyx dimension estimated by automated SDF imaging is not related to cardiovascular disease

    NARCIS (Netherlands)

    Amraoui, Fouad; Olde Engberink, Rik H. G.; van Gorp, Jacqueline; Ramdani, Amal; Vogt, Liffert; van den Born, Bert-Jan H.

    2014-01-01

    The endothelial glycocalyx (EG) regulates vascular homeostasis and has anti-atherogenic properties. Sidestream dark field (SDF) imaging allows for noninvasive visualization of microvessels and automated estimation of EG dimensions. We aimed to assess whether microcirculatory EG dimension is related to cardiovascular disease. Sublingual EG

  10. Automated counting and analysis of etched tracks in CR-39 plastic

    International Nuclear Information System (INIS)

    Majborn, B.

    1986-01-01

    An image analysis system has been set up which is capable of automated counting and analysis of etched nuclear particle tracks in plastic. The system is composed of an optical microscope, CCD camera, frame grabber, personal computer, monitor, and printer. The frame grabber acquires and displays images at video rate. It has a spatial resolution of 512 x 512 pixels with 8 bits of digitisation corresponding to 256 grey levels. The software has been developed for general image processing and adapted for the present purpose. Comparisons of automated and visual microscope counting of tracks in chemically etched CR-39 detectors are presented with emphasis on results of interest for practical radon measurements or neutron dosimetry, e.g. calibration factors, background track densities and variations in background. (author)
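Automated track counting of this kind typically reduces to thresholding the digitized grey-level image and labelling connected components. A small pure-Python/NumPy sketch (my own stand-in for the system's software; tracks are assumed to appear dark in transmitted light):

```python
import numpy as np

def count_tracks(image, threshold):
    """Count dark etched tracks: threshold the grey-level image, then label
    4-connected components of below-threshold pixels with a flood fill."""
    mask = image < threshold                 # tracks are darker than background
    visited = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    n_tracks = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                n_tracks += 1                # new component found
                stack = [(i, j)]
                visited[i, j] = True
                while stack:                 # iterative flood fill
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return n_tracks

# synthetic 8-bit frame with three dark "tracks" on a bright background
img = np.full((8, 8), 200, dtype=np.uint8)
img[1:3, 1:3] = 20
img[5, 5] = 20
img[6, 1] = 20
n = count_tracks(img, threshold=128)
```

A production system would add size and shape filters to reject background specks, which is where the comparison against visual counting becomes important.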

  11. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    OpenAIRE

    Staley, Tim D.; Anderson, Gemma E.

    2015-01-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of severa...

  12. Automated Scoring and Analysis of Micronucleated Human Lymphocytes.

    Science.gov (United States)

    Callisen, Hannes Heinrich

    Physical and chemical mutagens and carcinogens in our environment produce chromosome aberrations in the circulating peripheral blood lymphocytes. The aberrations, in turn, give rise to micronuclei when the lymphocytes proliferate in culture. In order to improve the micronucleus assay as a method for screening human populations for chromosome damage, I have (1) developed a high-resolution optical low-light-level micrometry expert system (HOLMES) to digitize and process microscope images of micronuclei in human peripheral blood lymphocytes, (2) defined a protocol of image processing techniques to objectively and uniquely identify and score micronuclei, and (3) analysed digital images of lymphocytes in order to study methods for (a) verifying the identification of suspect micronuclei, (b) classifying proliferating and non-proliferating lymphocytes, and (c) understanding the mechanisms of micronuclei formation and micronuclei fate during cell division. For the purpose of scoring micronuclei, HOLMES promises to (a) improve counting statistics since a greater number of cells can be scored without operator/microscopist fatigue, (b) provide for a more objective and consistent criterion for the identification of micronuclei than the human observer, and (c) yield quantitative information on nuclear and micronuclear characteristics useful in better understanding the micronucleus life cycle. My results on computer aided identification of micronuclei on microscope slides are gratifying. They demonstrate that automation of the micronucleus assay is feasible. Manual verification of HOLMES' results show correct extraction of micronuclei from the scene for 70% of the digitized images and correct identification of the micronuclei for 90% of the extracted objects. Moreover, quantitative analysis on digitized images of lymphocytes using HOLMES has revealed several exciting results: (a) micronuclear DNA content may be estimated from simple area measurements, (b) micronuclei seem to

  13. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); Rahmat, K. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); University Malaya, Biomedical Imaging Department, Kuala Lumpur (Malaysia); Ariffin, H. [University of Malaya, Department of Paediatrics, Faculty of Medicine, Kuala Lumpur (Malaysia)

    2012-07-15

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using an automated ROI with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  14. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    International Nuclear Information System (INIS)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M.; Rahmat, K.; Ariffin, H.

    2012-01-01

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using an automated ROI with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  15. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.
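The abstract notes that the core computations are formulated in terms of the FFT. As a generic illustration of frequency-domain image filtering of the kind such pipelines build on (a plain Gaussian low-pass in my own sketch, not the curvelet transform or scalar voting steps themselves):

```python
import numpy as np

def fft_gaussian_lowpass(img, sigma):
    """Apply a Gaussian low-pass filter in the frequency domain via the 2-D FFT.
    sigma is the spatial-domain smoothing scale in pixels."""
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]   # cycles/pixel, rows
    fx = np.fft.fftfreq(img.shape[1])[None, :]   # cycles/pixel, cols
    # Fourier transform of a spatial Gaussian is a Gaussian in frequency
    h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(f * h))

flat = np.ones((16, 16))
out = fft_gaussian_lowpass(flat, sigma=2.0)   # a constant image is unchanged
```

Formulating filters this way lets the heavy lifting run as FFTs, which parallelize well across slices and tiles, consistent with the tiling strategy described above.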

  16. Automated retroillumination photography analysis for objective assessment of Fuchs Corneal Dystrophy severity

    Science.gov (United States)

    Eghrari, Allen O.; Mumtaz, Aisha A.; Garrett, Brian; Rezaei, Mahsa; Akhavan, Mina S.; Riazuddin, S. Amer; Gottsch, John D.

    2016-01-01

    Purpose Retroillumination photography analysis (RPA) is an objective tool for assessment of the number and distribution of guttae in eyes affected with Fuchs Corneal Dystrophy (FCD). Current protocols include manual processing of images; here we assess validity and interrater reliability of automated analysis across various levels of FCD severity. Methods Retroillumination photographs of 97 FCD-affected corneas were acquired and total counts of guttae previously summated manually. For each cornea, a single image was loaded into ImageJ software. We reduced color variability and subtracted background noise. Reflection of light from each gutta was identified as a local area of maximum intensity and counted automatically. Noise tolerance level was titrated for each cornea by examining a small region of each image with automated overlay to ensure appropriate coverage of individual guttae. We tested interrater reliability of automated counts of guttae across a spectrum of clinical and educational experience. Results A set of 97 retroillumination photographs were analyzed. Clinical severity as measured by a modified Krachmer scale ranged from a severity level of 1 to 5 in the set of analyzed corneas. Automated counts by an ophthalmologist correlated strongly with Krachmer grading (R2=0.79) and manual counts (R2=0.88). Intraclass correlation coefficient demonstrated strong correlation, at 0.924 (95% CI, 0.870-0.958) among cases analyzed by three students, and 0.869 (95% CI, 0.797-0.918) among cases for which images were analyzed by an ophthalmologist and two students. Conclusions Automated RPA allows for grading of FCD severity with high resolution across a spectrum of disease severity. PMID:27811565
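Counting reflections as local intensity maxima above a noise tolerance can be sketched as follows (a strict 3x3 local-maximum test of my own devising, a crude stand-in for ImageJ's "Find Maxima" with its noise-tolerance parameter; the study's exact procedure may differ):

```python
import numpy as np

def count_bright_maxima(img, noise_tol):
    """Count pixels that are strict 3x3 local maxima and exceed the image
    mean by at least noise_tol (each such peak approximates one gutta)."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, constant_values=-np.inf)   # -inf border: edges allowed
    h, w = img.shape
    # stack the 8 neighbour views of each pixel
    neigh = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)
                      if (dy, dx) != (1, 1)])
    is_max = (img > neigh.max(axis=0)) & (img > img.mean() + noise_tol)
    return int(is_max.sum())

img = np.zeros((10, 10))
img[2, 3] = 50.0     # two isolated bright reflections
img[7, 6] = 40.0
n = count_bright_maxima(img, noise_tol=5.0)
```

Titrating `noise_tol` per image, as the study describes, trades missed dim guttae against spurious background peaks.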

  17. Automated identification of retained surgical items in radiological images

    Science.gov (United States)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of the low incidence of RSI cases by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  18. Lesion Segmentation in Automated 3D Breast Ultrasound: Volumetric Analysis.

    Science.gov (United States)

    Agarwal, Richa; Diaz, Oliver; Lladó, Xavier; Gubern-Mérida, Albert; Vilanova, Joan C; Martí, Robert

    2018-03-01

Mammography is the gold standard screening technique for breast cancer, but it has some limitations for women with dense breasts. In such cases, sonography is usually recommended as an additional imaging technique. A traditional sonogram produces a two-dimensional (2D) visualization of the breast and is highly operator dependent. Automated breast ultrasound (ABUS) has been proposed to produce a full 3D scan of the breast automatically with reduced operator dependency, facilitating double reading and comparison with past exams. When using ABUS, lesion segmentation and tracking changes over time are challenging tasks, as the three-dimensional (3D) nature of the images makes the analysis difficult and tedious for radiologists. The goal of this work is to develop a semi-automatic framework for breast lesion segmentation in ABUS volumes based on the watershed algorithm. The effect of different de-noising methods on segmentation is studied, showing a significant impact ([Formula: see text]) on performance, using a dataset of 28 temporal pairs resulting in a total of 56 ABUS volumes. Volumetric analysis is also used to evaluate the performance of the developed framework. A mean Dice Similarity Coefficient of [Formula: see text] with a mean False Positive ratio of [Formula: see text] has been obtained. The Pearson correlation coefficient between the segmented volumes and the corresponding ground truth volumes is [Formula: see text] ([Formula: see text]). A similar analysis, performed on the 28 temporal (prior and current) pairs, resulted in good correlation coefficients of [Formula: see text] ([Formula: see text]) for prior and [Formula: see text] ([Formula: see text]) for current cases. The developed framework shows promise for helping radiologists assess ABUS lesion volumes and quantify volumetric changes during lesion diagnosis and follow-up.
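The volumetric agreement measures reported above (Dice Similarity Coefficient and False Positive ratio) are straightforward to compute from binary segmentation masks. A minimal sketch, not the paper's code:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice Similarity Coefficient between two binary volumes:
    2|A ∩ B| / (|A| + |B|)."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def false_positive_ratio(seg, gt):
    """Fraction of the segmented volume lying outside the ground truth."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    return np.logical_and(seg, ~gt).sum() / seg.sum()
```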

  19. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

The assessment of rodent mammary gland morphology is widely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis methods, including image processing and quantification of the whole ductal tree as well as of the terminal end buds (TEBs). It allows accurate and objective measurement of both growth parameters and fine morphological glandular structures. Mammary gland elongation was characterized by 2 parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structures were characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be transposed to several software packages and thus widely used by scientists studying rodent mammary gland morphology.
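The fine-structure parameters above (branch end-points and branching points) can be derived from a binary skeleton of the ductal tree by counting 8-connected neighbours per skeleton pixel. A hedged sketch of that idea (not the authors' implementation; skeletonization itself is assumed done upstream):

```python
import numpy as np

def branch_metrics(skeleton):
    """Branch end-points and branching points of a binary tree skeleton.

    Each skeleton pixel is classified by its number of 8-connected
    skeleton neighbours: exactly 1 neighbour -> end-point,
    3 or more -> branching point.
    """
    skel = np.asarray(skeleton, bool).astype(int)
    H, W = skel.shape
    p = np.pad(skel, 1)
    # Sum of the 8-connected neighbours of every pixel.
    neighbours = sum(p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    endpoints = (skel == 1) & (neighbours == 1)
    branchpoints = (skel == 1) & (neighbours >= 3)
    return int(endpoints.sum()), int(branchpoints.sum())
```

Dividing these counts by the epithelial area of the tree would give the end-point and branching densities described in the abstract.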

  20. Automated Analysis of Imaging Based Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — For many applications involving liquid injection, the ability to predict the details of the breakup process is often limited due to the complexity of the two-phase...

  1. Initial development of an automated task analysis profiling system

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1984-01-01

    A program for automated task analysis is described. Called TAPS (task analysis profiling system), the program accepts normal English prose and outputs skills, knowledges, attitudes, and abilities (SKAAs) along with specific guidance and recommended ability measurement tests for nuclear power plant operators. A new method for defining SKAAs is presented along with a sample program output

  2. Analysis of Trinity Power Metrics for Automated Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Michalenko, Ashley Christine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

This is a presentation from Los Alamos National Laboratory (LANL) on the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for the analysis, tools used, the methodology, work performed during the summer, and future work planned.

  3. Development of automated system of heavy water analysis

    International Nuclear Information System (INIS)

    Fedorchenko, O.A.; Novozhilov, V.A.; Trenin, V.D.

    1993-01-01

    Application of traditional methods of qualitative and quantitative control of coolant (moderator) for the analysis of heavy water with high tritium content presents many difficulties and an inevitable accumulation of wastes that many facilities will not accept. This report describes an automated system for heavy water sampling and analysis

  4. Flow injection analysis: Emerging tool for laboratory automation in radiochemistry

    International Nuclear Information System (INIS)

    Egorov, O.; Ruzicka, J.; Grate, J.W.; Janata, J.

    1996-01-01

Automation of routine and serial assays is common practice in the modern analytical laboratory, while it is virtually nonexistent in the field of radiochemistry. Flow injection analysis (FIA) is a general solution handling methodology that has been extensively used for automation of routine assays in many areas of analytical chemistry. Reproducible automated solution handling and on-line separation capabilities are among several distinctive features that make FI a very promising, yet underutilized tool for automation in analytical radiochemistry. The potential of the technique is demonstrated through the development of an automated 90Sr analyzer and its application in the analysis of tank waste samples from the Hanford site. Sequential injection (SI), the latest generation of FIA, is used to rapidly separate 90Sr from interfering radionuclides and deliver the separated Sr zone to a flow-through liquid scintillation detector. The separation is performed on a mini column containing Sr-specific sorbent extraction material, which selectively retains Sr under acidic conditions. The 90Sr is eluted with water, mixed with scintillation cocktail, and sent through the flow cell of a flow-through counter, where 90Sr radioactivity is detected as a transient signal. Both peak area and peak height can be used for quantification of sample radioactivity. Alternatively, stopped-flow detection can be performed to improve detection precision for low-activity samples. The authors' current research activities are focused on expansion of radiochemical applications of FIA methodology, with an ultimate goal of creating a set of automated methods that will cover the basic needs of radiochemical analysis at the Hanford site. The results of preliminary experiments indicate that FIA is a highly suitable technique for the automation of chemically more challenging separations, such as separation of actinide elements.
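The peak-based quantification mentioned above (peak area and peak height of the transient detector signal) can be sketched as follows; this illustrates the general arithmetic only, not the authors' analyzer software, and the flat-baseline correction is a simplifying assumption:

```python
import numpy as np

def quantify_transient(signal, times, baseline=0.0):
    """Peak height and peak area of a flow-through detector transient.

    Both are baseline-corrected; the area uses the trapezoidal rule.
    """
    sig = np.asarray(signal, float) - baseline
    t = np.asarray(times, float)
    height = float(sig.max())
    area = float(np.sum(0.5 * (sig[1:] + sig[:-1]) * np.diff(t)))
    return height, area
```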

  5. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    Science.gov (United States)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts introduced are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.
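The ROC methodology used above boils down to ranking detector outputs on diseased versus healthy images; the area under the ROC curve can be computed directly via the Mann-Whitney statistic. A minimal, illustrative implementation (the study's detector and data are not reproduced here):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a positive (diseased) image scores higher
    than a negative (healthy) one, with ties counted as one half."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

Running this on detector scores at each JPEG quality level would quantify the degradation in discriminative ability the abstract reports.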

  6. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    Science.gov (United States)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in the fields of optical character recognition, tissue engineering and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
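Of the algorithms compared above, the Ridler (isodata) method is compact enough to sketch: starting from the global mean, iterate the threshold toward the midpoint of the two class means until it stabilises. This is the generic textbook form of the algorithm, not necessarily the exact implementation used in the study:

```python
import numpy as np

def ridler_threshold(image, tol=0.5):
    """Ridler-Calvard (isodata) automatic threshold.

    Iterates T <- (mean(below T) + mean(above T)) / 2 until the
    change falls below `tol`.
    """
    img = np.asarray(image, float).ravel()
    t = img.mean()
    while True:
        low, high = img[img <= t], img[img > t]
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```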

  7. Automated Recognition of Vegetation and Water Bodies on the Territory of Megacities in Satellite Images of Visible and IR Bands

    Science.gov (United States)

Mozgovoy, Dmitry K.; Hnatushenko, Volodymyr V.; Vasyliev, Volodymyr V.

    2018-04-01

    Vegetation and water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. A methodology of automated recognition of vegetation and water bodies on the territory of megacities in satellite images of sub-meter spatial resolution of the visible and IR bands is proposed. By processing multispectral images from the satellite SuperView-1A, vector layers of recognized plant and water objects were obtained. Analysis of the results of image processing showed a sufficiently high accuracy of the delineation of the boundaries of recognized objects and a good separation of classes. The developed methodology provides a significant increase of the efficiency and reliability of updating maps of large cities while reducing financial costs. Due to the high degree of automation, the proposed methodology can be implemented in the form of a geo-information web service functioning in the interests of a wide range of public services and commercial institutions.
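The abstract does not specify which spectral features drive the recognition; a common baseline for separating vegetation and water from visible and near-IR bands is spectral-index thresholding (NDVI/NDWI), sketched below as an assumption rather than the paper's method. The thresholds are illustrative:

```python
import numpy as np

def spectral_indices(green, red, nir, eps=1e-9):
    """NDVI and NDWI from green, red and near-infrared reflectance bands.

    NDVI = (NIR - Red) / (NIR + Red)     -> high over vegetation
    NDWI = (Green - NIR) / (Green + NIR) -> high over open water
    """
    green, red, nir = (np.asarray(b, float) for b in (green, red, nir))
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    return ndvi, ndwi

def classify(green, red, nir, veg_thr=0.3, water_thr=0.2):
    """Boolean vegetation / water masks from fixed index thresholds
    (placeholder values, not the paper's)."""
    ndvi, ndwi = spectral_indices(green, red, nir)
    return ndvi > veg_thr, ndwi > water_thr
```

The resulting masks could then be vectorized to produce the object layers described in the abstract.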

  8. Application of Automated Image-guided Patch Clamp for the Study of Neurons in Brain Slices.

    Science.gov (United States)

    Wu, Qiuyu; Chubykin, Alexander A

    2017-07-31

Whole-cell patch clamp is the gold-standard method to measure the electrical properties of single cells. However, the in vitro patch clamp remains a challenging and low-throughput technique due to its complexity and high reliance on user operation and control. This manuscript demonstrates an image-guided automatic patch clamp system for in vitro whole-cell patch clamp experiments in acute brain slices. Our system implements a computer vision-based algorithm to detect fluorescently labeled cells and to target them for fully automatic patching using a micromanipulator and internal pipette pressure control. The entire process is highly automated, with minimal requirements for human intervention. Real-time experimental information, including electrical resistance and internal pipette pressure, is documented electronically for future analysis and for optimization to different cell types. Although our system is described in the context of acute brain slice recordings, it can also be applied to the automated image-guided patch clamp of dissociated neurons, organotypic slice cultures, and other non-neuronal cell types.

  9. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, so omissions or miscalculations are very likely. This situation has fostered research into automated analysis methods to support analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  10. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    Science.gov (United States)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up by PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5 step algorithm: (i) The segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (either mean or max or peak) and TLG estimated by the segmented ROIs and DICOM header data provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer
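Step (v) above combines voxel counts and intensities with DICOM dose and weight data. A minimal sketch of the SUV/MTV/TLG arithmetic (the 41%-of-max threshold here is a common stand-in, not necessarily the adaptive rule the authors used):

```python
import numpy as np

def suv_image(activity_bqml, injected_dose_bq, weight_g):
    """Standardized uptake value: tissue activity normalised by
    injected dose per gram of body weight (unitless if tissue
    density is taken as 1 g/ml)."""
    return np.asarray(activity_bqml, float) / (injected_dose_bq / weight_g)

def mtv_tlg(suv, voxel_volume_ml, threshold_fraction=0.41):
    """Metabolic tumor volume and total lesion glycolysis.

    Voxels at or above a fraction of SUVmax form the metabolic
    volume; TLG = SUVmean(lesion) * MTV.
    """
    suv = np.asarray(suv, float)
    lesion = suv >= threshold_fraction * suv.max()
    mtv = lesion.sum() * voxel_volume_ml
    tlg = suv[lesion].mean() * mtv
    return mtv, tlg
```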

  11. Automation of reactor neutron activation analysis

    International Nuclear Information System (INIS)

    Pavlov, S.S.; Dmitriev, A.Yu.; Frontasyeva, M.V.

    2013-01-01

    The present status of the development of a software package designed for automation of NAA at the IBR-2 reactor of FLNP, JINR, Dubna, is reported. Following decisions adopted at the CRP Meeting in Delft, August 27-31, 2012, the missing tool - a sample changer - will be installed for NAA in compliance with the peculiar features of the radioanalytical laboratory REGATA at the IBR-2 reactor. The details of the design are presented. The software for operation with the sample changer consists of two parts. The first part is a user interface and the second one is a program to control the sample changer. The second part will be developed after installing the tool.

  12. Automated sensitivity analysis using the GRESS language

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies
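GRESS augments a FORTRAN model so that derivatives propagate alongside the normal calculated results. The same idea can be illustrated compactly with forward-mode automatic differentiation via dual numbers; this is a conceptual analogue in Python, not GRESS itself:

```python
class Dual:
    """Value-plus-derivative pair; arithmetic propagates d/dx alongside
    the normally calculated result, as a derivative-augmented model does."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def sensitivity(model, x):
    """Model output and its derivative with respect to x."""
    out = model(Dual(x, 1.0))
    return out.val, out.dot
```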

  13. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung cancer. METHODS: A total of 87 patients who underwent PET/CT examinations due to suspected lung cancer comprised the training group. The test group consisted of PET/CT images from 49 patients suspected with lung cancer. The consensus interpretations by two experienced physicians were used as the 'gold standard' image interpretation. The training group was used in the development of the automated method. The image processing techniques included algorithms for segmentation of the lungs based on the CT images and detection of lesions in the PET images. Lung boundaries from the CT images were used...

  14. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  15. Image analysis of PV module electroluminescence

    Science.gov (United States)

    Lai, T.; Ramirez, C.; Potter, B. G.; Simmons-Potter, K.

    2017-08-01

    Electroluminescence imaging can be used as a non-invasive method to spatially assess performance degradation in photovoltaic (PV) modules. Cells, or regions of cells, that do not produce an infra-red luminescence signal under electrical excitation indicate potential damage in the module. In this study, an Andor iKon-M camera and an image acquisition tool provided by Andor have been utilized to obtain electroluminescent images of a full-sized multicrystalline PV module at regular intervals throughout an accelerated lifecycle test (ALC) performed in a large-scale environmental degradation chamber. Computer aided digital image analysis methods were then used to automate degradation assessment in the modules. Initial preprocessing of the images was designed to remove both background noise and barrel distortion in the image data. Image areas were then mapped so that changes in luminescent intensity across both individual cells and the full module could be identified. Two primary techniques for image analysis were subsequently investigated. In the first case, pixel intensity distributions were evaluated over each individual PV cell and changes to the intensities of the cells over the course of an ALC test were evaluated. In the second approach, intensity line scans of each of the cells in a PV module were performed and variations in line scan data were identified during the module ALC test. In this report, both the image acquisition and preprocessing technique and the contribution of each image analysis approach to an assessment of degradation behavior will be discussed.
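The first analysis approach above, per-cell pixel-intensity statistics, can be sketched by splitting the (already undistorted and cropped) module image into a regular cell grid. The grid size and the median-based degradation rule below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def cell_intensity_stats(el_image, n_rows, n_cols):
    """Mean electroluminescence intensity per cell, with the module
    image split into a regular n_rows x n_cols grid of cells."""
    img = np.asarray(el_image, float)
    stats = np.empty((n_rows, n_cols))
    row_edges = np.linspace(0, img.shape[0], n_rows + 1).astype(int)
    col_edges = np.linspace(0, img.shape[1], n_cols + 1).astype(int)
    for i in range(n_rows):
        for j in range(n_cols):
            cell = img[row_edges[i]:row_edges[i + 1],
                       col_edges[j]:col_edges[j + 1]]
            stats[i, j] = cell.mean()
    return stats

def degraded_cells(stats, drop_fraction=0.5):
    """Cells whose mean intensity fell below a fraction of the
    module-wide median (illustrative degradation criterion)."""
    return stats < drop_fraction * np.median(stats)
```

Comparing the per-cell statistics between successive ALC intervals would track the intensity changes the study reports.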

  16. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Mei Zhan

    2015-04-01

Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues.
Using these examples as a
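The multi-tiered classification architecture described above can be sketched as a cascade in which each tier prunes the candidate set passed to the next. The stand-in tiers below are simple thresholds purely for illustration; the paper's tiers are trained SVM models:

```python
import numpy as np

def tiered_classify(features, tiers):
    """Multi-tiered classification: each tier is a function mapping a
    feature matrix to a boolean keep-mask; only samples accepted by
    every tier survive, mirroring a coarse-to-fine classifier cascade.
    Returns the indices of samples accepted by all tiers."""
    features = np.asarray(features, float)
    idx = np.arange(len(features))
    for tier in tiers:
        keep = tier(features[idx])
        idx = idx[keep]
        if idx.size == 0:
            break
    return idx

# Stand-in tiers (hypothetical thresholds, not the paper's SVMs):
tier1 = lambda X: X[:, 0] > 0.5   # coarse: bright-enough candidates
tier2 = lambda X: X[:, 1] < 2.0   # fine: plausible object size
```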

  17. Automated Outcome Classification of Computed Tomography Imaging Reports for Pediatric Traumatic Brain Injury.

    Science.gov (United States)

    Yadav, Kabir; Sarioglu, Efsun; Choi, Hyeong Ah; Cartwright, Walter B; Hinds, Pamela S; Chamberlain, James M

    2016-02-01

The authors have previously demonstrated highly reliable automated classification of free-text computed tomography (CT) imaging reports using a hybrid system that pairs linguistic (natural language processing) and statistical (machine learning) techniques. Although previously applied to identify the outcome of orbital fracture in unprocessed radiology reports from a clinical data repository, the performance has not been replicated for more complex outcomes. The objective was to validate automated outcome classification performance of a hybrid natural language processing (NLP) and machine learning system for brain CT imaging reports. The hypothesis was that the system would achieve adequate performance characteristics for identifying pediatric traumatic brain injury (TBI). This was a secondary analysis of a subset of 2,121 CT reports from the Pediatric Emergency Care Applied Research Network (PECARN) TBI study. For that project, radiologists dictated CT reports as free text, which were then deidentified and scanned as PDF documents. Trained data abstractors manually coded each report for TBI outcome. Text was extracted from the PDF files using optical character recognition. The data set was randomly split evenly for training and testing. Training patient reports were used as input to the Medical Language Extraction and Encoding (MedLEE) NLP tool to create structured output containing standardized medical terms and modifiers for negation, certainty, and temporal status. A random subset stratified by site was analyzed using descriptive quantitative content analysis to confirm identification of TBI findings based on the National Institute of Neurological Disorders and Stroke (NINDS) Common Data Elements project. Findings were coded for presence or absence, weighted by frequency of mentions, and past/future/indication modifiers were filtered. After combining with the manual reference standard, a decision tree classifier was created using data mining tools WEKA 3.7.5 and Salford Predictive Miner 7

  18. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  19. Feasibility and Accuracy of Automated Software for Transthoracic Three-Dimensional Left Ventricular Volume and Function Analysis: Comparisons with Two-Dimensional Echocardiography, Three-Dimensional Transthoracic Manual Method, and Cardiac Magnetic Resonance Imaging.

    Science.gov (United States)

    Tamborini, Gloria; Piazzese, Concetta; Lang, Roberto M; Muratori, Manuela; Chiorino, Elisa; Mapelli, Massimo; Fusini, Laura; Ali, Sarah Ghulam; Gripari, Paola; Pontone, Gianluca; Andreini, Daniele; Pepi, Mauro

    2017-11-01

Recently, a new automated software package (HeartModel) was developed to obtain three-dimensional (3D) left ventricular (LV) volumes using a model-based algorithm (MBA) with a "one-button" simple system and a user-adjustable slider. The aims of this study were to verify the feasibility and accuracy of the MBA in comparison with other commonly used imaging techniques in a large unselected population, to evaluate possible accuracy improvements from free operator border adjustments or changes of the slider's default position, and to identify differences in method accuracy related to specific pathologies. This prospective study included 200 consecutive patients. LV volumes and ejection fraction were obtained using the MBA and compared with the two-dimensional biplane method, the 3D full-volume (3DFV) modality, and, in 90 of 200 cases, cardiac magnetic resonance (CMR) measurements. To evaluate the optimal position of the slider with respect to the 3DFV and CMR modalities, a set of threefold cross-validation experiments was performed. Optimized and manually corrected LV volumes obtained using the MBA were also tested. Linear correlation and Bland-Altman analysis were used to assess intertechnique agreement. Automatic volumes were feasible in 194 patients (94.5%), with a mean processing time of 29 ± 10 sec. MBA-derived volumes correlated significantly with all evaluated methods, with slight overestimation of two-dimensional biplane measurements and slight underestimation of CMR measurements. The highest correlations were found between MBA and 3DFV measurements, with negligible differences in volumes (overestimation) and in LV ejection fraction (underestimation). Optimization of the user-adjustable slider position improved the correlation and markedly reduced the bias between the MBA and 3DFV or CMR. The accuracy of MBA volumes was lower in some pathologies owing to incorrect definition of the LV endocardium. The MBA is highly feasible, reproducible, and rapid, and it correlates

  20. Accurate automated apnea analysis in preterm infants.

    Science.gov (United States)

    Vergales, Brooke D; Paget-Brown, Alix O; Lee, Hoshik; Guin, Lauren E; Smoot, Terri J; Rusin, Craig G; Clark, Matthew T; Delos, John B; Fairchild, Karen D; Lake, Douglas E; Moorman, Randall; Kattwinkel, John

    2014-02-01

    In 2006 the apnea of prematurity (AOP) consensus group identified inaccurate counting of apnea episodes as a major barrier to progress in AOP research. We compared nursing records of AOP with events detected by a clinically validated computer algorithm that detects apnea from standard bedside monitors. Waveform, vital sign, and alarm data were collected continuously from all very low-birth-weight infants admitted over a 25-month period, analyzed for central apnea, bradycardia, and desaturation (ABD) events, and compared with nursing documentation collected from charts. Our algorithm defined apnea as a respiratory pause > 10 seconds accompanied by bradycardia and desaturation. Of the 3,019 nurse-recorded events, only 68% had any algorithm-detected ABD event. Of the 5,275 algorithm-detected prolonged apnea events > 30 seconds, only 26% had nurse-recorded documentation within 1 hour. Monitor alarms sounded in only 74% of algorithm-detected prolonged apnea events > 10 seconds. There were 8,190,418 monitor alarms of any description throughout the neonatal intensive care unit during the 747 days analyzed, or one alarm every 2 to 3 minutes per nurse. An automated computer algorithm for continuous ABD quantitation is a far more reliable tool than the medical record for addressing the important research questions identified by the 2006 AOP consensus group.
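
    The event definition quoted above (apnea > 10 seconds accompanied by bradycardia and desaturation) can be sketched as a scan over per-second monitor samples. The function and thresholds below are hypothetical illustrations, not the validated algorithm:

```python
def detect_abd_events(breathing_pause, heart_rate, spo2,
                      min_apnea_s=10, brady_bpm=100, desat_pct=80):
    """Flag seconds where an apneic pause exceeds min_apnea_s and is
    accompanied by bradycardia and desaturation.

    breathing_pause: per-second booleans (True = no breath detected);
    heart_rate / spo2: per-second vitals. Thresholds are assumptions."""
    events = []
    run = 0  # length of the current apneic pause, in seconds
    for t, pause in enumerate(breathing_pause):
        run = run + 1 if pause else 0
        if (run > min_apnea_s and heart_rate[t] < brady_bpm
                and spo2[t] < desat_pct):
            events.append(t)
    return events
```

    Each flagged second belongs to a qualifying ABD episode; grouping consecutive flags into discrete events would follow the same pattern.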

  1. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning.

    Science.gov (United States)

    Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang

    2017-11-13

    Prostate cancer (PCa) is a major cause of death, documented since ancient times in imaging of an Egyptian Ptolemaic mummy. PCa detection is critical to personalized medicine, and its appearance varies considerably on MRI. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning method using a deep convolutional neural network (DCNN) and a non-deep learning method using SIFT image features with a bag-of-words (BoW) model, a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007); the AUC of the non-deep learning method was 0.70 (95% CI 0.63-0.77). Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BC patients. Our deep learning method is extensible to image modalities such as MR imaging, CT, and PET of other organs.
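
    The AUC statistic being compared has a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counting one half). A small pure-Python sketch with made-up scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), counting ties as 0.5.

    Equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.5 corresponds to chance-level ranking; 1.0 means every positive outranks every negative.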

  2. Priming of pioneer tree Guazuma ulmifolia (Malvaceae) seeds evaluated by automated computer image analysis

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Santin Brancalion

    2010-01-01

    Direct seeding is one of the most promising methods in restoration ecology, but low field seedling emergence from pioneer tree seeds still reduces its large-scale applicability. The aim of this research was to evaluate seed priming for the pioneer tree species Guazuma ulmifolia. Priming treatments were selected based on seed hydration curves in water and in PEG 8000 solution. Seeds were primed in water for 16 h and in polyethylene glycol (PEG 8000, -0.8 MPa) for 56 and 88 h at 20ºC to reach approximately 30% water content. Half of the seed sample of each treatment was dried back to the initial moisture content (7.2%); both dried and non-dried primed seeds, as well as the unprimed seeds (control), were tested for germination (percentage and rate) and vigor (electrical conductivity of seed leachates). Seedling emergence percentage and rate were evaluated under greenhouse conditions, while seedling length and uniformity of seedling development were estimated using the automated image analysis software SVIS®. Primed seeds showed the highest physiological potential, which was mainly demonstrated by image analysis. Fresh or dried seeds primed in water for 16 h or in PEG (-0.8 MPa) for 56 h, and fresh seeds primed in PEG for 88 h, improved G. ulmifolia germination performance. These treatments appear promising for enhancing the efficiency of stand establishment of this species by direct seeding in restoration ecology programs.
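
    The "percentage and rate" germination metrics mentioned above are conventionally the final germination fraction and a speed index such as Maguire's (newly germinated seeds divided by the day of count, summed over counts). A sketch under that assumption, with hypothetical counts:

```python
def germination_percentage(germinated, total):
    """Final germination percentage from counts."""
    return 100.0 * germinated / total

def germination_speed_index(daily_counts):
    """Maguire-style germination speed index: sum over counting days of
    (seeds newly germinated that day) / (day number).
    daily_counts maps day -> newly germinated seeds."""
    return sum(n / day for day, n in daily_counts.items())

# Hypothetical counts: 10 seeds on day 1, 5 on day 2, 4 on day 4 (of 25 sown)
pct = germination_percentage(19, 25)
gsi = germination_speed_index({1: 10, 2: 5, 4: 4})
```

    A higher speed index for the same final percentage indicates earlier, more vigorous germination, which is the effect priming aims to produce.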

  3. Advancements in Automated Circuit Grouping for Intellectual Property Trust Analysis

    Science.gov (United States)

    2017-03-20

    Advancements in Automated Circuit Grouping for Intellectual Property Trust Analysis James Inge, Matthew Kwiec, Stephen Baka, John Hallman...module, a custom on- chip memory module, a custom arithmetic logic unit module, and a custom Ethernet frame check sequence generator module. Though

  4. An Automated Data Analysis Tool for Livestock Market Data

    Science.gov (United States)

    Williams, Galen S.; Raper, Kellie Curry

    2011-01-01

    This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…

  5. Automated procedure for performing computer security risk analysis

    International Nuclear Information System (INIS)

    Smith, S.T.; Lim, J.J.

    1984-05-01

    Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure. 12 references, 7 figures
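
    The combination of qualitative vulnerability and impact scores into a risk measure can be illustrated, very loosely, with an ordinal risk matrix. This toy rule is a minimal stand-in, not the linguistic algebra described in the paper:

```python
LEVELS = ["low", "medium", "high"]

def combine_risk(vulnerability, impact):
    """Toy ordinal combination: risk is the higher of the two ratings,
    bumped up one level when both are at least 'medium'.
    (Illustrative only; the paper's actual algebra differs.)"""
    v, i = LEVELS.index(vulnerability), LEVELS.index(impact)
    r = max(v, i)
    if min(v, i) >= 1 and r < len(LEVELS) - 1:
        r += 1
    return LEVELS[r]
```

    Applying such a rule across every threat-target pair from the questionnaire yields the spectrum of qualitative risk measures the abstract describes.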

  6. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    Science.gov (United States)

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
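
    Slide Set's core pattern, repeating a chain of analysis commands over each row of a data table while recording the parameters used, can be sketched generically. The command functions here are hypothetical placeholders, not the plugin's API:

```python
def scale(x):
    return 2 * x

def shift(x):
    return x + 1

def run_batch(table, commands):
    """Apply a chain of analysis commands to each row of a data table,
    recording the pipeline alongside results for reproducibility."""
    results = []
    for row in table:
        value = row
        for cmd in commands:
            value = cmd(value)
        results.append({"input": row, "output": value,
                        "pipeline": [c.__name__ for c in commands]})
    return results

results = run_batch([1, 2], [scale, shift])
```

    Keeping the pipeline description with every result is what makes the batch reproducible: the record shows exactly which commands, in which order, produced each output.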

  7. Automated Segmentation of Kidneys from MR Images in Patients with Autosomal Dominant Polycystic Kidney Disease.

    Science.gov (United States)

    Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T

    2016-04-07

    Our study developed a fully automated method for segmentation and volumetric measurement of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method against the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was based on a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints, formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: the Dice similarity coefficient and the intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for cross-validation and reanalyzed. Successful segmentation of kidneys was achieved with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97. The automated method thus segments kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes, with performance in good agreement with the manual method.
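
    The Dice similarity coefficient used as the performance metric is twice the overlap of two segmentations divided by the sum of their sizes. A minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1): 2*|A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

    A Dice of 1.0 means the automated and manual masks coincide exactly; the study's 0.88 ± 0.08 indicates strong but imperfect overlap.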

  8. Automated detection of diabetic retinopathy in retinal images

    Directory of Open Access Journals (Sweden)

    Carmen Valverde

    2016-01-01

    Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work aims to summarize the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies from the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  9. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Thompson, Adam B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Joshua L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  10. Automated morphometry of transgenic mouse brains in MR images

    NARCIS (Netherlands)

    Scheenstra, Alize Elske Hiltje

    2011-01-01

    Quantitative and local morphometry of mouse brain MRI is a relatively new field of research, where automated methods can be exploited to rapidly provide accurate and repeatable results. In this thesis we reviewed several existing methods and applications of quantitative morphometry to brain MR

  11. Large-Scale Automated Analysis of Location Patterns in Randomly-Tagged 3T3 Cells

    Science.gov (United States)

    Osuna, Elvira García; Hua, Juchang; Bateman, Nicholas W.; Zhao, Ting; Berget, Peter B.; Murphy, Robert F.

    2010-01-01

    Location proteomics is concerned with the systematic analysis of the subcellular location of proteins. In order to perform high-resolution, high-throughput analysis of all protein location patterns, automated methods are needed. Here we describe the use of such methods on a large collection of images obtained by automated microscopy to perform high-throughput analysis of endogenous proteins randomly-tagged with a fluorescent protein in NIH 3T3 cells. Cluster analysis was performed to identify the statistically significant location patterns in these images. This allowed us to assign a location pattern to each tagged protein without specifying what patterns are possible. To choose the best feature set for this clustering, we have used a novel method that determines which features do not artificially discriminate between control wells on different plates and uses Stepwise Discriminant Analysis (SDA) to determine which features do discriminate as much as possible among the randomly-tagged wells. Combining this feature set with consensus clustering methods resulted in 35 clusters among the first 188 clones we obtained. This approach represents a powerful automated solution to the problem of identifying subcellular locations on a proteome-wide basis for many different cell types. PMID:17285363

  12. Computerized detection of breast cancer on automated breast ultrasound imaging of women with dense breasts

    Energy Technology Data Exchange (ETDEWEB)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Sennett, Charlene A.; Giger, Maryellen L. [Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2014-01-15

    Purpose: Develop a computer-aided detection method and investigate its feasibility for detection of breast cancer in automated 3D ultrasound images of women with dense breasts. Methods: The HIPAA compliant study involved a dataset of volumetric ultrasound image data, "views," acquired with an automated U-Systems Somo•V® ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of "marks" (detections) per view. Results: At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2—similar to radiologists' performance sensitivity (49.9%) for this dataset from a prior reader study—and 45.9% (28/61) ± 4% for all patients. Conclusions: Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.

  13. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features...

  14. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    In contrast to classical Fourier analysis, time–frequency analysis is concerned with localized Fourier transforms. Gabor analysis is an important branch of time–frequency analysis. Although significantly different, it shares with the wavelet transform methods the ability to describe the smoothness… It characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data…

  15. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    While novel whole-plant phenotyping technologies have been successfully implemented in functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.

  16. Automated analysis and design of complex structures

    International Nuclear Information System (INIS)

    Wilson, E.L.

    1977-01-01

    The present application of optimum design appears to be restricted to components of the structure rather than the total structural system. Since design normally involves many analyses of the system, any improvement in the efficiency of the basic methods of analysis will allow more complicated systems to be designed by optimum methods. The evaluation of the risk and reliability of a structural system can be extremely important. Reliability studies have been made of many non-structural systems for which the individual components have been extensively tested and the service environment is known. For such systems the reliability studies are valid. For most structural systems, however, the properties of the components can only be estimated, and statistical data associated with the potential loads are often minimal. Also, a potentially critical loading condition may be completely neglected in the study. For these reasons, and because of the problems associated with the reliability of both linear and nonlinear analysis computer programs, it appears premature to place significant value on such studies for complex structures. With these comments as background, the purpose of this paper is to discuss the following: the relationship of analysis to design; new methods of analysis; new or improved finite elements; the effect of minicomputers on structural analysis methods; the use of systems of microprocessors for nonlinear structural analysis; and the role of interactive graphics systems in future analysis and design. This discussion focuses on the impact of new, inexpensive computer hardware on design and analysis methods.

  17. Automated haematology analysis to diagnose malaria

    Directory of Open Access Journals (Sweden)

    Grobusch Martin P

    2010-11-01

    For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for adequate treatment and for minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers studied so far for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities, have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, pseudoeosinophilia, and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy of malaria diagnosis may vary according to species, parasite load, immunity and the clinical context where the
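
    The sensitivities and specificities quoted above come from a 2×2 confusion table of diagnostic outcomes. A minimal sketch; the counts are made up for illustration:

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity and specificity from a 2x2 confusion table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Hypothetical counts: 95 true positives, 5 missed cases,
# 96 true negatives, 4 false alarms
perf = diagnostic_performance(tp=95, fp=4, tn=96, fn=5)
```

    A sensitivity at or above 0.95 in samples with > 100 parasites/μl is what the WHO guideline cited in the abstract requires.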

  18. A Semi-automated Approach to Improve the Efficiency of Medical Imaging Segmentation for Haptic Rendering.

    Science.gov (United States)

    Banerjee, Pat; Hu, Mengqi; Kannan, Rahul; Krishnaswamy, Srinivasan

    2017-08-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images. 3D models are obtained from segmentation and their triangle reduction is required for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on the pixel contrast for integrating the interface between Sensimmer and medical imaging devices, using the volumetric approach, Hough transform method, and manual centering method. Hence, automating the process has reduced the segmentation time by 56.35% while maintaining the same accuracy of the output at ±2 voxels.

  19. MRF-ANN: a machine learning approach for automated ER scoring of breast cancer immunohistochemical images.

    Science.gov (United States)

    Mungle, T; Tewary, S; DAS, D K; Arun, I; Basak, B; Agarwal, S; Ahmed, R; Chatterjee, S; Chakraborty, C

    2017-08-01

    Molecular pathology, especially immunohistochemistry, plays an important role in evaluating hormone receptor status along with the diagnosis of breast cancer. Time consumption and inter-/intraobserver variability are major hindrances in evaluating the receptor score. In view of this, the paper proposes an automated Allred scoring methodology for the estrogen receptor (ER). White balancing is used to normalize the colour image, taking into consideration colour variation during staining in different labs. A Markov random field model with expectation-maximization optimization is employed to segment the ER cells. The proposed segmentation methodology is found to have an F-measure of 0.95. An artificial neural network is subsequently used to obtain an intensity-based score for ER cells from pixel colour intensity features. Simultaneously, the proportion score - the percentage of ER-positive cells - is computed via cell counting. The final ER score is computed by adding the intensity and proportion scores - the standard Allred scoring system followed by pathologists. The classification accuracy for the classification of cells by the classifier in terms of F-measure is 0.9626. The problem of subjective interobserver variability is addressed by comparing ER scores quantified by two expert pathologists and the proposed methodology; the intraclass correlation achieved is greater than 0.90. The study has the potential advantage of assisting pathologists in decision making over the manual procedure and could evolve as part of an automated decision support system together with other receptor scoring/analysis procedures. © 2017 The Authors. Journal of Microscopy © 2017 Royal Microscopical Society.
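
    The final ER score described above follows the standard Allred system: a proportion score from 0 to 5, binned by the percentage of positive cells, plus an intensity score from 0 to 3, for a total of 0 to 8. The bin thresholds below are the standard published Allred cut-offs, not taken from this paper's code:

```python
def proportion_score(pct_positive):
    """Allred proportion score from the percentage of ER-positive cells.
    Standard bins: 0 (none), 1 (<1%), 2 (1-10%), 3 (11-33%),
    4 (34-66%), 5 (>66%)."""
    if pct_positive == 0:
        return 0
    if pct_positive < 1:
        return 1
    if pct_positive <= 10:
        return 2
    if pct_positive <= 33:
        return 3
    if pct_positive <= 66:
        return 4
    return 5

def allred_score(pct_positive, intensity_score):
    """Total Allred score = proportion score (0-5) + intensity score (0-3)."""
    assert 0 <= intensity_score <= 3
    return proportion_score(pct_positive) + intensity_score
```

    In the paper's pipeline, the intensity score comes from the neural network and the percentage from cell counting; the addition step is this simple.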

  20. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing… to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case…

  1. Calibration of a semi-automated segmenting method for quantification of adipose tissue compartments from magnetic resonance images of mice.

    Science.gov (United States)

    Garteiser, Philippe; Doblas, Sabrina; Towner, Rheal A; Griffin, Timothy M

    2013-11-01

    To use an automated water-suppressed magnetic resonance imaging (MRI) method to objectively assess adipose tissue (AT) volumes in the whole body and specific regional body compartments (subcutaneous, thoracic and peritoneal) of obese and lean mice. Water-suppressed MR images were obtained on a 7T, horizontal-bore MRI system in whole bodies (excluding head) of 26-week-old male C57BL/6J mice fed a control (10% kcal fat) or high-fat diet (60% kcal fat) for 20 weeks. Manual (outlined regions) versus automated (Gaussian fitting applied to threshold-weighted images) segmentation procedures were compared for whole body AT and regional AT volumes (i.e., subcutaneous, thoracic, and peritoneal). The AT automated segmentation method was compared to dual-energy X-ray (DXA) analysis. The average AT volumes for whole body and individual compartments correlated well between the manual outlining and the automated methods (R2>0.77, p<0.05). Subcutaneous, peritoneal, and total body AT volumes were increased 2-3 fold and thoracic AT volume increased more than 5-fold in diet-induced obese mice versus controls (p<0.05). MRI and DXA-based method comparisons were highly correlated (R2=0.94, p<0.0001). Automated AT segmentation of water-suppressed MRI data using a global Gaussian filtering algorithm resulted in a fairly accurate assessment of total and regional AT volumes in a pre-clinical mouse model of obesity. © 2013 Elsevier Inc. All rights reserved.
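
    The R² values reported for the MRI-vs-manual and MRI-vs-DXA comparisons are squares of the Pearson correlation coefficient. A minimal pure-Python sketch; the example values are illustrative:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of paired samples x, y
    (assumes both have nonzero variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# R-squared is simply the square of r
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])
r_squared = r * r
```

    High R² indicates that the two measurement methods rank and scale the AT volumes consistently, though (unlike Bland-Altman analysis) it says nothing about systematic bias.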

  2. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations of solar-type stars has revealed a need for automated analysis tools. The reason is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open the possibility to do population studies on large samples of stars, and such population studies demand a consistent analysis. By consistent analysis we understand an analysis that can be performed without the need to make any subjective choices on e.g. mode identification, and an analysis where the uncertainties…

  3. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology for producing a panorama or larger image by combining several images with overlapping areas. In many biomedical studies, image stitching is highly desirable for acquiring a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with high repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract image features in the rough overlapping areas. Fourth, the features were matched and the transformation parameters were estimated; the images were then blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images, and that our method is able to stitch microscope images automatically with high precision and high speed. The method proposed in this paper is also applicable to registration and stitching of common images, as well as to stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
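
The phase-correlation step used to find the rough overlapping zones can be sketched with plain FFTs; this is a generic textbook version for integer translations, not the authors' code:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two same-size images via
    phase correlation: the normalized cross-power spectrum's inverse FFT
    peaks at the relative shift."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R = F / (np.abs(F) + 1e-12)               # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks in the far half back to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
base = rng.random((128, 128))
shifted = np.roll(base, (7, -11), axis=(0, 1))    # known circular shift
print(phase_correlation_shift(shifted, base))     # prints (7, -11)
```

In a real stitching pipeline the recovered shift only localizes the overlap zone; feature matching then refines the transformation.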

  4. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    Science.gov (United States)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
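
The iterative RMS-estimation used above to avoid clean bias amounts to sigma-clipped noise estimation: bright sources are discarded each round so they do not inflate the noise estimate. A generic sketch of the idea, not chimenea's actual code:

```python
import numpy as np

def clipped_rms(image, k=3.0, iters=5):
    """Sigma-clipped RMS: repeatedly drop pixels more than k*rms from the
    mean, then re-estimate, so sources do not bias the noise level.
    Illustrative parameters; not taken from the AMIsurvey pipeline."""
    data = image.ravel().astype(float)
    for _ in range(iters):
        mu, rms = data.mean(), data.std()
        data = data[np.abs(data - mu) < k * rms]
    return float(data.std())

rng = np.random.default_rng(5)
sky = rng.normal(0.0, 1.0, (100, 100))   # unit-noise "residual map"
sky[50, 50] = 500.0                      # one bright source
print(round(clipped_rms(sky), 2))        # close to the true noise level of 1.0
```

The naive standard deviation of this map is inflated roughly fivefold by the single source; the clipped estimate stays near the true value.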

  5. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.

  6. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics that focus on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  7. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscle (RBC) counting has been developed using light microscopic images of RBCs and a Matlab algorithm. The dataset consists of RBC images and their corresponding segmented images. A detailed description using a flow chart is given to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.
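
Counting cells from a binary mask, the final step described above, reduces to labeling connected components. A minimal BFS sketch (the dataset's own pipeline is in Matlab; this is a generic illustration):

```python
import numpy as np
from collections import deque

def count_cells(mask):
    """Count connected foreground blobs (4-connectivity) in a binary mask
    by breadth-first flood fill over unvisited foreground pixels."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        count += 1                       # new blob found; flood-fill it
        queue = deque([(y, x)])
        seen[y, x] = True
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return count

demo = np.zeros((10, 10), int)
demo[1:3, 1:3] = 1      # first "cell"
demo[6:9, 5:8] = 1      # second "cell"
print(count_cells(demo))  # prints 2
```

Real RBC counting additionally needs splitting of touching cells (e.g., watershed), which this sketch omits.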

  8. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    International Nuclear Information System (INIS)

    Gratama van Andel, Hugo A.F.; Meijering, Erik; Vrooman, Henri A.; Stokking, Rik; Lugt, Aad van der; Monye, Cecile de

    2006-01-01

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  9. Automated genome sequence analysis and annotation.

    Science.gov (United States)

    Andrade, M A; Brown, N P; Leroy, C; Hoersch, S; de Daruvar, A; Reich, C; Franchini, A; Tamames, J; Valencia, A; Ouzounis, C; Sander, C

    1999-05-01

    Large-scale genome projects generate a rapidly increasing number of sequences, most of them biochemically uncharacterized. Research in bioinformatics contributes to the development of methods for the computational characterization of these sequences. However, the installation and application of these methods require experience and are time consuming. We present here an automatic system for preliminary functional annotation of protein sequences that has been applied to the analysis of sets of sequences from complete genomes, both to refine overall performance and to make new discoveries comparable to those made by human experts. The GeneQuiz system includes a Web-based browser that allows examination of the evidence leading to an automatic annotation and offers additional information, views of the results, and links to biological databases that complement the automatic analysis. System structure and operating principles concerning the use of multiple sequence databases, underlying sequence analysis tools, lexical analyses of database annotations and decision criteria for functional assignments are detailed. The system makes automatic quality assessments of results based on prior experience with the underlying sequence analysis tools; overall error rates in functional assignment are estimated at 2.5-5% for cases annotated with highest reliability ('clear' cases). Sources of over-interpretation of results are discussed with proposals for improvement. A conservative definition for reporting 'new findings' that takes account of database maturity is presented along with examples of possible kinds of discoveries (new function, family and superfamily) made by the system. System performance in relation to sequence database coverage, database dynamics and database search methods is analysed, demonstrating the inherent advantages of an integrated automatic approach using multiple databases and search methods applied in an objective and repeatable manner. 
The GeneQuiz system

  10. Basic research planning in mathematical pattern recognition and image analysis

    Science.gov (United States)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene interference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  11. Sci-Thur AM: YIS – 08: Automated Imaging Quality Assurance for Image-Guided Small Animal Irradiators

    International Nuclear Information System (INIS)

    Johnstone, Chris; Bazalova-Carter, Magdalena

    2016-01-01

    Purpose: To develop quality assurance (QA) standards and tolerance levels for image quality of small animal irradiators. Methods: A fully automated in-house QA software for image analysis of a commercial microCT phantom was created. Quantitative analyses of CT linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, modulation transfer function (MTF), and CT number evaluation were performed. Phantom microCT scans from seven institutions acquired with varying parameters (kVp, mA, time, voxel size, and frame rate) and five irradiator units (Xstrahl SARRP, PXI X-RAD 225Cx, PXI X-RAD SmART, GE explore CT/RT 140, and GE Explore CT 120) were analyzed. Multi-institutional data sets were compared using our in-house software to establish pass/fail criteria for each QA test. Results: CT linearity (R2>0.996) was excellent at all but Institution 2. Acceptable SNR (>35) and noise levels (<55 HU) were obtained at four of the seven institutions, where failing scans were acquired with less than 120 mAs. Acceptable MTF (>1.5 lp/mm for MTF=0.2) was obtained at all but Institution 6 due to the largest scan voxel size (0.35 mm). The geometric accuracy passed (<1.5%) at five of the seven institutions. Conclusion: Our QA software can be used to rapidly perform quantitative imaging QA for small animal irradiators, accumulate results over time, and display possible changes in imaging functionality from its original performance and/or from the recommended tolerance levels. This tool will aid researchers in maintaining high image quality, enabling precise conformal dose delivery to small animals.
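
The SNR and noise checks with their pass/fail limits (SNR > 35, noise < 55 HU) can be computed from a uniform region of interest as below; the ROI placement and the synthetic phantom values are assumptions for illustration only:

```python
import numpy as np

def uniformity_qa(phantom_slice, roi=slice(24, 40)):
    """Compute SNR and noise from a central ROI of a uniform phantom slice.
    SNR = mean/std of the ROI; noise = std. ROI choice is an assumption."""
    region = phantom_slice[roi, roi].astype(float)
    mean, noise = region.mean(), region.std()
    snr = mean / noise if noise else float("inf")
    return snr, noise

# Synthetic uniform slice: mean signal 1000, noise sigma 20 (arbitrary units).
rng = np.random.default_rng(2)
phantom = rng.normal(1000.0, 20.0, (64, 64))
snr, noise = uniformity_qa(phantom)
print(round(snr, 1), round(noise, 1))
```

With these synthetic values the slice easily passes an SNR > 35 criterion; a real QA tool would log the numbers over time to catch drift.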

  12. Tank Farm Operations Surveillance Automation Analysis

    International Nuclear Information System (INIS)

    MARQUEZ, D.L.

    2000-01-01

    The Nuclear Operations Project Services identified the need to improve manual tank farm surveillance data collection, review, distribution and storage practices, often referred to as Operator Rounds. This document provides the analysis, in terms of feasibility, of improving the manual data collection methods by using handheld computer units, barcode technology, a database for storage and acquisition, associated software, and operational procedures to increase the efficiency of Operator Rounds associated with surveillance activities.

  13. Automated software analysis of nuclear core discharge data

    International Nuclear Information System (INIS)

    Larson, T.W.; Halbig, J.K.; Howell, J.A.; Eccleston, G.W.; Klosterbuer, S.F.

    1993-03-01

    Monitoring the fueling process of an on-load nuclear reactor is a full-time job for nuclear safeguarding agencies. Nuclear core discharge monitors (CDMs) can provide continuous, unattended recording of the reactor's fueling activity for later, qualitative review by a safeguards inspector. A quantitative analysis of this collected data could prove to be a great asset to inspectors because more information can be extracted from the data and the analysis time can be reduced considerably. This paper presents a prototype for an automated software analysis system capable of identifying when fuel bundle pushes occurred and monitoring the power level of the reactor. Neural network models were developed for calculating the region on the reactor face from which the fuel was discharged and for predicting the burnup. These models were created and tested using actual data collected from a CDM system at an on-load reactor facility. Collectively, these automated quantitative analysis programs could help safeguarding agencies gain a better perspective on the complete picture of the fueling activity of an on-load nuclear reactor. This type of system can provide a cost-effective solution for automated monitoring of on-load reactors, significantly reducing time and effort.

  14. Micro photometer's automation for quantitative spectrograph analysis

    International Nuclear Information System (INIS)

    Gutierrez E, C.Y.A.

    1996-01-01

    A microphotometer is used to increase the sharpness of dark spectral lines. By analyzing these lines, a sample's content and concentration can be determined; this analysis is known as quantitative spectrographic analysis. Quantitative spectrographic analysis is carried out in three steps, as follows. 1. Emulsion calibration. This consists of calibrating a photographic emulsion to determine the intensity variations in terms of the incident radiation. For the emulsion calibration procedure, a least-squares adjustment is applied to the data obtained in order to produce a graph. It is then possible to determine the density of a dark spectral line against the incident light intensity shown by the microphotometer. 2. Working curves. The values of known concentrations of an element are plotted against incident light intensity. Since the sample contains several elements, it is necessary to find a working curve for each of them. 3. Analytical results. The calibration curve and working curves are compared and the concentration of the studied element is determined. The automatic data acquisition, calculation and production of results are done by means of a computer (PC) and a computer program. The signal-conditioning circuits have the function of delivering TTL (Transistor-Transistor Logic) levels to make communication between the microphotometer and the computer possible. Data calculation is done using a computer program.
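
Step 1, the emulsion calibration by least squares, can be sketched as a straight-line fit of density versus log exposure. The data values and the linear model are illustrative assumptions (real emulsion characteristic curves are linear only over a mid-range of exposures):

```python
import numpy as np

# Hypothetical calibration measurements: optical density at known log exposures.
log_exposure = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])
density      = np.array([0.21, 0.52, 0.79, 1.12, 1.40, 1.71])

# Least-squares straight-line fit: density ≈ slope * log_exposure + intercept.
slope, intercept = np.polyfit(log_exposure, density, 1)
predicted = slope * log_exposure + intercept
rms_error = float(np.sqrt(np.mean((predicted - density) ** 2)))
print(round(slope, 2), round(intercept, 2))
```

The fitted line is the calibration graph; inverting it converts a measured line density back to an incident intensity for use with the working curves.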

  15. A Robust Actin Filaments Image Analysis Framework.

    Directory of Open Access Journals (Sweden)

    Mitchel Alioscha-Perez

    2016-08-01

    Full Text Available The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part; (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in
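
A single-scale stand-in for the multi-scale line detector in step (ii) is the negative Laplacian, which responds strongly on thin bright ridges. This is a didactic sketch, not the authors' detector:

```python
import numpy as np

def ridge_response(image):
    """Crude single-scale line detector: the negative Laplacian (sum of
    second derivatives) is large on thin bright ridges such as filaments.
    A stand-in for a proper multi-scale detector, at one scale only."""
    gy, gx = np.gradient(image.astype(float))
    gyy = np.gradient(gy, axis=0)
    gxx = np.gradient(gx, axis=1)
    return -(gxx + gyy)          # peaks on bright line-like structures

canvas = np.zeros((32, 32))
canvas[16, 4:28] = 1.0           # a horizontal one-pixel "filament"
resp = ridge_response(canvas)
print(resp[16, 16] > resp[8, 16])   # ridge pixel responds more than background
```

A multi-scale detector would repeat this after Gaussian smoothing at several widths and keep the maximum response, so filaments of varying thickness are all captured.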

  16. Automated, non-linear registration between 3-dimensional brain map and medical head image

    International Nuclear Information System (INIS)

    Mizuta, Shinobu; Urayama, Shin-ichi; Zoroofi, R.A.; Uyama, Chikao

    1998-01-01

    In this paper, we propose an automated, non-linear registration method between 3-dimensional medical head images and a brain map in order to efficiently extract regions of interest. In our method, the input 3-dimensional image is registered to a reference image extracted from a brain map. The problems to be solved are the automated, non-linear image matching procedure and the cost function which represents the similarity between the two images. Non-linear matching is carried out by dividing the input image into connected partial regions, transforming the partial regions while preserving connectivity among adjacent regions, evaluating the image similarity between the transformed regions of the input image and the corresponding regions of the reference image, and iteratively searching for the optimal transformation of the partial regions. In order to measure the voxelwise similarity of multi-modal images, a cost function based on mutual information is introduced. Experiments using MR images demonstrated the effectiveness of the proposed method. (author)
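
The mutual-information cost function mentioned above can be estimated from a joint intensity histogram. A minimal sketch, with an assumed bin count of 32:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) of two equally shaped images, estimated
    from their joint histogram; higher means more statistical dependence,
    which is why it works across imaging modalities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
aligned = 1.0 - img                    # perfectly dependent, inverted "contrast"
unrelated = rng.random((64, 64))       # statistically independent image
print(mutual_information(img, aligned) > mutual_information(img, unrelated))
```

Note the inverted-contrast image still scores high: mutual information rewards any deterministic intensity relationship, not just identical gray values.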

  17. Automated melanoma detection with a novel multispectral imaging system: results of a prospective study

    International Nuclear Information System (INIS)

    Tomatis, Stefano; Carrara, Mauro; Bono, Aldo; Bartoli, Cesare; Lualdi, Manuela; Tragni, Gabrina; Colombo, Ambrogio; Marchesini, Renato

    2005-01-01

    The aim of this research was to evaluate the performance of a new spectroscopic system in the diagnosis of melanoma. This study involves a consecutive series of 1278 patients with 1391 cutaneous pigmented lesions including 184 melanomas. In an attempt to approach the 'real world' of lesion population, a further set of 1022 not excised clinically reassuring lesions was also considered for analysis. Each lesion was imaged in vivo by a multispectral imaging system. The system operates at wavelengths between 483 and 950 nm by acquiring 15 images at equally spaced wavelength intervals. From the images, different lesion descriptors were extracted related to the colour distribution and morphology of the lesions. Data reduction techniques were applied before setting up a neural network classifier designed to perform automated diagnosis. The data set was randomly divided into three sets: train (696 lesions, including 90 melanomas) and verify (348 lesions, including 53 melanomas) for the instruction of a proper neural network, and an independent test set (347 lesions, including 41 melanomas). The neural network was able to discriminate between melanomas and non-melanoma lesions with a sensitivity of 80.4% and a specificity of 75.6% in the 1391 histologized cases data set. No major variations were found in classification scores when train, verify and test subsets were separately evaluated. Following receiver operating characteristic (ROC) analysis, the resulting area under the curve was 0.85. No significant differences were found among areas under train, verify and test set curves, supporting the good network ability to generalize for new cases. In addition, specificity and area under ROC curve increased up to 90% and 0.90, respectively, when the additional set of 1022 lesions without histology was added to the test set. Our data show that performance of an automated system is greatly population dependent, suggesting caution in the comparison with results reported in the
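
The area under the ROC curve reported above (0.85) can be computed without plotting the curve, via the Mann-Whitney pairwise-comparison identity; the toy scores below are invented for illustration:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive (melanoma) outscores a
    randomly chosen negative (benign lesion); ties count half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))

# Hypothetical classifier outputs: melanomas (label 1) tend to score higher.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
print(roc_auc(scores, labels))   # prints 0.75
```

An AUC of 0.5 would mean chance-level ranking; 1.0 means every melanoma outscores every benign lesion.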

  18. Automated reasoning applications to design validation and sneak function analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Argonne National Laboratory (ANL) is actively involved in the LMFBR Man-Machine Integration (MMI) Safety Program. The objective of this program is to enhance the operational safety and reliability of fast-breeder reactors by optimum integration of men and machines through the application of human factors principles and control engineering to the design, operation, and the control environment. ANL is developing methods to apply automated reasoning and computerization in the validation and sneak function analysis process. This project provides the element definitions and relations necessary for an automated reasoner (AR) to reason about design validation and sneak function analysis. This project also provides a demonstration of this AR application on an Experimental Breeder Reactor-II (EBR-II) system, the Argonne Cooling System

  19. Automated three-dimensional X-ray analysis using a dual-beam FIB

    International Nuclear Information System (INIS)

    Schaffer, Miroslava; Wagner, Julian; Schaffer, Bernhard; Schmied, Mario; Mulders, Hans

    2007-01-01

    We present a fully automated method for three-dimensional (3D) elemental analysis, demonstrated using a ceramic sample of chemistry (Ca)MgTiO_x. The specimen is serially sectioned by a focused ion beam (FIB) microscope, and energy-dispersive X-ray spectrometry (EDXS) is used for elemental analysis of each cross-section created. A 3D elemental model is reconstructed from the stack of two-dimensional (2D) data. This work concentrates on issues arising from process automation, the large sample volume of approximately 17 × 17 × 10 μm³, and the insulating nature of the specimen. A new routine for post-acquisition data correction of different drift effects is demonstrated. Furthermore, it is shown that EDXS data may be erroneous for specimens containing voids, and that back-scattered electron images have to be used to correct for these errors.

  20. Experience based ageing analysis of NPP protection automation in Finland

    International Nuclear Information System (INIS)

    Simola, K.

    2000-01-01

    This paper describes three successive studies on the ageing of protection automation in nuclear power plants. These studies were aimed at developing a methodology for experience-based ageing analysis, and at applying it to identify the most critical components from the ageing and safety points of view. The analyses also resulted in suggestions for improving data collection systems for the purpose of further ageing analyses. (author)

  1. A new automated assessment method for contrast–detail images by applying support vector machine and its robustness to nonlinear image processing

    International Nuclear Information System (INIS)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kumiharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-01-01

    The automated contrast–detail (C–D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction. We have therefore devised a new automated C–D analysis method based on a support vector machine (SVM), and tested its robustness to nonlinear image processing. We acquired images of the CDRAD phantom (a commercially available C–D test object) at a tube voltage of 120 kV and milliampere-second products (mAs) of 0.5–5.0. A diffusion equation based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced counterparts. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C–D diagrams (plots of the mean smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with those averaged across the six human observers for both the original and the noise-reduced CDRAD images, whereas the mean C–D diagrams from the CDRAD Analyser algorithm disagreed with those from the human observers for both image sets. In conclusion, our proposed SVM method for C–D analysis works well for images processed with a nonlinear noise reduction method as well as for the original radiographic images.

  2. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.

    Science.gov (United States)

    Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann

    2017-04-01

    Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking the first in classification and the second in segmentation among 25 teams and 28 teams
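
The residual learning idea applied above, letting each layer output F(x) + x so that identity mappings remain easy even in very deep stacks, can be shown in miniature. The shapes and the tanh activation are arbitrary illustrative choices, not the paper's architecture:

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """Minimal residual connection: the layer learns a residual F(x) and
    outputs F(x) + x, so a do-nothing layer is trivially the identity."""
    return activation(x @ weight) + x    # skip connection adds the input back

rng = np.random.default_rng(4)
x = rng.normal(size=(2, 8))
w_zero = np.zeros((8, 8))                # an "unlearned" layer...
out = residual_block(x, w_zero)          # ...still passes the input through
print(np.allclose(out, x))               # prints True
```

Because unlearned blocks default to the identity, stacking many of them does not degrade the signal, which is what makes 50-plus-layer networks trainable.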

  3. Automated Analysis of Security in Networking Systems

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2004-01-01

    It has for a long time been a challenge to build secure networking systems. One way to counter this problem is to provide developers of software applications for networking systems with easy-to-use tools that can check security properties before the applications ever reach the market. These tools...... will both help raise the general level of awareness of the problems and prevent the most basic flaws from occurring. This thesis contributes to the development of such tools. Networking systems typically try to attain secure communication by applying standard cryptographic techniques. In this thesis...... attacks, and attacks launched by insiders. Finally, the perspectives for the application of the analysis techniques are discussed, thereby coming a small step closer to providing developers with easy-to-use tools for validating the security of networking applications....

  4. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  5. Comparison of the automated evaluation of phantom mama in digital and digitalized images; Comparacao da avaliacao automatizada do phantom mama em imagens digitais e digitalizadas

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo, E-mail: pcs@cdtn.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear. Programa de Pos-Graduacao em Ciencias e Tecnicas Nucleares; Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Medicina. Dept. de Propedeutica Complementar; Gomes, Danielle Soares; Oliveira, Marcio Alves; Nogueira, Maria do Socorro, E-mail: mnogue@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    Mammography is an essential tool for the diagnosis and early detection of breast cancer, provided it is delivered as a very good quality service. The process of evaluating the quality of radiographic images in general, and of mammography in particular, can be much more accurate, practical and fast with the help of computer analysis tools. This work compares an automated methodology for the evaluation of digital and digitized images of the phantom mama. By applying digital image processing (DIP) techniques, it was possible to determine geometric and radiometric parameters of the evaluated images. The evaluated parameters include circular details of low contrast, contrast ratio, spatial resolution, tumor masses, optical density and background in scanned and digitized Phantom Mama images. The results for both types of images were compared. Through this comparison it was possible to demonstrate that the automated methodology is a promising alternative for reducing or eliminating subjectivity in both types of images, although the Phantom Mama presents insufficient parameters for spatial resolution evaluation. (author)

  6. Automated Quantitative Bone Analysis in In Vivo X-ray Micro-Computed Tomography.

    Science.gov (United States)

    Behrooz, Ali; Kask, Peet; Meganck, Jeff; Kempner, Joshua

    2017-09-01

    Measurement and analysis of bone morphometry in 3D micro-computed tomography volumes using automated image processing and analysis improve the accuracy, consistency, reproducibility, and speed of preclinical osteological research studies. Automating segmentation and separation of individual bones in 3D micro-computed tomography volumes of murine models presents significant challenges considering partial volume effects and joints with thin spacing, i.e., 50 to [Formula: see text]. In this paper, novel hybrid splitting filters are presented to overcome the challenge of automated bone separation. This is achieved by enhancing joint contrast using rotationally invariant second-derivative operators. These filters generate split components that seed marker-controlled watershed segmentation. In addition, these filters can be used to separate metaphysis and epiphysis in long bones, e.g., femur, and remove the metaphyseal growth plate from the detected bone mask in morphometric measurements. Moreover, for slice-by-slice stereological measurements of long bones, particularly curved bones, such as tibia, the accuracy of the analysis can be improved if the planar measurements are guided to follow the longitudinal direction of the bone. In this paper, an approach is presented for characterizing the bone medial axis using morphological thinning and centerline operations. Building upon the medial axis, a novel framework is presented to automatically guide stereological measurements of long bones and enhance measurement accuracy and consistency. These image processing and analysis approaches are combined in an automated streamlined software workflow and applied to a range of in vivo micro-computed tomography studies for validation.
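
    The splitting-filter idea described above can be illustrated with a deliberately simplified one-dimensional sketch (numpy only; the profile values, curvature weight, and threshold are illustrative assumptions, not the paper's actual filters): a thin partial-volume joint survives plain thresholding but is suppressed once local curvature (the 1-D analog of a rotationally invariant second-derivative operator) is subtracted.

```python
import numpy as np

def second_derivative(x):
    # discrete 1-D Laplacian, the 1-D analog of a rotationally
    # invariant second-derivative operator
    d2 = np.zeros_like(x)
    d2[1:-1] = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return d2

def count_components(mask):
    # number of connected runs of True in a 1-D boolean mask
    m = mask.astype(int)
    return int(np.sum(np.diff(np.concatenate(([0], m))) == 1))

# two "bones" separated by a thin joint whose intensity is raised
# by partial-volume averaging (all values are illustrative)
profile = np.zeros(100)
profile[10:41] = 1.0
profile[43:81] = 1.0
profile[41:43] = 0.92

threshold = 0.9
merged = count_components(profile > threshold)          # joint survives: one blob
enhanced = profile - 4.0 * second_derivative(profile)   # curvature-weighted split filter
split = count_components(enhanced > threshold)          # joint suppressed: two blobs
```

    The joint has positive curvature (a local dip), so subtracting the weighted second derivative pushes it below the threshold while leaving the bone interiors intact; the split components could then seed a marker-controlled watershed as the abstract describes.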

  7. Automated jitter correction for IR image processing to assess the quality of W7-X high heat flux components

    International Nuclear Information System (INIS)

    Greuner, H; De Marne, P; Herrmann, A; Boeswirth, B; Schindler, T; Smirnow, M

    2009-01-01

    An automated IR image processing method was developed to evaluate the surface temperature distribution of cyclically loaded high heat flux (HHF) plasma facing components. IPP Garching will perform the HHF testing of a high percentage of the series production of the WENDELSTEIN 7-X (W7-X) divertor targets to minimize the number of undiscovered uncertainties in the finally installed components. The HHF tests will be performed as quality assurance (QA) complementary to the non-destructive examination (NDE) methods used during the manufacturing. The IR analysis of an HHF-loaded component detects growing debonding of the plasma facing material, made of carbon fibre composite (CFC), after a few thermal cycles. In the case of the prototype testing, the IR data was processed manually. However, a QA method requires a reliable, reproducible and efficient automated procedure. Using the example of the HHF testing of W7-X pre-series target elements, the paper describes the developed automated IR image processing method. The algorithm is based on an iterative two-step correlation analysis with an individually defined reference pattern for the determination of the jitter.
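
    The abstract does not reproduce the correlation analysis itself; as a rough stand-in for one correlation step, integer-pixel jitter between an IR frame and a reference pattern can be estimated with FFT-based phase correlation (numpy sketch; the frame size and shift are arbitrary test values):

```python
import numpy as np

def estimate_jitter(ref, frame):
    # normalized cross-power spectrum -> phase-correlation peak = integer shift
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of the correlation surface to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
reference = rng.random((64, 64))
jittered = np.roll(reference, (3, -2), axis=(0, 1))   # simulated camera jitter
dy, dx = estimate_jitter(reference, jittered)
corrected = np.roll(jittered, (dy, dx), axis=(0, 1))  # undo the jitter
```

    An iterative two-step scheme like the one in the paper would repeat such an estimate against an individually defined reference pattern and refine it on a cropped region of interest.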

  8. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    van Netten, Jaap J.; van Baal, Jeff G.; Liu, Chanjuan; van der Heijden, Ferdi; Bus, Sicco A.

    2013-01-01

    Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the applicability

  9. Infrared thermal imaging for automated detection of diabetic foot complications

    NARCIS (Netherlands)

    van Netten, Jaap J.; van Baal, Jeff G.; Liu, C.; van der Heijden, Ferdinand; Bus, Sicco A.

    Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the

  10. CT angiography for planning transcatheter aortic valve replacement using automated tube voltage selection: Image quality and radiation exposure

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Stefanie [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Department of Diagnostic and Interventional Radiology, Eberhard-Karls University Tuebingen, Tuebingen (Germany); De Cecco, Carlo N. [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Division of Cardiology, Department of Medicine, Medical University of South Carolina, Charleston, SC (United States); Kuhlman, Taylor S.; Varga-Szemes, Akos [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Caruso, Damiano [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Department of Radiological Sciences, Oncology and Pathology, University of Rome “Sapienza”, Rome (Italy); Duguay, Taylor M. [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Tesche, Christian [Department of Cardiology, Heart Centre Munich-Bogenhausen, Munich (Germany); Vogl, Thomas J. [Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt (Germany); Nikolaou, Konstantin [Department of Diagnostic and Interventional Radiology, Eberhard-Karls University Tuebingen, Tuebingen (Germany); and others

    2017-01-15

    Highlights: • TAVR-planning CT was performed with automated tube voltage selection. • Automated tube voltage selection enables individual tube voltage adaptation. • Image quality was diagnostic while radiation exposure was significantly decreased. - Abstract: Purpose: To assess image quality and accuracy of CT angiography (CTA) for transcatheter aortic valve replacement (TAVR) planning performed with 3rd generation dual-source CT (DSCT). Material and methods: We evaluated 125 patients who underwent TAVR-planning CTA on 3rd generation DSCT. A two-part protocol was performed including retrospectively ECG-gated coronary CTA (CCTA) and prospectively ECG-triggered aortoiliac CTA using 60 mL of contrast medium. Automated tube voltage selection and advanced iterative reconstruction were applied. Effective dose (ED), signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were calculated. Five-point scales were used for subjective image quality analysis. In patients who underwent TAVR, sizing parameters were obtained. Results: Image quality was rated good to excellent in 97.6% of CCTA and 100% of aortoiliac CTAs. CTA studies at >100 kV showed decreased objective image quality compared to 70–100 kV (SNR, all p ≤ 0.0459; CNR, all p ≤ 0.0462). Mean ED increased continuously from 70 to >100 kV (CCTA: 4.5 ± 1.7 mSv–13.6 ± 2.9 mSv, all p ≤ 0.0233; aortoiliac CTA: 2.4 ± 0.9 mSv–6.8 ± 2.7 mSv, all p ≤ 0.0414). In 39 patients TAVR was performed and annulus diameter was within the recommended range in all patients. No severe cardiac or vascular complications were noted. Conclusion: 3rd generation DSCT provides diagnostic image quality in TAVR-planning CTA and facilitates reliable assessment of TAVR device and delivery option while reducing radiation dose.

  11. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Full Text Available Introduction: The most common cause of diagnostic error is related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, laboratories currently have modern equipment which provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 hematology slides were analyzed. Automated analysis was performed on latest-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood across 22 parameters. Microscopy was performed simultaneously by two experts. Results: The data showed that only 42.70% of the slides were concordant, compared with 57.30% discordant. The main findings among the discordant slides were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of the individual and cannot be explained because they were not investigated, which may compromise the final diagnosis. Conclusion: It was observed that qualitative microscopic analysis is of fundamental importance and must be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  12. Automated processing of thermal infrared images of Osservatorio Vesuviano permanent surveillance network by using Matlab code

    Science.gov (United States)

    Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa

    2017-04-01

    The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations which acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano in order to process the infrared data and to extract all the information they contain. In a first phase the infrared data are processed by an automated system (A.S.I.R.A. Acq - Automated System of IR Analysis and Acquisition) developed in the Matlab environment and with a user-friendly graphical user interface (GUI). ASIRA daily generates time-series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time-series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes, b) IR image co-registration with respect to a reference frame, c) seasonal correction using a background-removal methodology, d) filing of the IR matrices and of the processed data in shared archives accessible to interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with GUI) to visualize IR data time-series and to help in evaluating input parameters for further data processing and analysis. Additional processing features are accomplished in a second phase by ASIRA Tools, Matlab code with a GUI developed to extract further information from the dataset in an automated way. The main functions of ASIRA Tools are: a) the analysis of temperature variations of each pixel of the IR frame in a given time interval, b) the removal of seasonal effects from the temperature of every pixel in the IR frames by using an analytic approach (removal of the sinusoidal long-term seasonal component by using a
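
    The per-pixel removal of a sinusoidal long-term seasonal component can be approximated, under the assumption of a single annual harmonic, by an ordinary least-squares fit; the following numpy sketch does this for one synthetic temperature series (the constants, anomaly window, and noise level are made-up illustration values, not ASIRA's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(365.0)                                    # one year of daily samples
seasonal = 8.0 * np.sin(2.0 * np.pi * t / 365.0 + 0.7)  # annual temperature cycle
anomaly = np.where((t > 200) & (t < 220), 3.0, 0.0)     # transient fumarole warming
temps = 25.0 + seasonal + anomaly + 0.1 * rng.standard_normal(t.size)

# least-squares fit of mean + annual sinusoid (a + b*sin + c*cos), then subtract
A = np.column_stack([np.ones_like(t),
                     np.sin(2.0 * np.pi * t / 365.0),
                     np.cos(2.0 * np.pi * t / 365.0)])
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
residual = temps - A @ coef                             # seasonally corrected series
```

    The residual series keeps the transient warming episode while the smooth seasonal component is removed, which is the behavior the residual-temperature time-series described above rely on.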

  13. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  14. Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images

    NARCIS (Netherlands)

    Lee, K.; Buitendijk, G.H.; Bogunovic, H.; Springelkamp, H.; Hofman, A.; Wahle, A.; Sonka, M.; Vingerling, J.R.; Klaver, C.C.W.; Abramoff, M.D.

    2016-01-01

    PURPOSE: To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. METHODS: Six hundred ninety macular SD-OCT image volumes (6.0 x 6.0 x 2.3 mm3)

  15. Automated image mosaics by non-automated light microscopes: the MicroMos software tool.

    Science.gov (United States)

    Piccinini, F; Bevilacqua, A; Lucarelli, E

    2013-12-01

    Light widefield microscopes and digital imaging are the basis for most of the analyses performed in every biological laboratory. In particular, the microscope user is typically interested in acquiring highly detailed images for analysing observed cells and tissues that are, at the same time, representative of a wide area, to obtain reliable statistics. Because of the finite size of the camera's field of view, the microscopist has to choose between a higher magnification factor and the extension of the observed area. To overcome this trade-off, mosaicing techniques have been developed in the past decades for increasing the camera's field of view by stitching together multiple images. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes, or alternatively they are conceived just to provide visually pleasant mosaics, not suitable for quantitative analyses. This work presents a tool for building mosaics of images acquired with non-automated light microscopes. The proposed method is based on visual information only, and the mosaics are built by incrementally stitching couples of images, making the approach suitable also for online applications. Seams in the stitching regions as well as tonal inhomogeneities are corrected by compensating the vignetting effect. In the experiments performed, we tested different registration approaches, confirming that the translation model is not always the best, despite the fact that the motion of the sample holder of the microscope is apparently translational and is typically considered as such. The method's implementation is freely distributed as an open source tool called MicroMos. Its usability makes building mosaics of microscope images at subpixel accuracy easier. Furthermore, optional parameters for building mosaics according to different strategies make MicroMos an easy and reliable tool to compare different registration approaches, warping models and tonal corrections. © 2013 The Authors.
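
    One simple way to realize the vignetting compensation mentioned above is classical flat-field correction against a blank calibration shot (numpy sketch; the radial gain model and all constants are assumptions for illustration, not MicroMos's actual model):

```python
import numpy as np

H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
r2 = ((yy - H / 2) ** 2 + (xx - W / 2) ** 2) / (H / 2) ** 2
gain = 1.0 - 0.4 * r2                  # assumed radial vignetting falloff
scene = np.full((H, W), 100.0)         # a uniformly lit blank slide
observed = scene * gain                # what the camera records: darkened corners

# flat-field compensation: a blank-slide shot serves as the gain estimate
flat = observed / observed.max()
corrected = observed / flat            # tonal inhomogeneity removed
```

    With per-tile gain equalized this way, adjacent tiles match in brightness at the seams, which is what makes the stitched mosaic usable for quantitative analysis rather than just visually pleasant.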

  16. Hyper-Cam automated calibration method for continuous hyperspectral imaging measurements

    Science.gov (United States)

    Gagnon, Jean-Philippe; Habte, Zewdu; George, Jacks; Farley, Vincent; Tremblay, Pierre; Chamberland, Martin; Romano, Joao; Rosario, Dalton

    2010-04-01

    The midwave and longwave infrared regions of the electromagnetic spectrum contain rich information which can be captured by hyperspectral sensors, thus enabling enhanced detection of targets of interest. A continuous hyperspectral imaging measurement capability operated 24/7 over varying seasons and weather conditions permits the evaluation of hyperspectral imaging for detection of different types of targets in real-world environments. Such a measurement site was built at Picatinny Arsenal under the Spectral and Polarimetric Imagery Collection Experiment (SPICE), where two Hyper-Cam hyperspectral imagers are installed at the Precision Armament Laboratory (PAL) and have been operated autonomously since Fall 2009. The Hyper-Cams are currently collecting a complete hyperspectral database that contains the MWIR and LWIR hyperspectral measurements of several targets under day, night, sunny, cloudy, foggy, rainy and snowy conditions. The Telops Hyper-Cam sensor is an imaging spectrometer that combines spatial and spectral analysis capabilities in a single sensor. It is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides datacubes of up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The MWIR version covers the 3 to 5 μm spectral range and the LWIR version covers the 8 to 12 μm spectral range. This paper describes the automated operation of the two Hyper-Cam sensors being used in the SPICE data collection. The Reveal Automation Control Software (RACS), developed collaboratively between Telops, ARDEC, and ARL, enables flexible operating parameters and autonomous calibration. Under the RACS software, the Hyper-Cam sensors can autonomously calibrate themselves using their internal blackbody targets, and the calibration events are initiated at user-defined time intervals and by internal beamsplitter temperature monitoring. The RACS software is the first software developed for
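
    Calibration against internal blackbody targets is conventionally a two-point radiometric model solved per pixel; the RACS internals are not given in this abstract, so the following numpy sketch uses made-up radiances, gains, and offsets purely to show the standard scheme:

```python
import numpy as np

rng = np.random.default_rng(5)
L_cold, L_hot = 2.0, 10.0                  # known blackbody radiances (made up)
gain = rng.uniform(0.9, 1.1, (8, 8))       # per-pixel gain nonuniformity
offset = rng.uniform(-5.0, 5.0, (8, 8))    # per-pixel offset
counts_cold = gain * L_cold + offset       # sensor view of the cold blackbody
counts_hot = gain * L_hot + offset         # sensor view of the hot blackbody

# solve counts = g*L + o per pixel from the two reference views
g = (counts_hot - counts_cold) / (L_hot - L_cold)
o = counts_cold - g * L_cold

scene_counts = gain * 6.0 + offset         # unknown scene at radiance 6.0
radiance = (scene_counts - o) / g          # calibrated radiance estimate
```

    Because gain and offset are solved independently for every pixel, this also removes fixed-pattern nonuniformity, which is why periodic autonomous recalibration (triggered by timers or beamsplitter temperature drift, as above) matters for long-running collections.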

  17. Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices

    Science.gov (United States)

    Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula

    2014-03-01

    Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high resolution, three-dimensional (3D) cross sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight degrading diseases such as age-related macular degeneration (AMD) and glaucoma [1]. Disease diagnosis, assessment, and treatment requires a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time-points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retina vessel locations for OCT registration from fovea centred 3D SD-OCT scans based on vessel shadows. Noise filtered OCT scans are flattened based on vendor retinal layer segmentation, to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel based layer profile analysis and k-means clustering is used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT constructed projection image are then applied to optimize the accuracy of OCT vessel shadow segmentation through the removal of false positive shadow regions such as those caused by exudates and cysts. 
Validation of segmented vessel shadows uses
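
    The k-means step for picking shadow columns out of an RPE brightness profile can be sketched with a tiny 1-D Lloyd iteration (numpy; the profile values and the deterministic min/max initialization are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def kmeans_1d(x, iters=20):
    # two-cluster Lloyd iteration, deterministically seeded at the extremes
    centers = np.array([x.min(), x.max()])
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(2)])
    return labels, centers

rng = np.random.default_rng(7)
rpe_brightness = 0.8 + 0.02 * rng.standard_normal(100)  # mean brightness per A-scan column
shadow_cols = np.array([20, 21, 22, 55, 56])            # vessels attenuate the RPE signal
rpe_brightness[shadow_cols] = 0.2 + 0.02 * rng.standard_normal(5)

labels, centers = kmeans_1d(rpe_brightness)
detected = np.where(labels == np.argmin(centers))[0]    # darker cluster = shadow candidates
```

    The darker cluster marks candidate vessel shadows; in the full method these candidates are projected and then cleaned of false positives (exudates, cysts) before use as registration landmarks.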

  18. Automated Segmentation of Light-Sheet Fluorescent Imaging to Characterize Experimental Doxorubicin-Induced Cardiac Injury and Repair.

    Science.gov (United States)

    Packard, René R Sevag; Baek, Kyung In; Beebe, Tyler; Jen, Nelson; Ding, Yichen; Shi, Feng; Fei, Peng; Kang, Bong Jin; Chen, Po-Heng; Gau, Jonathan; Chen, Michael; Tang, Jonathan Y; Shih, Yu-Huan; Ding, Yonghe; Li, Debiao; Xu, Xiaolei; Hsiai, Tzung K

    2017-08-17

    This study sought to develop an automated segmentation approach based on histogram analysis of raw axial images acquired by light-sheet fluorescent imaging (LSFI) to establish rapid reconstruction of the 3-D zebrafish cardiac architecture in response to doxorubicin-induced injury and repair. Input images underwent a 4-step automated image segmentation process consisting of stationary noise removal, histogram equalization, adaptive thresholding, and image fusion followed by 3-D reconstruction. We applied this method to 3-month old zebrafish injected intraperitoneally with doxorubicin followed by LSFI at 3, 30, and 60 days post-injection. We observed an initial decrease in myocardial and endocardial cavity volumes at day 3, followed by ventricular remodeling at day 30, and recovery at day 60 (P < 0.05, n = 7-19). Doxorubicin-injected fish developed ventricular diastolic dysfunction and worsening global cardiac function evidenced by elevated E/A ratios and myocardial performance indexes quantified by pulsed-wave Doppler ultrasound at day 30, followed by normalization at day 60 (P < 0.05, n = 9-20). Treatment with the γ-secretase inhibitor, DAPT, to inhibit cleavage and release of Notch Intracellular Domain (NICD) blocked cardiac architectural regeneration and restoration of ventricular function at day 60 (P < 0.05, n = 6-14). Our approach provides a high-throughput model with translational implications for drug discovery and genetic modifiers of chemotherapy-induced cardiomyopathy.
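
    A hedged miniature of the first three steps of such a pipeline (noise smoothing, histogram equalization, adaptive thresholding) on a synthetic frame, using numpy only; the window sizes, offset, and phantom geometry are assumptions, and the paper's fusion and 3-D reconstruction steps are omitted:

```python
import numpy as np

def box_mean(img, r):
    # local mean over a (2r+1)x(2r+1) window via an integral image
    n = 2 * r + 1
    pad = np.pad(img.astype(float), r, mode='edge')
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    H, W = img.shape
    return (ii[n:n + H, n:n + W] - ii[:H, n:n + W]
            - ii[n:n + H, :W] + ii[:H, :W]) / (n * n)

def equalize(img_u8):
    # global histogram equalization via the cumulative distribution
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img_u8]

rng = np.random.default_rng(2)
frame = np.full((64, 64), 50.0) + 5.0 * rng.standard_normal((64, 64))
frame[22:42, 22:42] += 70.0                         # bright "myocardium" block
frame = np.clip(frame, 0, 255).astype(np.uint8)

denoised = box_mean(frame, 1)                       # step 1: noise removal
contrast = equalize(denoised.astype(np.uint8))      # step 2: contrast enhancement
mask = denoised > box_mean(denoised, 15) + 20.0     # step 3: adaptive (local-mean) threshold

object_frac = mask[22:42, 22:42].mean()             # detected fraction inside the block
background_fp = mask[:15, :15].mean()               # false positives in a far corner
```

    Stacking such masks slice by slice and fusing them is what yields the 3-D reconstruction the study quantifies; the local-mean threshold is a simple stand-in for whatever adaptive rule the authors used.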

  19. An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    Directory of Open Access Journals (Sweden)

    Demir Sumeyra U

    2012-12-01

    Full Text Available Abstract Background: Imaging of the human microcirculation in real time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated system tool that can extract microvasculature information and monitor changes in tissue perfusion quantitatively might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods: The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple-level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results: Sublingual microcirculatory videos were recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings were analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm were compared for both healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease of FCD values. 
Similar, but more variable FCD
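
    Functional capillary density, the endpoint compared above, is conventionally reported as total perfused vessel length per observed area; a toy numpy computation with an assumed 5 µm pixel pitch and hand-drawn one-pixel-wide "capillaries" (not the paper's data):

```python
import numpy as np

px_mm = 0.005                              # assumed pixel size: 5 micrometers
mask = np.zeros((200, 200), dtype=bool)    # segmented perfused-vessel map (1 mm^2 field)
mask[50, 20:180] = True                    # three synthetic capillaries
mask[120, 10:150] = True
mask[30:190, 100] = True

centerline_px = mask.sum()                 # 1-px-wide vessels: mask is its own centerline
area_mm2 = (200 * px_mm) * (200 * px_mm)
fcd = centerline_px * px_mm / area_mm2     # perfused length per area (mm/mm^2)
```

    A drop in this number during hemorrhage, as the study reports, reflects capillaries losing flow and disappearing from the perfused-vessel mask; real pipelines would first skeletonize wider vessel segmentations before measuring length.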

  20. Automated registration of multispectral MR vessel wall images of the carotid artery

    Energy Technology Data Exchange (ETDEWEB)

    Klooster, R. van ' t; Staring, M.; Reiber, J. H. C.; Lelieveldt, B. P. F.; Geest, R. J. van der, E-mail: rvdgeest@lumc.nl [Department of Radiology, Division of Image Processing, Leiden University Medical Center, 2300 RC Leiden (Netherlands); Klein, S. [Department of Radiology and Department of Medical Informatics, Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam 3015 GE (Netherlands); Kwee, R. M.; Kooi, M. E. [Department of Radiology, Cardiovascular Research Institute Maastricht, Maastricht University Medical Center, Maastricht 6202 AZ (Netherlands)

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and
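
    The mutual-information similarity metric at the heart of such multi-contrast registration can be written down compactly from a joint intensity histogram; a numpy sketch on illustrative images (not carotid MR data):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI from the joint intensity histogram of two images
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(4)
img = rng.random((64, 64))
moved = np.roll(img, 5, axis=0)           # simulated through-plane patient motion
realigned = np.roll(moved, -5, axis=0)    # motion corrected

mi_aligned = mutual_information(img, realigned)
mi_moved = mutual_information(img, moved)
```

    Because MI only assumes a statistical dependence between intensities, not equal intensities, it remains a valid alignment score across sequences with different contrast weightings, which is why variants of it were the metrics evaluated in this study.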

  1. An image-processing program for automated counting

    Science.gov (United States)

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

    1996-01-01

    An image-processing program developed by the National Institute of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.

  2. Identification and red blood cell automated counting from blood smear images using computer-aided system.

    Science.gov (United States)

    Acharya, Vasundhara; Kumar, Preetham

    2018-03-01

    Red blood cell count plays a vital role in identifying the overall health of the patient. Hospitals use the hemocytometer to count the blood cells. The conventional method of placing the smear under a microscope and counting the cells manually leads to erroneous results and puts medical laboratory technicians under stress. A computer-aided system will help to attain precise results in less time. This research work proposes an image-processing technique for counting the number of red blood cells. It aims to examine and process the blood smear image in order to support the counting of red blood cells and to identify the number of normal and abnormal cells in the image automatically. The K-medoids algorithm, which is robust to external noise, is used to extract the WBCs from the image. Granulometric analysis is used to separate the red blood cells from the white blood cells. The red blood cells obtained are counted using the labeling algorithm and the circular Hough transform. The radius range for the circle-drawing algorithm is estimated by computing the distance of the pixels from the boundary, which automates the entire algorithm. A comparison is done between the counts obtained using the labeling algorithm and the circular Hough transform. Results of the work showed that the circular Hough transform was more accurate in counting the red blood cells than the labeling algorithm, as it was successful in identifying even the overlapping cells. The work also compares the results of cell counts done using the proposed methodology and the manual approach. The work is designed to address all the drawbacks of the previous research work. The research work can be extended to extract various texture and shape features of the abnormal cells identified, so that diseases like anemia of inflammation and chronic disease can be detected at the earliest.
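
    The labeling-based count and its known failure mode on touching cells (the reason the Hough transform wins in this study) can be reproduced in a few lines; numpy flood-fill labeling on synthetic disks that stand in for segmented red cells:

```python
import numpy as np

def label_count(mask):
    # count 4-connected components via iterative flood fill
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    count = 0
    for y in range(H):
        for x in range(W):
            if mask[y, x] and not seen[y, x]:
                count += 1
                stack = [(y, x)]
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

yy, xx = np.mgrid[0:100, 0:100]

def disk(cy, cx, r=8):
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r

separate = disk(20, 20) | disk(20, 70) | disk(70, 45)  # three isolated "cells"
touching = disk(50, 45) | disk(50, 57)                 # two overlapping "cells"
```

    Labeling counts the two overlapping disks as a single component, which is exactly the undercount that a circular Hough transform avoids by voting for each circle center independently.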

  3. Automated grading of left ventricular segmental wall motion by an artificial neural network using color kinesis images

    Directory of Open Access Journals (Sweden)

    L.O. Murta Jr.

    2006-01-01

    Full Text Available The present study describes an auxiliary tool for the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LV WM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LV WM from two-dimensional images and scored them as: (1) normal, (2) mild hypokinesia, (3) moderate hypokinesia, (4) severe hypokinesia, (5) akinesia, and (6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LV WM, and this ANN was then tested on the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with a measured area under the curve of 0.975. An excellent correlation was also obtained for the global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, the ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LV WM.

  4. Image Features Based on Characteristic Curves and Local Binary Patterns for Automated HER2 Scoring

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Mukundan

    2018-02-01

    This paper presents novel feature descriptors and classification algorithms for the automated scoring of HER2 in Whole Slide Images (WSI) of breast cancer histology slides. Since a large amount of processing is involved in analyzing WSIs, the primary design goal has been to keep the computational complexity to the minimum possible level and to use simple, yet robust feature descriptors that can provide accurate classification of the slides. We propose two types of feature descriptors that encode important information about staining patterns and the percentage of staining present in ImmunoHistoChemistry (IHC)-stained slides. The first descriptor is called a characteristic curve, which is a smooth non-increasing curve that represents the variation of the percentage of staining with saturation levels. The second new descriptor introduced in this paper is a local binary pattern (LBP) feature curve, which is also a non-increasing smooth curve that represents the local texture of the staining patterns. Both descriptors show excellent interclass variance and intraclass correlation and are suitable for the design of automatic HER2 classification algorithms. This paper gives the detailed theoretical aspects of the feature descriptors and also provides experimental results and a comparative analysis.
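    The first descriptor can be illustrated with a small sketch: for a saturation channel scaled to [0, 1], compute the percentage of pixels at or above each saturation level. The resulting curve is non-increasing by construction, as the paper describes. The binning and the use of all pixels (rather than a stained-pixel mask) are simplifying assumptions here, not the authors' exact construction.

```python
import numpy as np

def characteristic_curve(saturation, levels=8):
    """Percentage of pixels whose saturation is >= t, evaluated at
    `levels` evenly spaced thresholds t in [0, 1]. The curve is
    non-increasing by construction."""
    s = np.asarray(saturation, dtype=float).ravel()
    thresholds = np.linspace(0.0, 1.0, levels)
    return np.array([(s >= t).mean() * 100.0 for t in thresholds])
```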

  5. Automated Frequency Domain Decomposition for Operational Modal Analysis

    DEFF Research Database (Denmark)

    Brincker, Rune; Andersen, Palle; Jacobsen, Niels-Jørgen

    2007-01-01

    The Frequency Domain Decomposition (FDD) technique is known as one of the most user-friendly and powerful techniques for operational modal analysis of structures. However, the classical implementation of the technique requires some user interaction. The present paper describes an algorithm...... for automated FDD, i.e., a version of FDD in which no user interaction is required. Such an algorithm can be used for obtaining a default estimate of modal parameters in commercial software for operational modal analysis - or even more importantly - it can be used as the modal information engine in a system......

  6. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

    Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody stains and 5 histochemical stains, plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) was obtained for de novo feature prediction. These results suggest that high-dimensional profiling may advance the

  7. Automated tissue segmentation of MR brain images in the presence of white matter lesions.

    Science.gov (United States)

    Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier

    2017-01-01

    Over the last few years, increasing interest in brain tissue volume measurements in clinical settings has led to the development of a wide range of automated tissue segmentation methods. However, white matter (WM) lesions are known to reduce the performance of these methods, requiring the lesions to be manually annotated and refilled before segmentation, which is tedious and time-consuming. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity with probabilistic and morphological prior maps. We evaluated the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of multiple sclerosis (MS) patient images. On both databases, we compared the performance of our method with other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was, at the time of submission, the best ranked unsupervised intensity model method of the challenge (7th position) and clearly outperformed other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between the images segmented with our method and the same images in which manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieves very competitive results on both vascular and MS lesions. A public version of this approach is available for download by the neuroimaging community.
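    For context, the classic "annotate and refill" workflow that the proposed pipeline replaces can be sketched as follows: lesion voxels are overwritten with intensities drawn from the normal-appearing white-matter distribution before running a standard segmentation. This is a naive illustration of that alternative approach, not the paper's method (which integrates outlier rejection and filling into the segmentation itself); all names and the Gaussian sampling are assumptions.

```python
import numpy as np

def fill_lesions(t1, lesion_mask, wm_mask, rng=None):
    """Replace lesion voxels with samples from the normal-appearing
    white-matter intensity distribution, so a standard tissue
    segmentation is not biased by lesion intensities."""
    rng = np.random.default_rng(rng)
    filled = np.array(t1, dtype=float, copy=True)
    # normal-appearing WM: white matter voxels outside the lesion mask
    wm_vals = filled[wm_mask & ~lesion_mask]
    n = int(lesion_mask.sum())
    filled[lesion_mask] = rng.normal(wm_vals.mean(), wm_vals.std(), size=n)
    return filled
```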

  8. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    This research was conducted to develop components for an automated malware analysis framework that analyzes malware binary programs and, by monitoring their behavior, generates data for malware detection signatures and for developing countermeasures. (Grant FA2386-15-1-4068; Keiji Takeda, Keio University, keiji@sfc.keio.ac.jp.)

  9. An automated four-point scale scoring of segmental wall motion in echocardiography using quantified parametric images

    International Nuclear Information System (INIS)

    Kachenoura, N; Delouche, A; Ruiz Dominguez, C; Frouin, F; Diebold, B; Nardi, O

    2010-01-01

    The aim of this paper is to develop an automated method that operates on echocardiographic dynamic loops to classify left ventricular regional wall motion (RWM) on a four-point scale. A non-selected group of 37 patients (2- and 4-chamber views) was studied. Each view was segmented according to the standardized segmentation using three manually positioned anatomical landmarks (the apex and the angles of the mitral annulus). The segmented data were analyzed by two independent experienced echocardiographers, and the consensual RWM scores were used as a reference for comparisons. A fast and automatic parametric imaging method was used to compute and display, as static color-coded parametric images, both the temporal and the motion information contained in left ventricular dynamic echocardiograms. The amplitude and time parametric images were provided to a cardiologist for visual analysis of RWM and used for RWM quantification. A cross-validation method was applied to the segmental quantitative indices to classify RWM on a four-point scale. A total of 518 segments were analyzed. Comparison between the visual interpretation of parametric images and the reference reading resulted in an absolute agreement (Aa) of 66%, a relative agreement (Ra) of 96%, and a kappa (κ) coefficient of 0.61. Comparison of the automated RWM scoring against the same reference provided Aa = 64%, Ra = 96% and κ = 0.64 on the validation subset. Finally, linear regression analysis between the global quantitative index and both the global reference scores and the ejection fraction resulted in correlations of 0.85 and 0.79, respectively. A new automated four-point scoring of RWM was thus developed and tested on a non-selected database. Its comparison against a consensual visual reading of dynamic echocardiograms showed its ability to classify RWM abnormalities.

  10. Evaluating wood failure in plywood shear by optical image analysis

    Science.gov (United States)

    Charles W. McMillin

    1984-01-01

    This exploratory study evaluates the potential of using an automatic image analysis method to measure percent wood failure in plywood shear specimens. The results suggest that this method may be as accurate as the visual method in tracking long-term gluebond quality. With further refinement, the method could lead to automated equipment replacing the subjective visual...

  11. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.

    Science.gov (United States)

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-02-15

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of a light-emitting diode (LED) light target and five automated image-processing bore-path deviation algorithms. The LED light target was optimized for several qualities, including light color, filter-plate color, luminous intensity, and LED layout. The image preprocessing, direction location, angle measurement, deflection detection, and auto-focus algorithms, implemented in MATLAB, automate the image processing for computing and judging deflection. After multiple indoor experiments, the guidance system was applied in a hot-water pipeline installation project, with accuracy controlled within 2 mm over a 48-m distance, providing accurate line and grade control and verifying the feasibility and reliability of the system.

  12. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  13. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    Science.gov (United States)

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  14. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source, free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the midpoint of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  15. Automated Structure Detection in HRTEM Images: An Example with Graphene

    DEFF Research Database (Denmark)

    Kling, Jens; Vestergaard, Jacob Schack; Dahl, Anders Bjorholm

    Graphene, as the forefather of 2D materials, attracts much attention due to its extraordinary properties, such as transparency, flexibility and outstandingly high conductivity, together with a thickness of only one atom. The properties seem to depend on the atomic structure of graphene...... of time, making it difficult to resolve dynamic processes or unstable structures. Tools that help extract the maximum information from recorded images are therefore greatly appreciated. In order to get the most accurate results out of the structure detection, we have optimized the imaging conditions......

  16. System and method for automated object detection in an image

    Energy Technology Data Exchange (ETDEWEB)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  17. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source: NDRI). We first segmented B-scans using a graph-searching method, estimating the boundary of each region by minimizing a cost function that combined intensity, gradient, and contour smoothness. Features were then extracted, including texture measures, optical properties, and statistics of higher moments. We trained a statistical model, a relevance vector machine, with these features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The validation datasets were manually segmented and classified by two investigators who were blind to our algorithm's results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from those of other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.
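    Two of the discriminative features reported above (entropy and kurtosis) can be computed directly from patch intensities. The histogram binning and the use of excess kurtosis are illustrative choices here, not the study's exact pipeline.

```python
import numpy as np

def texture_features(patch, bins=32):
    """Shannon entropy (bits) of the intensity histogram and excess
    kurtosis of the intensity distribution for an image patch."""
    x = np.asarray(patch, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins
    entropy = -np.sum(p * np.log2(p))
    mu, sigma = x.mean(), x.std()
    kurtosis = np.mean((x - mu) ** 4) / sigma ** 4 - 3.0
    return entropy, kurtosis
```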

  18. Statistical Image Analysis of Longitudinal RAVENS Images

    Directory of Open Access Journals (Sweden)

    Seonjoo Lee

    2015-10-01

    Regional analysis of volumes examined in normalized space (RAVENS) images are transformation images used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression.
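    The three-component decomposition described above can be written schematically as follows (the notation is illustrative, not necessarily the authors' exact model):

```latex
% Y_{ij}(v): RAVENS value for subject i at visit time t_{ij}, voxel v
Y_{ij}(v) = \eta(v, t_{ij}) + X_i(v) + Z_i(v)\, t_{ij} + W_{ij}(v)
```

Here \eta is a fixed mean surface, X_i(v) is the subject-specific imaging random intercept (cross-sectional variability), Z_i(v) t_{ij} is the subject-specific imaging slope (irreversible change over visits), and W_{ij}(v) is the subject-visit-specific deviation that absorbs registration error.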

  19. Automated interpretation of PET/CT images in patients with lung cancer

    DEFF Research Database (Denmark)

    Gutte, Henrik; Jakobsson, David; Olofsson, Fredrik

    2007-01-01

    PURPOSE: To develop a completely automated method based on image processing techniques and artificial neural networks for the interpretation of combined [(18)F]fluorodeoxyglucose (FDG) positron emission tomography (PET) and computed tomography (CT) images for the diagnosis and staging of lung cancer. METHODS: A total of 87 patients who underwent PET/CT examinations due to suspected lung cancer comprised the training group. The test group consisted of PET/CT images from 49 patients suspected with lung cancer. The consensus interpretations by two experienced physicians were used as the 'gold standard'...... The performance of the method, measured as the area under the receiver operating characteristic curve, was 0.97 in the test group, with an accuracy of 92%. The sensitivity was 86% at a specificity of 100%. CONCLUSIONS: A completely automated method using artificial neural networks can be used to detect lung cancer......

  20. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    Science.gov (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, three-dimensional solar coronal loop centers from extreme ultraviolet and X-ray two-dimensional images will be presented. These center splines are proxies of the associated magnetic field lines. The project addresses a pattern recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parametric space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
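    Technique (2), parametric space via the Hough transform, can be illustrated with a minimal straight-line accumulator. Coronal loops are curved, so in practice this would only be a building block; the discretization choices below are assumptions.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, n_rho=100):
    """Hough accumulator for straight lines: each (y, x) point votes for
    all (rho, theta) parameter pairs of lines through it, where
    rho = x*cos(theta) + y*sin(theta). Peaks mark well-supported lines."""
    h, w = shape
    diag = np.hypot(h, w)                     # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for y, x in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1     # one vote per theta column
    return acc, thetas
```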

  1. Transportation informatics : advanced image processing techniques automated pavement distress evaluation.

    Science.gov (United States)

    2010-01-01

    The current project, funded by MIOH-UTC for the period 1/1/2009-4/30/2010, is concerned with the development of a framework for a transportation facility inspection system using advanced image processing techniques. The focus of this study is ...

  2. Curvelet based offline analysis of SEM images.

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process that requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm.
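    The box-counting step for fractal dimension (FD) can be sketched as follows; the dyadic box sizes and least-squares log-log fit are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def box_count_dimension(mask):
    """Estimate the fractal dimension of a binary shape by box counting:
    count occupied boxes N(s) at box sizes s, then fit the slope of
    log N(s) against log(1/s). Assumes a square mask whose side is a
    power of two."""
    mask = np.asarray(mask, dtype=bool)
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        view = mask.reshape(n // s, s, n // s, s)
        occupied = int(view.any(axis=(1, 3)).sum())
        if occupied:
            sizes.append(s)
            counts.append(occupied)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a completely filled square the estimate is 2, the expected dimension of a plane region.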

  3. A robust computational solution for automated quantification of a specific binding ratio based on [123I]FP-CIT SPECT images

    International Nuclear Information System (INIS)

    Oliveira, F. P. M.; Tavares, J. M. R. S.; Borges, Faria D.; Campos, Costa D.

    2014-01-01

    The purpose of the current paper is to present a computational solution to accurately quantify the specific to non-specific uptake ratio in [123I]FP-CIT single photon emission computed tomography (SPECT) images and simultaneously measure the spatial dimensions of the basal ganglia, also known as basal nuclei. A statistical analysis based on a reference dataset selected by the user is also automatically performed. The quantification of the specific to non-specific uptake ratio here is based on regions of interest defined after the registration of the image under study with a template image. The computational solution was tested on a dataset of 38 [123I]FP-CIT SPECT images: 28 images were from patients with Parkinson's disease and the remainder from normal patients, and the results of the automated quantification were compared to those obtained by three well-known semi-automated quantification methods. The results revealed a high correlation coefficient between the developed automated method and the three semi-automated methods used for comparison (r ≥ 0.975). The solution also showed good robustness against different positions of the patient, as an almost perfect agreement between the specific to non-specific uptake ratios was found (ICC = 1.000). The mean processing time was around 6 seconds per study using a common notebook PC. The solution developed can be useful for clinicians to evaluate [123I]FP-CIT SPECT images due to its accuracy, robustness and speed. Moreover, the comparison between case studies and the follow-up of patients can be done more accurately and proficiently, since the intra- and inter-observer variability of semi-automated calculation does not exist in automated solutions. The dimensions of the basal ganglia and their automatic comparison with the values of the population selected as reference are also important for professionals in this area.
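    Once the striatal and background regions of interest are defined, the ratio itself is a one-line computation. The formula below is the commonly used definition for [123I]FP-CIT studies; the paper's substantive contribution, template registration and automatic ROI placement, is omitted here, and the function names are illustrative.

```python
import numpy as np

def specific_binding_ratio(striatal_counts, background_counts):
    """Specific to non-specific uptake ratio:
    (mean striatal uptake - mean background uptake) / mean background."""
    s = np.mean(striatal_counts)
    b = np.mean(background_counts)
    return (s - b) / b
```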

  4. Document image analysis: A primer

    Indian Academy of Sciences (India)

    Abstract. Document image analysis refers to algorithms and techniques that are applied to images of documents to obtain a computer-readable description from pixel data. A well-known document image analysis product is the Optical Character Recognition (OCR) software that recognizes characters in a scanned document ...

  5. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    Science.gov (United States)

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom

  6. Extending and applying active appearance models for automated, high precision segmentation in different image modalities

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Fisker, Rune; Ersbøll, Bjarne Kjær

    2001-01-01

    , an initialization scheme is designed thus making the usage of AAMs fully automated. Using these extensions it is demonstrated that AAMs can segment bone structures in radiographs, pork chops in perspective images and the left ventricle in cardiovascular magnetic resonance images in a robust, fast and accurate...... object class description, which can be employed to rapidly search images for new object instances. The proposed extensions concern enhanced shape representation, handling of homogeneous and heterogeneous textures, refinement optimization using Simulated Annealing and robust statistics. Finally...

  7. Automated detection of new impact sites on Martian surface from HiRISE images

    Science.gov (United States)

    Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu

    2017-10-01

    In this study, an automated method for detecting new impact sites on the Martian surface from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier that combines local binary pattern features and Haar-like features, trained with an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in flat regions are 93.0% and 91.5%, respectively.
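    The local binary pattern features used by such cascade classifiers can be sketched in their basic 8-neighbour form; the study's exact LBP variant and its Haar-like features are not reproduced here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each interior pixel gets an
    8-bit code, with bit i set when the i-th neighbour is >= the centre."""
    g = np.asarray(gray, dtype=float)
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:-1, 1:-1]
    code = np.zeros(centre.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    return code
```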

  8. Toward the virtual cell: Automated approaches to building models of subcellular organization “learned” from microscopy images

    Science.gov (United States)

    Buck, Taráz E.; Li, Jieyue; Rohde, Gustavo K.; Murphy, Robert F.

    2012-01-01

    We review state-of-the-art computational methods for constructing, from image data, generative statistical models of cellular and nuclear shapes and the arrangement of subcellular structures and proteins within them. These automated approaches allow consistent analysis of images of cells for the purposes of learning the range of possible phenotypes, discriminating between them, and informing further investigation. Such models can also provide realistic geometry and initial protein locations to simulations in order to better understand cellular and subcellular processes. To determine the structures of cellular components and how proteins and other molecules are distributed among them, the generative modeling approach described here can be coupled with high throughput imaging technology to infer and represent subcellular organization from data with few a priori assumptions. We also discuss potential improvements to these methods and future directions for research. PMID:22777818

  9. Study of the geologic-structural situation around Semipalatinsk Test Site test-holes using an automated space-image decoding method

    International Nuclear Information System (INIS)

    Gorbunova, Eh.M.; Ivanchenko, G.N.

    2004-01-01

    The performance of underground nuclear explosions (UNE) leads to irreversible changes in the geological environment around the boreholes. Inhomogeneous changes in the condition of the rock massif were detected in the natural environment, depending on the characteristics of the underground nuclear explosion, the anisotropy of the medium, and the presence of faulting. Automated selection and statistical analysis of unstretched lineaments in high-resolution space images, using the special software package LESSA, allows the geologic-structural features of the Semipalatinsk Test Site (STS) to be specified, the selected fracture zones to be ranked, and post-explosion surface deformation zones to be outlined and analyzed. (author)

  10. Automated segmentation of pigmented skin lesions in multispectral imaging

    International Nuclear Information System (INIS)

    Carrara, Mauro; Tomatis, Stefano; Bono, Aldo; Bartoli, Cesare; Moglia, Daniele; Lualdi, Manuela; Colombo, Ambrogio; Santinami, Mario; Marchesini, Renato

    2005-01-01

    The aim of this study was to develop an algorithm for the automatic segmentation of multispectral images of pigmented skin lesions. The study involved 1700 patients with 1856 cutaneous pigmented lesions, which were analysed in vivo by a novel spectrophotometric system, before excision. The system is able to acquire a set of 15 different multispectral images at equally spaced wavelengths between 483 and 951 nm. An original segmentation algorithm was developed and applied to the whole set of lesions and was able to automatically contour them all. The obtained lesion boundaries were shown to two expert clinicians, who, independently, rejected 54 of them. The 97.1% contour accuracy indicates that the developed algorithm could be a helpful and effective instrument for the automatic segmentation of skin pigmented lesions. (note)

  11. Application of an Automated Discharge Imaging System and LSPIV during Typhoon Events in Taiwan

    Directory of Open Access Journals (Sweden)

    Wei-Che Huang

    2018-03-01

    Full Text Available An automated discharge imaging system (ADIS), which is a non-intrusive and safe approach, was developed for measuring river flows during flash flood events. ADIS consists of dual cameras to capture complete surface images in the near and far fields. Surface velocities are accurately measured using the Large Scale Particle Image Velocimetry (LSPIV) technique. The stream discharges are then obtained from the depth-averaged velocity (based upon an empirical velocity-index relationship) and the cross-section area. The ADIS was deployed at the Yu-Feng gauging station in the upper catchment of Shimen Reservoir, northern Taiwan. For a rigorous validation, surface velocity measurements were conducted using ADIS/LSPIV and other instruments. In terms of the averaged surface velocity, all of the measured results were in good agreement, with small differences of 0.004 to 0.39 m/s and 0.023 to 0.345 m/s when compared to those from the acoustic Doppler current profiler (ADCP) and surface velocity radar (SVR), respectively. The ADIS/LSPIV was further applied to measure surface velocities and discharges during typhoon events (Chan-Hom, Soudelor, Goni, and Dujuan) in 2015. The measured water level and surface velocity both showed rapid increases due to flash floods. The estimated discharges from ADIS/LSPIV and ADCP were compared, showing good consistency with a correlation coefficient R = 0.996 and a normalized root mean square error NRMSE = 7.96%. The results of the sensitivity analysis indicate that the camera tilt (τ) and roll (θ) are the most sensitive parameters affecting the surface velocity obtained with ADIS/LSPIV. Overall, the LSPIV-based ADIS effectively measures surface velocities for reliable estimations of river discharges during typhoon events.
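    The discharge computation described above (surface velocity converted to depth-averaged velocity via a velocity index, multiplied by cross-section area) can be sketched as follows. The mid-section geometry and the commonly cited default index k ≈ 0.85 are illustrative assumptions, not the station's calibrated relationship:

```python
def discharge_from_surface_velocity(surface_velocities, depths, widths, k=0.85):
    """Estimate discharge Q as the sum over cross-section subsections of
    (k * surface velocity) * depth * width, where k is the velocity index
    converting surface velocity to depth-averaged velocity."""
    q = 0.0
    for v_s, d, w in zip(surface_velocities, depths, widths):
        q += k * v_s * d * w
    return q

# three subsections: surface velocity (m/s), depth (m), width (m)
Q = discharge_from_surface_velocity([1.2, 1.5, 1.1],
                                    [0.8, 1.2, 0.7],
                                    [2.0, 2.0, 2.0])
```

    In practice the velocity index is calibrated per site, which is why the abstract refers to an empirical velocity-index relationship.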

  12. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

    Full Text Available Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of its functional integrity. pIONM® is currently gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of large amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domain, as well as a quantitative evaluation by means of features calculated from the time and frequency domain. The produced plots are automatically summarized in a PowerPoint presentation. The calculated features are written into a standardized Excel sheet, ready for statistical analysis.
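    A minimal sketch of the kind of time- and frequency-domain feature extraction described above (the feature names and the test signal are illustrative assumptions, not the pIONM® routine):

```python
import numpy as np

def signal_features(signal, fs):
    """Basic time- and frequency-domain features of one recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return {
        "mean": float(np.mean(signal)),
        "rms": float(np.sqrt(np.mean(signal ** 2))),
        "peak_to_peak": float(np.ptp(signal)),
        # skip the DC bin when locating the dominant frequency
        "dominant_freq_hz": float(freqs[np.argmax(spectrum[1:]) + 1]),
    }

fs = 1000.0                                # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
sig = 0.5 * np.sin(2 * np.pi * 15 * t)     # 15 Hz test tone
feats = signal_features(sig, fs)
```

    Such per-recording feature dictionaries are exactly the sort of tabular output that can be written to a standardized spreadsheet for statistics.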

  13. Extended automated separation techniques in destructive neutron activation analysis

    International Nuclear Information System (INIS)

    Tjioe, P.S.; Goeij, J.J.M. de; Houtman, J.P.W.

    1977-01-01

    An automated post-irradiation chemical separation scheme for the analysis of 14 trace elements in biological materials is described. The procedure consists of a destruction with sulfuric acid and hydrogen peroxide, a distillation of the volatile elements with hydrobromic acid and chromatography of both distillate and residue over Dowex 2x8 anion exchanger columns. Accuracy, precision and sensitivity are tested with reference materials (BOWEN's kale, NBS bovine liver, IAEA materials, dried animal whole blood, wheat flour, dried potatoes, powdered milk, oyster homogenate) and on a sample of pooled human blood. Blank values due to trace elements in the quartz irradiation vials are also discussed. (T.G.)

  14. Automated 3-D echocardiography analysis compared with manual delineations and SPECT MUGA.

    Science.gov (United States)

    Sanchez-Ortiz, Gerardo I; Wright, Gabriel J T; Clarke, Nigel; Declerck, Jérôme; Banning, Adrian P; Noble, J Alison

    2002-09-01

    A major barrier for using 3-D echocardiography for quantitative analysis of heart function in routine clinical practice is the absence of accurate and robust segmentation and tracking methods necessary to make the analysis automatic. In this paper, we present an automated three-dimensional (3-D) echocardiographic acquisition and image-processing methodology for assessment of left ventricular (LV) function. We combine global image information provided by a novel multiscale fuzzy-clustering segmentation algorithm, with local boundaries obtained with phase-based acoustic feature detection. We then use the segmentation results to fit and track the LV endocardial surface using a 3-D continuous transformation. To our knowledge, this is the first report of a completely automated method. The protocol is evaluated in a small clinical case study (nine patients). We compare ejection fractions (EFs) computed with the new approach to those obtained using the standard clinical technique, single-photon emission computed tomography multigated acquisition. Errors on six datasets were found to be within six percentage points. A further two, with poor image quality, improved upon EFs from manually delineated contours, and the last failed due to artifacts in the data. Volume-time curves were derived and the results compared to those from manual segmentation. Improvement over an earlier published version of the method is noted.
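    For reference, the EF comparison above rests on the standard definition from end-diastolic and end-systolic volumes; this can be sketched as follows (the volume-time curve is made up for illustration):

```python
def ejection_fraction(volume_curve):
    """EF (%) from an LV volume-time curve:
    EF = (EDV - ESV) / EDV * 100, where EDV and ESV are the
    maximum (end-diastolic) and minimum (end-systolic) volumes."""
    edv, esv = max(volume_curve), min(volume_curve)
    return (edv - esv) / edv * 100.0

# hypothetical LV volumes (mL) over one cardiac cycle
volumes_ml = [120, 110, 85, 60, 50, 55, 80, 105, 118]
ef = ejection_fraction(volumes_ml)
```

    The segmentation method's job is to produce the volume-time curve; once that exists, EF follows directly from the endpoints of the curve.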

  15. AUTOMATED INSPECTION OF POWER LINE CORRIDORS TO MEASURE VEGETATION UNDERCUT USING UAV-BASED IMAGES

    Directory of Open Access Journals (Sweden)

    M. Maurer

    2017-08-01

    Full Text Available Power line corridor inspection is a time-consuming task that is performed mostly manually. As the development of UAVs has made huge progress in recent years, and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, and semantically segmented, and inter-class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for wiry objects (power lines) on the one hand and solid objects (the surroundings) on the other. The automated selection is realized by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Owing to the geo-referenced semantic 3D reconstructions, documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation into wiry and solid objects in the 3D reconstruction routine improves the quality of the vegetation undercut inspection. We show that the semantic segmentation generalizes to datasets acquired using different acquisition routines and to different seasons.
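    The inter-class distance measurement between the reconstructed power line and the vegetation points can be sketched with a nearest-neighbour query; the point sets below are synthetic assumptions, not the paper's data:

```python
import numpy as np
from scipy.spatial import cKDTree

def min_clearance(powerline_pts, vegetation_pts):
    """Minimum 3-D distance from any vegetation point to the power line,
    via a k-d tree nearest-neighbour query against the line points."""
    tree = cKDTree(powerline_pts)
    dists, _ = tree.query(vegetation_pts)
    return float(dists.min())

# power line sampled along x at height 10 m; three vegetation points (x, y, z)
line = np.array([[x, 0.0, 10.0] for x in np.linspace(0, 50, 101)])
veg = np.array([[10.0, 2.0, 4.0], [25.0, 1.0, 7.5], [40.0, 3.0, 5.0]])
clearance = min_clearance(line, veg)
```

    Points whose clearance falls below a maintenance threshold would then be flagged; because the reconstruction is geo-referenced, each flagged point carries a real-world location.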

  16. Comparison of known food weights with image-based portion-size automated estimation and adolescents' self-reported portion size.

    Science.gov (United States)

    Lee, Christina D; Chae, Junghoon; Schap, TusaRebecca E; Kerr, Deborah A; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2012-03-01

    Diet is a critical element of diabetes self-management. An emerging area of research is the use of