WorldWideScience

Sample records for high-content image analysis

  1. Image analysis benchmarking methods for high-content screen design.

    Science.gov (United States)

    Fuller, C J; Straight, A F

    2010-05-01

    The recent development of complex chemical and small interfering RNA (siRNA) collections has enabled large-scale cell-based phenotypic screening. High-content and high-throughput imaging are widely used methods to record phenotypic data after chemical and siRNA treatment, and numerous image processing and analysis methods have been used to quantify these phenotypes. Currently, there are no standardized methods for evaluating the effectiveness of new and existing image processing and analysis tools for an arbitrary screening problem. We generated a series of benchmarking images that represent commonly encountered variation in high-throughput screening data and used these image standards to evaluate the robustness of five different image analysis methods to changes in signal-to-noise ratio, focal plane, cell density and phenotype strength. The analysis methods that were most reliable in the presence of experimental variation required few cells to accurately distinguish phenotypic changes between control and experimental data sets. We conclude that, by applying these simple benchmarking principles, an a priori estimate of the image acquisition requirements for phenotypic analysis can be made before initiating an image-based screen. Application of this benchmarking methodology provides a mechanism to significantly reduce data acquisition and analysis burdens and to improve data quality and information content.
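    The benchmarking idea above — synthetic images in which signal-to-noise ratio and focus are varied independently — can be sketched in a few lines. This is a minimal illustration, not the authors' actual image standards; every parameter value here is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_benchmark_image(shape=(128, 128), n_cells=10, signal=100.0,
                         noise_sd=10.0, defocus_sigma=0.0, seed=0):
    """Synthesize a fluorescence-like benchmark image: Gaussian 'cells'
    on a dark background, with tunable noise (SNR) and blur (focus)."""
    rng = np.random.default_rng(seed)
    img = np.zeros(shape)
    for _ in range(n_cells):
        r = rng.integers(10, shape[0] - 10)
        c = rng.integers(10, shape[1] - 10)
        img[r, c] = signal
    img = gaussian_filter(img, sigma=3.0)   # turn impulses into cell-sized spots
    img *= signal / img.max()               # restore the peak amplitude
    if defocus_sigma > 0:
        img = gaussian_filter(img, sigma=defocus_sigma)  # simulate defocus
    img += rng.normal(0.0, noise_sd, shape)              # additive camera noise
    return img

sharp = make_benchmark_image(noise_sd=5.0)
noisy = make_benchmark_image(noise_sd=50.0)                     # degraded SNR
defocused = make_benchmark_image(noise_sd=5.0, defocus_sigma=4.0)  # shifted focal plane
```

Feeding such controlled series to each candidate analysis method reveals where it breaks down before any real screening data are acquired.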

  2. Development of automatic image analysis methods for high-throughput and high-content screening

    NARCIS (Netherlands)

    Di, Zi

    2013-01-01

    This thesis focuses on the development of image analysis methods for ultra-high-content analysis of high-throughput screens, in which cellular phenotype responses to various genetic or chemical perturbations are under investigation. Our primary goal is to deliver efficient and robust image analysis methods.

  4. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells assembled closely or directly onto the sensor surface. Direct assembly of cell groups on the CMOS sensor surface allows large-field imaging (6.66 mm×5.32 mm, the entire active area of the sensor) within a second. Trypan blue-stained and non-stained cells in the same field on the CMOS sensor were successfully distinguished as blue- and white-colored images, respectively, under white LED illumination. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on its micro-lens array. Our proposed approach is a promising technique for real-time, high-content analysis of single cells over a large field area based on color imaging.
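    The white-versus-blue readout lends itself to a simple per-cell color rule. A toy sketch — the 1.2 blue-excess factor and the RGB values are illustrative assumptions, not values from the paper:

```python
def classify_trypan_blue(rgb_means, blue_excess=1.2):
    """Label each cell 'stained' (dead) when its mean blue intensity
    exceeds the mean of its red and green intensities by a chosen
    factor; otherwise 'unstained' (viable)."""
    labels = []
    for r, g, b in rgb_means:
        labels.append("stained" if b > blue_excess * (r + g) / 2 else "unstained")
    return labels

# one white-ish (viable) cell and one blue-ish (Trypan blue-stained) cell
cells = [(200, 200, 205), (90, 95, 180)]
labels = classify_trypan_blue(cells)  # ['unstained', 'stained']
```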

  5. Open Source High Content Analysis Utilizing Automated Fluorescence Lifetime Imaging Microscopy.

    Science.gov (United States)

    Görlitz, Frederik; Kelly, Douglas J; Warren, Sean C; Alibhai, Dominic; West, Lucien; Kumar, Sunil; Alexandrov, Yuriy; Munro, Ian; Garcia, Edwin; McGinty, James; Talbot, Clifford; Serwa, Remigiusz A; Thinon, Emmanuelle; da Paola, Vincenzo; Murray, Edward J; Stuhmeier, Frank; Neil, Mark A A; Tate, Edward W; Dunsby, Christopher; French, Paul M W

    2017-01-18

    We present an open source high content analysis instrument utilizing automated fluorescence lifetime imaging (FLIM) for assaying protein interactions using Förster resonance energy transfer (FRET) based readouts of fixed or live cells in multiwell plates. This provides a means to screen for cell signaling processes read out using intramolecular FRET biosensors or intermolecular FRET of protein interactions such as oligomerization or heterodimerization, which can be used to identify binding partners. We describe here the functionality of this automated multiwell plate FLIM instrumentation and present exemplar data from our studies of HIV Gag protein oligomerization and a time course of a FRET biosensor in live cells. A detailed description of the practical implementation is then provided with reference to a list of hardware components and a description of the open source data acquisition software written in µManager. The application of FLIMfit, an open source MATLAB-based client for the OMERO platform, to analyze arrays of multiwell plate FLIM data is also presented. The protocols for imaging fixed and live cells are outlined and a demonstration of an automated multiwell plate FLIM experiment using cells expressing fluorescent protein-based FRET constructs is presented. This is complemented by a walk-through of the data analysis for this specific FLIM FRET data set.
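    At the heart of a FLIM-FRET readout is the donor fluorescence lifetime: FRET shortens it. A minimal sketch of lifetime estimation by mono-exponential fitting on noise-free synthetic decays — real TCSPC data require instrument-response deconvolution, which tools such as FLIMfit handle and this sketch omits:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_lifetime(t, counts):
    """Fit I(t) = A * exp(-t / tau) and return the lifetime tau (ns)."""
    decay = lambda tt, A, tau: A * np.exp(-tt / tau)
    (A, tau), _ = curve_fit(decay, t, counts, p0=(counts[0], 1.0))
    return tau

t = np.linspace(0, 10, 200)              # time after excitation (ns)
donor_only = 1000 * np.exp(-t / 2.5)     # unquenched donor, tau = 2.5 ns
fret_pair = 1000 * np.exp(-t / 1.2)      # FRET-quenched donor, tau = 1.2 ns
```

Comparing the fitted lifetimes per well (or per cell) is what turns a FLIM plate into a FRET interaction screen.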

  6. A Novel Automated High-Content Analysis Workflow Capturing Cell Population Dynamics from Induced Pluripotent Stem Cell Live Imaging Data

    Science.gov (United States)

    Kerz, Maximilian; Folarin, Amos; Meleckyte, Ruta; Watt, Fiona M.; Dobson, Richard J.; Danovi, Davide

    2016-01-01

    Most image analysis pipelines rely on multiple channels per image with subcellular reference points for cell segmentation. Single-channel phase-contrast images are often problematic, especially for cells with unfavorable morphology, such as induced pluripotent stem cells (iPSCs). Live imaging poses a further challenge, because of the introduction of the dimension of time. Evaluations cannot be easily integrated with other biological data sets including analysis of endpoint images. Here, we present a workflow that incorporates a novel CellProfiler-based image analysis pipeline enabling segmentation of single-channel images with a robust R-based software solution to reduce the dimension of time to a single data point. These two packages combined allow robust segmentation of iPSCs solely on phase-contrast single-channel images and enable live imaging data to be easily integrated to endpoint data sets while retaining the dynamics of cellular responses. The described workflow facilitates characterization of the response of live-imaged iPSCs to external stimuli and definition of cell line–specific, phenotypic signatures. We present an efficient tool set for automated high-content analysis suitable for cells with challenging morphology. This approach has potentially widespread applications for human pluripotent stem cells and other cell types. PMID:27256155

  7. Development of a quantitative morphological assessment of toxicant-treated zebrafish larvae using brightfield imaging and high-content analysis.

    Science.gov (United States)

    Deal, Samantha; Wambaugh, John; Judson, Richard; Mosher, Shad; Radio, Nick; Houck, Keith; Padilla, Stephanie

    2016-09-01

    One of the rate-limiting procedures in a developmental zebrafish screen is the morphological assessment of each larva. Most researchers opt for a time-consuming, structured visual assessment by trained human observer(s). The present studies were designed to develop a more objective, accurate and rapid method for screening zebrafish for dysmorphology. Instead of the very detailed human assessment, we have developed the computational malformation index, which combines the use of high-content imaging with a very brief human visual assessment. Each larva was quickly assessed by a human observer (basic visual assessment), killed, fixed and assessed for dysmorphology with the Zebratox V4 BioApplication using the Cellomics® ArrayScan® V(TI) high-content image analysis platform. The basic visual assessment adds in-life parameters, and the high-content analysis assesses each individual larva for various features (total area, width, spine length, head-tail length, length-width ratio, perimeter-area ratio). In developing the computational malformation index, a training set of hundreds of embryos treated with hundreds of chemicals was visually assessed using the basic or detailed method. In the second phase, we assessed both the stability of these high-content measurements and their performance using a test set of zebrafish treated with a dose range of two reference chemicals (trans-retinoic acid or cadmium). We found the measures were stable for at least 1 week, and comparison of these automated measures to detailed visual inspection of the larvae showed excellent congruence. Our computational malformation index provides an objective manner for rapid phenotypic brightfield assessment of individual larvae in a developmental zebrafish assay. Copyright © 2016 John Wiley & Sons, Ltd.
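    A malformation index of this kind can be illustrated as a feature-wise z-score against control larvae. The aggregation rule and all numbers below are illustrative assumptions, not the published Zebratox V4 formula:

```python
import numpy as np

def malformation_index(features, ctrl_mean, ctrl_sd):
    """Score one larva as the mean absolute z-score of its morphometric
    features (e.g. total area, spine length, length-width ratio)
    relative to control statistics: ~0 for normal, larger for dysmorphic."""
    z = (np.asarray(features, dtype=float) - ctrl_mean) / ctrl_sd
    return float(np.mean(np.abs(z)))

# hypothetical control statistics: [area (px), spine length (px), L/W ratio]
ctrl_mean = np.array([5000.0, 300.0, 2.1])
ctrl_sd = np.array([400.0, 25.0, 0.15])

normal = malformation_index([5100, 305, 2.0], ctrl_mean, ctrl_sd)
malformed = malformation_index([3200, 210, 1.4], ctrl_mean, ctrl_sd)
```

Ranking larvae by such an index replaces the detailed visual scoring with a single objective number per animal.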

  8. Morphometric Characterization of Rat and Human Alveolar Macrophage Cell Models and their Response to Amiodarone using High Content Image Analysis.

    Science.gov (United States)

    Hoffman, Ewelina; Patel, Aateka; Ball, Doug; Klapwijk, Jan; Millar, Val; Kumar, Abhinav; Martin, Abigail; Mahendran, Rhamiya; Dailey, Lea Ann; Forbes, Ben; Hutter, Victoria

    2017-05-24

    Progress to the clinic may be delayed or prevented when vacuolated or "foamy" alveolar macrophages are observed during non-clinical inhalation toxicology assessment. The first step in developing methods to study this response in vitro is to characterize macrophage cell lines and their response to drug exposures. Human (U937) and rat (NR8383) cell lines and primary rat alveolar macrophages obtained by bronchoalveolar lavage were characterized using high-content fluorescence image analysis to quantify cell viability, morphometry, and phospholipid and neutral lipid accumulation. Cell health, morphology and lipid content were comparable across the cell models. Responses to amiodarone, a known inducer of phospholipidosis, required analysis of shifts in cell population profiles (the proportion of cells with elevated vacuolation or lipid content) rather than average population data, which were insensitive to the changes observed. A high-content image analysis assay was developed and used to provide detailed morphological characterization of rat and human alveolar-like macrophages and their response to a phospholipidosis-inducing agent. This provides a basis for development of assays to predict or understand macrophage vacuolation following inhaled drug exposure.

  9. Towards semantic-driven high-content image analysis: an operational instantiation for mitosis detection in digital histopathology.

    Science.gov (United States)

    Racoceanu, D; Capron, F

    2015-06-01

    This study concerns a novel symbolic cognitive vision framework that emerged from the Cognitive Microscopy (MICO) initiative. MICO aims to support the evolution towards digital pathology by studying cognitive, clinically compliant protocols involving routine virtual microscopy. We instantiate this paradigm in the case of mitotic counting, a component of breast cancer grading in histopathology. The key concept of our approach is the role of semantics as the driver of the whole-slide image analysis protocol. With all decisions taken in a semantic, formal world, MICO represents a knowledge-driven platform for digital histopathology; the core of the initiative is therefore knowledge representation and reasoning. Pathologists' knowledge and strategies are used to efficiently guide image analysis algorithms, so that hard-coded knowledge and the semantic and usability gaps are reduced by a leading, active role for reasoning and semantic approaches. Integrating ontologies and reasoning in confluence with modular imaging algorithms allows the emergence of new clinically compliant protocols for digital pathology. This represents a promising way to solve decision reproducibility and traceability issues in digital histopathology, while increasing the flexibility of the platform and its acceptance by pathologists, who always retain legal responsibility in the diagnostic process. The proposed protocols open the way to increasingly reliable cancer assessment (e.g. analysis of multiple slides per sample), quantifiable and traceable second opinions for cancer grading, and modern capabilities for cancer research support in histopathology (e.g. content- and context-based indexing and retrieval). Last but not least, the generic approach introduced here is applicable to a number of additional challenges related to molecular imaging and, in general, to high-content image exploration.

  10. Multiparametric Cell Cycle Analysis Using the Operetta High-Content Imager and Harmony Software with PhenoLOGIC.

    Directory of Open Access Journals (Sweden)

    Andrew J Massey

    High-content imaging is a powerful tool for determining cell phenotypes at the single-cell level. Characterising the effect of small molecules on cell cycle distribution is important for understanding their mechanism of action, especially in oncology drug discovery, but also for understanding potential toxicology liabilities. Here, a high-throughput phenotypic assay utilising the PerkinElmer Operetta high-content imager and Harmony software to determine cell cycle distribution is described. PhenoLOGIC, a machine-learning algorithm within the Harmony software, was employed to robustly separate single cells from cell clumps. DNA content, EdU incorporation and pHH3 (S10) expression levels were subsequently utilised to separate cells into the various phases of the cell cycle. The assay is amenable to multiplexing with an additional pharmacodynamic marker to assess cell cycle changes within a specific cellular sub-population. Using this approach, the cell cycle distribution of γH2AX-positive nuclei was determined following treatment with DNA damaging agents. Likewise, the assay can be multiplexed with Ki67 to determine the fraction of quiescent cells, and with BrdU dual labelling to determine S-phase duration. This methodology therefore provides a relatively cheap, quick and high-throughput phenotypic method for determining accurate cell cycle distribution for small-molecule mechanism-of-action and drug toxicity studies.
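    The three-marker gating logic (DNA content, EdU, pHH3) can be sketched as a per-cell decision rule; all thresholds below are illustrative assumptions, not the paper's calibrated gates:

```python
def gate_cell_cycle(dna, edu, phh3, dna_2n=1.0, edu_cut=0.5, phh3_cut=0.5):
    """Assign each cell a cycle phase from integrated DNA content,
    EdU incorporation, and pHH3(S10) intensity."""
    phases = []
    for d, e, p in zip(dna, edu, phh3):
        if e > edu_cut:
            phases.append("S")        # actively replicating DNA
        elif p > phh3_cut and d > 1.5 * dna_2n:
            phases.append("M")        # 4N DNA plus mitotic marker
        elif d > 1.5 * dna_2n:
            phases.append("G2")       # 4N DNA, not mitotic
        else:
            phases.append("G1")       # 2N DNA
    return phases

dna = [1.0, 1.5, 2.0, 2.0]    # normalized DNA content (2N = 1.0)
edu = [0.1, 0.9, 0.1, 0.1]
phh3 = [0.0, 0.0, 0.1, 0.9]
phases = gate_cell_cycle(dna, edu, phh3)  # ['G1', 'S', 'G2', 'M']
```

The same rule applies unchanged within any sub-population first selected by an extra marker (e.g. γH2AX-positive nuclei), which is what makes the multiplexing straightforward.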

  11. Information management for high content live cell imaging

    Directory of Open Access Journals (Sweden)

    White Michael RH

    2009-07-01

    Background: High-content live cell imaging experiments are able to track the cellular localisation of labelled proteins in multiple live cells over a time course. Such experiments generate multiple large datasets that are often stored in an ad hoc manner, which hinders the identification of previously gathered data that may be relevant to current analyses. Whilst solutions exist for managing image data, they are primarily concerned with storage and retrieval of the images themselves and not the data derived from them. There is therefore a requirement for an information management solution that facilitates the indexing of experimental metadata and results of high-content live cell imaging experiments. Results: We have designed and implemented a data model and information management solution for the data gathered through high-content live cell imaging experiments. Many of the experiments to be stored measure the translocation of fluorescently labelled proteins from cytoplasm to nucleus in individual cells. The functionality of this database has been enhanced by the addition of an algorithm that automatically annotates results of these experiments with the timings of translocations, and the periods of any oscillatory translocations, as they are uploaded to the repository. Testing has shown the algorithm to perform well with a variety of previously unseen data. Conclusion: Our repository is a fully functional example of how high-throughput imaging data may be effectively indexed and managed to address the requirements of end users. By implementing the automated analysis of experimental results, we have provided a clear impetus for individuals to ensure that their data form part of that which is stored in the repository. Although focused on imaging, the solution provided is sufficiently generic to be applied to other functional proteomics and genomics experiments. The software is available from http://code.google.com/p/livecellim/
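    Automatic annotation of translocation timings can be illustrated by threshold-crossing detection on a per-cell nuclear-to-cytoplasmic intensity ratio trace. The threshold and the trace values below are illustrative assumptions, not the repository's actual algorithm:

```python
import numpy as np

def translocation_times(frames, nuc_cyt_ratio, threshold=2.0):
    """Return the time points at which a nuclear/cytoplasmic intensity
    ratio first crosses `threshold` from below; two or more crossings
    suggest oscillatory translocation."""
    r = np.asarray(nuc_cyt_ratio)
    idx = np.flatnonzero((r[:-1] < threshold) & (r[1:] >= threshold))
    return [frames[i + 1] for i in idx]

frames = list(range(10))                                  # imaging time points
trace = [1.0, 1.1, 1.2, 2.5, 3.0, 1.5, 1.2, 2.2, 2.8, 1.0]
events = translocation_times(frames, trace)               # [3, 7]
```

Storing such derived event times alongside the experimental metadata is precisely what makes the results searchable without re-opening the images.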

  12. High content analysis in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Rinaldi, Federica; Motti, Dario; Ferraiuolo, Laura; Kaspar, Brian K

    2017-04-01

    Amyotrophic lateral sclerosis (ALS) is a devastating disease characterized by the progressive loss of motor neurons. Neurons, astrocytes, oligodendrocytes and microglial cells all undergo pathological modifications in the onset and progression of ALS. A number of genes involved in the etiopathology of the disease have been identified, but a complete understanding of the molecular mechanisms of ALS has yet to be determined. Currently, people affected by ALS have a life expectancy of only two to five years from diagnosis. The search for a treatment has been slow and mostly unsuccessful, leaving patients in desperate need of better therapies. Until recently, most pre-clinical studies utilized the available ALS animal models. In recent years, the development of new protocols for isolation of patient cells and differentiation into relevant cell types has provided new tools to model ALS, potentially more relevant to the disease itself, as they come directly from patients. The use of stem cells is showing promise to facilitate ALS research by expanding our understanding of the disease and helping to identify potential new therapeutic targets and therapies to help patients. Advancements in high content analysis (HCA) have the power to move ALS research forward by combining automated image acquisition with digital image analysis. With modern HCA machines it is possible, in just a few hours, to observe changes in the morphology and survival of cells under stimulation with hundreds, if not thousands, of drugs and compounds. In this article, we summarize the major molecular and cellular hallmarks of ALS, describe the advancements provided by the in vitro models developed in the last few years, and review the studies that have applied HCA to the ALS field to date. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Semiautomatic High-Content Analysis of Complex Images from Cocultures of Vascular Smooth Muscle Cells and Macrophages: A CellProfiler Showcase.

    Science.gov (United States)

    Roeper, Matthias; Braun-Dullaeus, Ruediger C; Weinert, Sönke

    2017-08-01

    Automation in microscopy and cell culture and the ease of digital imaging allow more information to be obtained from single samples and the upscaling of image-based analysis to high-content approaches. Simple segmentation algorithms for biological imagery are now widespread in biomedical research, but processing of complex sample structures — for example, variable sample compositions, cell shapes and sizes, and rare events — remains a difficult task. As there is no perfect method for image segmentation and fully automatic image analysis of complex content, we aimed to succeed by identifying unique and reliable features within the sample. Using a coculture of vascular smooth muscle cells (VSMCs) and macrophages (MPs) as an example, we demonstrate how rare interactions within this highly variable sample type can be analyzed. Because of limitations of immunocytochemistry in our specific setup, we developed a semiautomatic approach to examine the interaction of lipid-laden MPs with VSMCs under hypoxic conditions, based on nuclei morphology, by high-content analysis using the open-source software CellProfiler (www.cellprofiler.org). We provide evidence that, in comparison with fully automatic analysis, a low threshold within the analysis workflow and subsequent manual control save time while providing more objective and reliable results.

  14. Shedding light on filovirus infection with high-content imaging.

    Science.gov (United States)

    Pegoraro, Gianluca; Bavari, Sina; Panchal, Rekha G

    2012-08-01

    Microscopy has been instrumental in the discovery and characterization of microorganisms. Major advances in high-throughput fluorescence microscopy and automated, high-content image analysis tools are paving the way to the systematic and quantitative study of the molecular properties of cellular systems, both at the population and at the single-cell level. High-Content Imaging (HCI) has been used to characterize host-virus interactions in genome-wide reverse genetic screens and to identify novel cellular factors implicated in the binding, entry, replication and egress of several pathogenic viruses. Here we present an overview of the most significant applications of HCI in the context of the cell biology of filovirus infection. HCI assays have been recently implemented to quantitatively study filoviruses in cell culture, employing either infectious viruses in a BSL-4 environment or surrogate genetic systems in a BSL-2 environment. These assays are becoming instrumental for small molecule and siRNA screens aimed at the discovery of both cellular therapeutic targets and of compounds with anti-viral properties. We discuss the current practical constraints limiting the implementation of high-throughput biology in a BSL-4 environment, and propose possible solutions to safely perform high-content, high-throughput filovirus infection assays. Finally, we discuss possible novel applications of HCI in the context of filovirus research with particular emphasis on the identification of possible cellular biomarkers of virus infection.

  16. Application of high-content image analysis for quantitatively estimating lipid accumulation in oleaginous yeasts with potential for use in biodiesel production.

    Science.gov (United States)

    Capus, Aurélie; Monnerat, Marianne; Ribeiro, Luiz Carlos; de Souza, Wanderley; Martins, Juliana Lopes; Sant'Anna, Celso

    2016-03-01

    Biodiesel from oleaginous microorganisms is a viable substitute for fossil fuels, but current methods for evaluating the lipid productivity of microorganisms do not analyze lipid dynamics in single cells. Here, we describe high-content image analysis (HCA) as a promising strategy for screening oleaginous microorganisms for biodiesel production while generating single-cell lipid dynamics data at high cell densities. Rhodotorula slooffiae yeast were grown in standard (CTL) or lipid-trigger medium (LTM), and lipid droplet (LD) accumulation was analyzed in deconvolved confocal microscopy images of cells stained with the lipophilic fluorescent Nile red (NR) dye, using automated cell and LD segmentation. The 'vesicle segmentation' method yielded valid morphometric results both for limited lipid accumulation in smaller LDs (CTL samples) and for high lipid accumulation in larger LDs (LTM samples), and detected changes in LD localization. Thus, HCA can be used to analyze the lipid accumulation patterns likely to be encountered in screens for biodiesel production.
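    Per-cell lipid droplet quantification of this kind reduces to thresholding the Nile red channel and labelling connected components. A minimal sketch — the threshold and the synthetic image are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def lipid_droplet_stats(nile_red, threshold):
    """Segment Nile-red-bright lipid droplets in a single-cell image and
    return the droplet count and total droplet area (pixels)."""
    mask = nile_red > threshold
    _, n = ndimage.label(mask)
    return n, int(mask.sum())

img = np.zeros((20, 20))
img[2:4, 2:4] = 5.0        # small droplet (4 px)
img[10:14, 10:14] = 5.0    # larger droplet (16 px)
n, area = lipid_droplet_stats(img, threshold=1.0)  # (2, 20)
```

Tracking `n` and `area` per cell over time is what distinguishes the single-cell dynamics described above from bulk gravimetric lipid assays.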

  17. Quantitative analysis of mitochondrial morphology and membrane potential in living cells using high-content imaging, machine learning, and morphological binning.

    Science.gov (United States)

    Leonard, Anthony P; Cameron, Robert B; Speiser, Jaime L; Wolf, Bethany J; Peterson, Yuri K; Schnellmann, Rick G; Beeson, Craig C; Rohrer, Bärbel

    2015-02-01

    Understanding the processes of mitochondrial dynamics (fission, fusion, biogenesis, and mitophagy) has been hampered by the lack of automated, deterministic methods to measure mitochondrial morphology from microscopic images. A method to quantify mitochondrial morphology and function is presented here using a commercially available automated high-content wide-field fluorescent microscopy platform and R programming-language-based semi-automated data analysis to achieve high throughput morphological categorization (puncta, rod, network, and large & round) and quantification of mitochondrial membrane potential. In conjunction with cellular respirometry to measure mitochondrial respiratory capacity, this method detected that increasing concentrations of toxicants known to directly or indirectly affect mitochondria (t-butyl hydroperoxide [TBHP], rotenone, antimycin A, oligomycin, ouabain, and carbonyl cyanide-p-trifluoromethoxyphenylhydrazone [FCCP]), decreased mitochondrial networked areas in cultured 661w cells to 0.60-0.80 at concentrations that inhibited respiratory capacity to 0.20-0.70 (fold change compared to vehicle). Concomitantly, mitochondrial swelling was increased from 1.4- to 2.3-fold of vehicle as indicated by changes in large & round areas in response to TBHP, oligomycin, or ouabain. Finally, the automated identification of mitochondrial location enabled accurate quantification of mitochondrial membrane potential by measuring intramitochondrial tetramethylrhodamine methyl ester (TMRM) fluorescence intensity. Administration of FCCP depolarized and administration of oligomycin hyperpolarized mitochondria, as evidenced by changes in intramitochondrial TMRM fluorescence intensities to 0.33- or 5.25-fold of vehicle control values, respectively. 
In summary, this high-content imaging method accurately quantified mitochondrial morphology and membrane potential in hundreds of thousands of cells on a per-cell basis, with sufficient throughput for pharmacological
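    The morphological binning step (puncta, rod, network, large & round) can be sketched from a binary mitochondrial mask using object area plus a bounding-box aspect ratio as a cheap elongation proxy. The cut-offs are illustrative assumptions, not the published criteria:

```python
import numpy as np
from scipy import ndimage

def bin_mitochondria(mask, puncta_max=5, large_min=50, round_aspect=1.5):
    """Bin each connected object: tiny -> puncta; large & compact ->
    large_round (swollen); large & elongated -> network; else rod."""
    labels, _ = ndimage.label(mask)
    bins = {"puncta": 0, "rod": 0, "network": 0, "large_round": 0}
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = int((labels[sl] == i).sum())
        aspect = max(h, w) / min(h, w)
        if area <= puncta_max:
            bins["puncta"] += 1
        elif area >= large_min and aspect < round_aspect:
            bins["large_round"] += 1
        elif area >= large_min:
            bins["network"] += 1
        else:
            bins["rod"] += 1
    return bins

mask = np.zeros((30, 40), dtype=bool)
mask[1:3, 1:3] = True      # 2x2 punctum
mask[10:13, 5:17] = True   # 3x12 rod
mask[20:28, 25:33] = True  # 8x8 swollen blob
counts = bin_mitochondria(mask)
```

Per-cell shifts in these bin counts (e.g. networked area down, large & round up) are the kind of readout the abstract reports for the toxicant treatments.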

  18. Graph cut and image intensity-based splitting improves nuclei segmentation in high-content screening

    Science.gov (United States)

    Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Yli-Harja, Olli; Dehio, Christoph

    2013-02-01

    Quantification of phenotypes in high-content screening experiments depends on the accuracy of single-cell analysis. In such analysis workflows, cell nuclei segmentation is typically the first step and is followed by cell body segmentation, feature extraction, and subsequent data analysis. It is therefore of utmost importance that the first steps of high-content analysis are performed accurately in order to guarantee the correctness of the final analysis results. In this paper, we present a novel cell nuclei image segmentation framework that exploits the robustness of graph cut to obtain an initial segmentation, which an image intensity-based clump-splitting method then refines into an accurate overall segmentation. Using quantitative benchmarks and qualitative comparison with real images from high-content screening experiments with complicated multinucleate cells, we show that our method outperforms other state-of-the-art nuclei segmentation methods. Moreover, we provide a modular and easy-to-use implementation of the method for a widely used platform.
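    The initialize-then-split idea can be illustrated without a graph-cut library. As a stand-in, this sketch seeds touching nuclei by heavy erosion and separates them with a watershed on the inverted distance transform; note the paper's actual initialization is graph cut and its splitting uses image intensity rather than distance, and the erosion depth here is an illustrative parameter:

```python
import numpy as np
from scipy import ndimage

def split_touching_nuclei(mask, erode_iter=6):
    """Separate touching nuclei in a binary mask: erode down to one core
    per nucleus, then grow the cores back with a watershed so the labels
    meet at the narrow 'neck' between objects."""
    cores = ndimage.binary_erosion(mask, iterations=erode_iter)
    markers, n = ndimage.label(cores)
    markers[~mask] = -1                                   # background seed
    dist = ndimage.distance_transform_edt(mask)
    inv = ((dist.max() - dist) * 10).astype(np.uint16)    # flood deep areas first
    labels = ndimage.watershed_ift(inv, markers)
    labels[labels == -1] = 0
    return labels, n

# two overlapping discs standing in for touching nuclei
yy, xx = np.mgrid[0:40, 0:60]
blob = ((yy - 20) ** 2 + (xx - 20) ** 2 <= 100) | \
       ((yy - 20) ** 2 + (xx - 38) ** 2 <= 100)
labels, n = split_touching_nuclei(blob)
```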

  19. Simultaneous multi-parametric analysis of Leishmania and of its hosting mammal cells: A high content imaging-based method enabling sound drug discovery process.

    Science.gov (United States)

    Forestier, Claire-Lise; Späth, Gerald Frank; Prina, Eric; Dasari, Sreekanth

    2015-11-01

    Leishmaniasis is a vector-borne disease for which only limited therapeutic options are available. The disease is ranked among the six most important tropical infectious diseases and represents the second-largest parasitic killer in the world. The development of new therapies has been hampered by the lack of technologies and methodologies that can be integrated into the complex physiological environment of a cell or organism and adapted to suitable in vitro and in vivo Leishmania models. Recent advances in microscopy imaging offer the possibility to assess the efficacy of potential drug candidates against Leishmania within host cells. This technology allows the simultaneous visualization of relevant phenotypes in parasite and host cells and the quantification of a variety of cellular events. In this review, we present the powerful cellular imaging methodologies that have been developed for drug screening in a biologically relevant context, addressing both high-content and high-throughput needs. Furthermore, we discuss the potential of intra-vital microscopy imaging in the context of the anti-leishmanial drug discovery process.

  20. High content image cytometry in the context of subnuclear organization.

    Science.gov (United States)

    De Vos, W H; Van Neste, L; Dieriks, B; Joss, G H; Van Oostveldt, P

    2010-01-01

    The organization of proteins in space and time is essential to their function. To accurately quantify subcellular protein characteristics in a population of cells, with regard to the stochasticity of events in a natural context, there is a fast-growing need for image-based cytometry. Simultaneously, the massive amount of data generated by image-cytometric analyses calls for tools that enable pattern recognition and automated classification. In this article, we present a general approach for multivariate phenotypic profiling of individual cell nuclei and quantification of subnuclear spots using automated fluorescence mosaic microscopy, optimized image processing tools, and supervised classification. We demonstrate the efficiency of our analysis by determining differential DNA damage repair patterns in response to genotoxic stress and radiation, and we show the potential of data mining in pinpointing specific phenotypes after transient transfection. The presented approach allowed for systematic analysis of subnuclear features in large image data sets and accurate classification of phenotypes at the level of the single cell. Consequently, this type of nuclear fingerprinting shows potential for high-throughput applications, such as functional protein assays or drug compound screening.
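    Supervised classification of such multivariate nuclear profiles can be illustrated with a nearest-centroid rule on per-nucleus feature vectors. The feature names, class labels and numbers are illustrative assumptions, not the study's classifier:

```python
import numpy as np

def train_centroids(features, labels):
    """Supervised phenotype classification sketch: compute one centroid
    per class from per-nucleus feature vectors (spot count, spot area)."""
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in set(labels)}

def classify(centroids, x):
    """Assign a nucleus to the class with the nearest centroid."""
    return min(centroids,
               key=lambda c: np.linalg.norm(np.asarray(x, dtype=float) - centroids[c]))

# hypothetical training nuclei: [DNA-damage spot count, total spot area (px)]
feats = [[2, 10], [3, 12], [40, 300], [38, 280]]
labs = ["untreated", "untreated", "irradiated", "irradiated"]
cents = train_centroids(feats, labs)
```

In practice the feature vectors would carry many more texture and morphology measurements, but the fingerprint-then-classify structure is the same.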

  1. High-content single-cell analysis on-chip using a laser microarray scanner.

    Science.gov (United States)

    Zhou, Jing; Wu, Yu; Lee, Sang-Kwon; Fan, Rong

    2012-12-07

    High-content cellomic analysis is a powerful tool for rapid screening of cellular responses to extracellular cues and examination of intracellular signal transduction pathways at the single-cell level. In conjunction with microfluidics technology that provides unique advantages in sample processing and precise control of fluid delivery, it holds great potential to transform lab-on-a-chip systems for high-throughput cellular analysis. However, high-content imaging instruments are expensive, sophisticated, and not readily accessible. Herein, we report on a laser scanning cytometry approach that exploits a bench-top microarray scanner as an end-point reader to perform rapid and automated fluorescence imaging of cells cultured on a chip. Using high-content imaging analysis algorithms, we demonstrated multiplexed measurements of morphometric and proteomic parameters from all single cells. Our approach shows the improvement of both sensitivity and dynamic range by two orders of magnitude as compared to conventional epifluorescence microscopy. We applied this technology to high-throughput analysis of mesenchymal stem cells on an extracellular matrix protein array and characterization of heterotypic cell populations. This work demonstrates the feasibility of a laser microarray scanner for high-content cellomic analysis and opens up new opportunities to conduct informative cellular analysis and cell-based screening in the lab-on-a-chip systems.

  2. High-content analysis for drug delivery and nanoparticle applications.

    Science.gov (United States)

    Brayden, David J; Cryan, Sally-Ann; Dawson, Kenneth A; O'Brien, Peter J; Simpson, Jeremy C

    2015-08-01

    High-content analysis (HCA) provides quantitative multiparametric cellular fluorescence data. From its origins in discovery toxicology, it is now addressing fundamental questions in drug delivery. Nanoparticles (NPs), polymers, and intestinal permeation enhancers are being harnessed in drug delivery systems to modulate plasma membrane properties and the intracellular environment. Identifying comparative mechanistic cytotoxicity on sublethal events is crucial to expedite the development of such systems. NP uptake and intracellular routing pathways are also being dissected using chemical and genetic perturbations, with the potential to assess the intracellular fate of targeted and untargeted particles in vitro. As we discuss here, HCA is set to make a major impact in preclinical delivery research by elucidating the intracellular pathways of NPs and the in vitro mechanistic-based toxicology of formulation constituents.

  3. A multi-channel high time resolution detector for high content imaging

    CERN Document Server

    Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J

    2009-01-01

    Medical imaging has long benefited from advances in photon counting detectors arising from space and particle physics. We describe a microchannel plate-based detector system for high content (multi-parametric) analysis, specifically designed to provide a step change in performance and throughput for measurements in imaged live cells and tissue for the ‘omics’. The detector system integrates multi-channel, high time resolution, photon counting capability into a single miniaturized detector with integrated ASIC electronics, comprising a fast, low-power amplifier-discriminator and TDC for every channel of the discrete pixel electronic readout, and achieving a pixel density improvement of two orders of magnitude over current comparable devices. The device combines high performance, easy reconfigurability, and economy within a compact footprint. We present simulations and preliminary measurements in the context of our ultimate goals of 20 ps time resolution with multi-channel parallel analysis (1024 chan...

  4. Quantitative high content imaging of cellular adaptive stress response pathways in toxicity for chemical safety assessment.

    Science.gov (United States)

    Wink, Steven; Hiemstra, Steven; Huppelschoten, Suzanna; Danen, Erik; Niemeijer, Marije; Hendriks, Giel; Vrieling, Harry; Herpers, Bram; van de Water, Bob

    2014-03-17

    Over the past decade, major leaps forward have been made in the mechanistic understanding and identification of adaptive stress response landscapes underlying toxic insult using transcriptomics approaches. However, for the prediction of adverse outcomes, these approaches have several major limitations. First, the limited number of samples that can be analyzed restricts in-depth analysis of concentration-time course relationships for toxic stress responses. Second, these transcriptomics analyses have been based on the whole cell population, thereby inevitably precluding single-cell analysis. Third, transcriptomics measures only the transcript level, ignoring (post)translational regulation. We believe these limitations are circumvented by the application of high content analysis of relevant toxicant-induced adaptive stress signaling pathways using bacterial artificial chromosome (BAC) green fluorescent protein (GFP) reporter cell-based assays. The goal is to establish a platform that incorporates all adaptive stress pathways that are relevant for toxicity, with a focus on drug-induced liver injury. In addition, cellular stress responses typically follow cell perturbations at the subcellular organelle level. Therefore, we complement our reporter line panel with reporters for specific organelle morphometry and function. Here, we review the approaches of high content imaging of cellular adaptive stress responses to chemicals and their application in the mechanistic understanding and prediction of chemical toxicity at a systems toxicology level.

  5. Automated analysis of high-content microscopy data with deep learning.

    Science.gov (United States)

    Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J

    2017-04-18

    Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.

  6. A High-Content Larval Zebrafish Brain Imaging Method for Small Molecule Drug Discovery

    Science.gov (United States)

    Liu, Harrison; Chen, Steven; Huang, Kevin; Kim, Jeffrey; Mo, Han; Iovine, Raffael; Gendre, Julie; Pascal, Pauline; Li, Qiang; Sun, Yaping; Dong, Zhiqiang; Arkin, Michelle; Guo, Su

    2016-01-01

    Drug discovery in whole organisms such as zebrafish is a promising approach for identifying biologically relevant lead compounds. However, high content imaging of zebrafish at cellular resolution is challenging due to the difficulty of orienting larvae en masse such that the cell type of interest is in clear view. We report the development of the multi-pose imaging method, which uses 96-well round-bottom plates combined with a standard liquid handler to repose the larvae within each well multiple times, such that an image in a specific orientation can be acquired. We have validated this method in a chemo-genetic zebrafish model of dopaminergic neuron degeneration. For this purpose, we have developed an analysis pipeline that identifies the larval brain in each image and then quantifies neuronal health in CellProfiler. Our method achieves an SSMD* score of 6.96 (robust Z’-factor of 0.56) and is suitable for screening libraries of up to 10^5 compounds. PMID:27732643
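The SSMD* and robust Z’-factor figures quoted in this record are standard assay-quality statistics computed from positive- and negative-control distributions. A minimal sketch of common robust (median/MAD-based) formulations, using simulated control data rather than the paper's actual neuron counts:

```python
import numpy as np

def mad(x):
    # Median absolute deviation, scaled by 1.4826 so it estimates the
    # standard deviation for normally distributed data.
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def robust_ssmd(pos, neg):
    # Robust SSMD: median separation over the pooled robust spread.
    return (np.median(pos) - np.median(neg)) / np.sqrt(mad(pos)**2 + mad(neg)**2)

def robust_z_factor(pos, neg):
    # Robust Z'-factor: 1 - 3*(spread_pos + spread_neg) / |median separation|.
    return 1.0 - 3.0 * (mad(pos) + mad(neg)) / abs(np.median(pos) - np.median(neg))

# Hypothetical control wells (illustrative values, not the paper's data):
rng = np.random.default_rng(0)
healthy = rng.normal(100.0, 5.0, 384)   # vehicle-treated control readout
lesioned = rng.normal(40.0, 5.0, 384)   # neuron-ablation control readout
print(robust_ssmd(healthy, lesioned), robust_z_factor(healthy, lesioned))
```

A robust Z'-factor above roughly 0.5, as reported here, is conventionally taken to indicate a screenable assay.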

  7. High-Content Microscopy Analysis of Subcellular Structures: Assay Development and Application to Focal Adhesion Quantification.

    Science.gov (United States)

    Kroll, Torsten; Schmidt, David; Schwanitz, Georg; Ahmad, Mubashir; Hamann, Jana; Schlosser, Corinne; Lin, Yu-Chieh; Böhm, Konrad J; Tuckermann, Jan; Ploubidou, Aspasia

    2016-07-01

    High-content analysis (HCA) converts raw light microscopy images to quantitative data through the automated extraction, multiparametric analysis, and classification of the relevant information content. Combined with automated high-throughput image acquisition, HCA applied to the screening of chemicals or RNAi-reagents is termed high-content screening (HCS). Its power in quantifying cell phenotypes makes HCA applicable also to routine microscopy. However, developing effective HCA and bioinformatic analysis pipelines for acquisition of biologically meaningful data in HCS is challenging. Here, the step-by-step development of an HCA assay protocol and an HCS bioinformatics analysis pipeline are described. The protocol's power is demonstrated by application to focal adhesion (FA) detection, quantitative analysis of multiple FA features, and functional annotation of signaling pathways regulating FA size, using primary data of a published RNAi screen. The assay and the underlying strategy are aimed at researchers performing microscopy-based quantitative analysis of subcellular features, on a small scale or in large HCS experiments. © 2016 by John Wiley & Sons, Inc.

  8. Predicting In Vivo Anti-Hepatofibrotic Drug Efficacy Based on In Vitro High-Content Analysis

    OpenAIRE

    2011-01-01

    BACKGROUND/AIMS: Many anti-fibrotic drugs with high in vitro efficacies fail to produce significant effects in vivo. The aim of this work is to use a statistical approach to design a numerical predictor that correlates better with in vivo outcomes. METHODS: High-content analysis (HCA) was performed with 49 drugs on hepatic stellate cells (HSCs) LX-2 stained with 10 fibrotic markers. ~0.3 billion feature values from all cells in >150,000 images were quantified to reflect the drug effects. A sy...

  9. High content analysis of phagocytic activity and cell morphology with PuntoMorph.

    Science.gov (United States)

    Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla; Peters, Vanessa Ann; Shi, Yan; Brambilla, Roberta

    2017-11-01

    Phagocytosis is essential for maintenance of normal homeostasis and healthy tissue. As such, it is a therapeutic target for a wide range of clinical applications. The development of phenotypic screens targeting phagocytosis has lagged behind, however, due to the difficulties associated with image-based quantification of phagocytic activity. We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties. The algorithm employs statistical modeling to determine the mean fluorescence of individual beads within each image, and uses the information to conduct an accurate count of phagocytosed beads. In addition, the algorithm conducts detailed and sophisticated analysis of cellular morphology, making it a standalone tool for high content screening. We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial activation, specifically cell body hypertrophy and increased phagocytic activity, are not highly correlated. This novel finding suggests the two phenotypes may be under the control of distinct signaling pathways. We demonstrate that our assay system outperforms preexisting methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution. We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package, PuntoMorph. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Anti-cancer agents in Saudi Arabian herbals revealed by automated high-content imaging

    KAUST Repository

    Hajjar, Dina

    2017-06-13

    Natural products have been used for medical applications since ancient times. Commonly, natural products are structurally complex chemical compounds that efficiently interact with their biological targets, making them useful drug candidates in cancer therapy. Here, we used cell-based phenotypic profiling and image-based high-content screening to study the mode of action and potential cellular targets of plants historically used in Saudi Arabia's traditional medicine. We compared the cytological profiles of fractions taken from Juniperus phoenicea (Arar), Anastatica hierochuntica (Kaff Maryam), and Citrullus colocynthis (Hanzal) with a set of reference compounds with established modes of action. Cluster analyses of the cytological profiles of the tested compounds suggested that these plants contain possible topoisomerase inhibitors that could be effective in cancer treatment. Using histone H2AX phosphorylation as a marker for DNA damage, we discovered that some of the compounds induced double-strand DNA breaks. Furthermore, chemical analysis of the active fraction isolated from Juniperus phoenicea revealed possible anti-cancer compounds. Our results demonstrate the usefulness of cell-based phenotypic screening of natural products to reveal their biological activities.

  11. Comparison of three cell fixation methods for high content analysis assays utilizing quantum dots.

    Science.gov (United States)

    Williams, Y; Byrne, S; Bashir, M; Davies, A; Whelan, A; Gun'ko, Y; Kelleher, D; Volkov, Y

    2008-10-01

    Semiconductor nanoparticles or quantum dots are being increasingly utilized as fluorescent probes in cell biology, both in live and fixed cell assays. Quantum dots possess an immense potential for use in multiplexing assays that can be run on high content screening analysers. Depending on the nature of the biological target under investigation, experiments are frequently required on cells retaining an intact cell membrane or on those that have been fixed and permeabilized to expose intracellular antigens. Fixation of cell lines before or after the addition of quantum dots may affect their localization, emission properties and stability. Using a high content analysis platform, we perform a quantitative comparative analysis of three common fixation techniques in two different cell lines exposed to carboxylic acid-stabilized CdTe quantum dots. Our study demonstrates that in prefixed and permeabilized cells, quantum dots are readily internalized regardless of cell type, and their intracellular location is primarily determined by the properties of the quantum dots themselves. However, if the fixation procedures are performed on live cells previously incubated with quantum dots, other important factors have to be considered. The choice of the fixative significantly influences the fluorescent characteristics of the quantum dots. Fixatives, regardless of their chemical nature, negatively affected quantum dot fluorescence intensity. Comparative analysis of glutaraldehyde, methanol and paraformaldehyde demonstrated that 2% paraformaldehyde was the fixative of choice. The presence of protein in the media did not significantly alter the quantum dot fluorescence. This study indicates that multiplexing assays utilizing quantum dots, despite being a cutting-edge tool for high content cell imaging, still require careful consideration of the basic steps in biological sample processing.

  12. High-Content Analysis of CRISPR-Cas9 Gene-Edited Human Embryonic Stem Cells

    Directory of Open Access Journals (Sweden)

    Jared Carlson-Stevermer

    2016-01-01

    CRISPR-Cas9 gene editing of human cells and tissues holds much promise to advance medicine and biology, but standard editing methods require weeks to months of reagent preparation and selection where much or all of the initial edited samples are destroyed during analysis. ArrayEdit, a simple approach utilizing surface-modified multiwell plates containing one-pot transcribed single-guide RNAs, separates thousands of edited cell populations for automated, live, high-content imaging and analysis. The approach lowers the time and cost of gene editing and produces edited human embryonic stem cells at high efficiencies. Edited genes can be expressed in both pluripotent stem cells and differentiated cells. This preclinical platform adds important capabilities to observe editing and selection in situ within complex structures generated by human cells, ultimately enabling optical and other molecular perturbations in the editing workflow that could refine the specificity and versatility of gene editing.

  13. High-Content Analysis of CRISPR-Cas9 Gene-Edited Human Embryonic Stem Cells.

    Science.gov (United States)

    Carlson-Stevermer, Jared; Goedland, Madelyn; Steyer, Benjamin; Movaghar, Arezoo; Lou, Meng; Kohlenberg, Lucille; Prestil, Ryan; Saha, Krishanu

    2016-01-12

    CRISPR-Cas9 gene editing of human cells and tissues holds much promise to advance medicine and biology, but standard editing methods require weeks to months of reagent preparation and selection where much or all of the initial edited samples are destroyed during analysis. ArrayEdit, a simple approach utilizing surface-modified multiwell plates containing one-pot transcribed single-guide RNAs, separates thousands of edited cell populations for automated, live, high-content imaging and analysis. The approach lowers the time and cost of gene editing and produces edited human embryonic stem cells at high efficiencies. Edited genes can be expressed in both pluripotent stem cells and differentiated cells. This preclinical platform adds important capabilities to observe editing and selection in situ within complex structures generated by human cells, ultimately enabling optical and other molecular perturbations in the editing workflow that could refine the specificity and versatility of gene editing.

  14. A multi-functional imaging approach to high-content protein interaction screening.

    Directory of Open Access Journals (Sweden)

    Daniel R Matthews

    Functional imaging can provide a level of quantification that is not possible in what might be termed traditional high-content screening. This is due to the fact that the current state-of-the-art high-content screening systems take the approach of scaling-up single cell assays, and are therefore based on essentially pictorial measures as assay indicators. Such phenotypic analyses have become extremely sophisticated, advancing screening enormously, but this approach can still be somewhat subjective. We describe the development, and validation, of a prototype high-content screening platform that combines steady-state fluorescence anisotropy imaging with fluorescence lifetime imaging (FLIM). This functional approach allows objective, quantitative screening of small molecule libraries in protein-protein interaction assays. We discuss the development of the instrumentation, the process by which information on fluorescence resonance energy transfer (FRET) can be extracted from wide-field, acceptor fluorescence anisotropy imaging and cross-checking of this modality using lifetime imaging by time-correlated single-photon counting. Imaging of cells expressing protein constructs where eGFP and mRFP1 are linked with amino-acid chains of various lengths (7, 19 and 32 amino acids) shows the two methodologies to be highly correlated. We validate our approach using a small-scale inhibitor screen of a Cdc42 FRET biosensor probe expressed in epidermoid cancer cells (A431) in a 96 microwell-plate format. We also show that acceptor fluorescence anisotropy can be used to measure variations in hetero-FRET in protein-protein interactions. We demonstrate this using a screen of inhibitors of internalization of the transmembrane receptor, CXCR4. These assays enable us to demonstrate all the capabilities of the instrument, image processing and analytical techniques that have been developed. Direct correlation between acceptor anisotropy and donor FLIM is observed for FRET

  15. Selective-plane illumination microscopy for high-content volumetric biological imaging

    Science.gov (United States)

    McGorty, Ryan; Huang, Bo

    2016-03-01

    Light-sheet microscopy, also named selective-plane illumination microscopy, enables optical sectioning with minimal light delivered to the sample. Therefore, it allows one to gather volumetric datasets of developing embryos and other light-sensitive samples over extended times. We have configured a light-sheet microscope that, unlike most previous designs, can image samples in formats compatible with high-content imaging. Our microscope can be used with multi-well plates or with microfluidic devices. In designing our optical system to accommodate these types of sample holders we encounter large optical aberrations. We counter these aberrations with both static optical components in the imaging path and with adaptive optics. Potential applications of this microscope include studying the development of a large number of embryos in parallel and over long times with subcellular resolution and doing high-throughput screens on organisms or cells where volumetric data is necessary.

  16. High-content analysis of sequential events during the early phase of influenza A virus infection.

    Science.gov (United States)

    Banerjee, Indranil; Yamauchi, Yohei; Helenius, Ari; Horvath, Peter

    2013-01-01

    Influenza A virus (IAV) represents a worldwide threat to public health by causing severe morbidity and mortality every year. Due to its high mutation rate, new strains of IAV emerge frequently. These IAVs are often drug-resistant and require vaccine reformulation. A promising approach to circumvent this problem is to target host cell determinants crucial for IAV infection, but dispensable for the cell. Several RNAi-based screens have identified about one thousand cellular factors that promote IAV infection. However, systematic analyses to determine their specific functions are lacking. To address this issue, we developed quantitative, imaging-based assays to dissect seven consecutive steps in the early phases of IAV infection in tissue culture cells. The entry steps for which we developed the assays were: virus binding to the cell membrane, endocytosis, exposure to low pH in endocytic vacuoles, acid-activated fusion of the viral envelope with the vacuolar membrane, nucleocapsid uncoating in the cytosol, nuclear import of viral ribonucleoproteins, and expression of the viral nucleoprotein. We adapted the assays to automated microscopy and optimized them for high-content screening. To quantify the image data, we performed both single and multi-parametric analyses, in combination with machine learning. By time-course experiments, we determined the optimal time points for each assay. Our quality control experiments showed that the assays were sufficiently robust for high-content analysis. The methods we describe in this study provide a powerful high-throughput platform to understand the host cell processes, which can eventually lead to the discovery of novel anti-pathogen strategies.

  17. High-Content Analysis of Breast Cancer Using Single-Cell Deep Transfer Learning.

    Science.gov (United States)

    Kandaswamy, Chetak; Silva, Luís M; Alexandre, Luís A; Santos, Jorge M

    2016-03-01

    High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell, which prevents tumor growth and metastasis. The high-resolution biofluorescence images from assays allow precise quantitative measures enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds in chemical mechanisms of action (MOAs). Compound classification has been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain, leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally intensive effort of searching the huge parameter space of a DNN. Results show that using this approach, we obtain a 30% speedup and a 2% accuracy improvement.

  18. Validation of a high-content screening assay using whole-well imaging of transformed phenotypes.

    Science.gov (United States)

    Ramirez, Christina N; Ozawa, Tatsuya; Takagi, Toshimitsu; Antczak, Christophe; Shum, David; Graves, Robert; Holland, Eric C; Djaballah, Hakim

    2011-06-01

    Automated microscopy was introduced two decades ago and has become an integral part of the discovery process as a high-content screening platform, with noticeable challenges in executing cell-based assays. It would be of interest to use it to screen for reversers of a transformed cell phenotype. In this report, we present data obtained from an optimized assay that identifies compounds that reverse a transformed phenotype induced in NIH-3T3 cells by expressing a novel oncogene, KP, resulting from fusion between platelet-derived growth factor receptor alpha (PDGFRα) and kinase insert domain receptor (KDR), that was identified in human glioblastoma. Initial image acquisition using multiple tiles per well was found insufficient to accurately image and quantify the clusters, due largely to the inherent variability of their size and location within the well; whole-well imaging, performed on the IN Cell Analyzer 2000, though still two-dimensional, was found to image and quantify clusters accurately. The resulting assay exhibited a Z' value of 0.79 and a signal-to-noise ratio of 15, and it was validated against known effectors and shown to identify only PDGFRα inhibitors, and then tested in a pilot screen against a library of 58 known inhibitors, identifying mostly PDGFRα inhibitors as reversers of the KP-induced transformed phenotype. In conclusion, our optimized and validated assay using whole-well imaging is robust and sensitive in identifying compounds that reverse the transformed phenotype induced by KP, with broader applicability to other cell-based assays that are challenging in HTS against chemical and RNAi libraries.

  19. Multiplexed high-content analysis of mitochondrial morphofunction using live-cell microscopy.

    Science.gov (United States)

    Iannetti, Eligio F; Smeitink, Jan A M; Beyrath, Julien; Willems, Peter H G M; Koopman, Werner J H

    2016-09-01

    Mitochondria have a central role in cellular (patho)physiology, and they display a highly variable morphology that is probably coupled to their functional state. Here we present a protocol that allows unbiased and automated quantification of mitochondrial 'morphofunction' (i.e., morphology and membrane potential), cellular parameters (size, confluence) and nuclear parameters (number, morphology) in intact living primary human skin fibroblasts (PHSFs). Cells are cultured in 96-well plates and stained with tetramethyl rhodamine methyl ester (TMRM), calcein-AM (acetoxy-methyl ester) and Hoechst 33258. Next, multispectral fluorescence images are acquired using automated microscopy and processed to extract 44 descriptors. Subsequently, the descriptor data are subjected to a quality control (QC) algorithm based upon principal component analysis (PCA) and interpreted using univariate, bivariate and multivariate analysis. The protocol requires a time investment of ∼4 h distributed over 2 d. Although it is specifically developed for PHSFs, which are widely used in preclinical research, the protocol is portable to other cell types and can be scaled up for implementation in high-content screening.
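The quality-control step this protocol describes, PCA applied to the per-well descriptor data, can be illustrated with a small sketch. The descriptor matrix, component count, threshold, and simulated failed well below are illustrative assumptions, not the published algorithm: wells are standardized, projected onto the leading principal components, and flagged when they fall far from the bulk of the data.

```python
import numpy as np

def pca_qc(X, n_components=2, z_thresh=3.0):
    """Flag outlier wells in a (wells x descriptors) matrix via PCA.

    Returns a boolean mask (True = passes QC) based on each well's
    normalized distance from the data median in PC space.
    """
    X = np.asarray(X, dtype=float)
    # Standardize each descriptor (column) to zero mean, unit variance.
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Principal components via SVD of the standardized matrix.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_components].T
    # Normalized distance of each well from the median in PC space.
    d = np.linalg.norm((scores - np.median(scores, axis=0)) /
                       (np.std(scores, axis=0, ddof=1) + 1e-12), axis=1)
    return d < z_thresh

# Hypothetical plate: 96 wells x 44 descriptors, with one failed well.
rng = np.random.default_rng(1)
wells = rng.normal(0.0, 1.0, (96, 44))
wells[7] += 15.0                  # simulate a staining/focus failure
mask = pca_qc(wells)
print(mask.sum(), mask[7])
```

The design choice of screening in PC space rather than on the 44 raw descriptors keeps correlated descriptors from dominating the distance measure.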

  20. A high-content analysis toolbox permits dissection of diverse signaling pathways for T lymphocyte polarization.

    Science.gov (United States)

    Freeley, Michael; Bakos, Gabor; Davies, Anthony; Kelleher, Dermot; Long, Aideen; Dunican, Dara J

    2010-06-01

    RNA interference (RNAi) screening strategies offer the potential to elucidate the signaling pathways that regulate integrin and adhesion receptor-mediated changes in T lymphocyte morphology. Of crucial importance, however, is the definition of key sets of parameters that will provide accurate, quantitative, and nonredundant information to flag relevant hits in such assays. In this study, the authors have used an image-based high-content analysis (HCA) technology platform and a panel of 24 pharmacological inhibitors, at a range of concentrations, to define key sets of parameters that enable sensitive and quantitative effects on integrin (LFA-1)-mediated lymphocyte morphology to be evaluated. In particular, multiparametric analysis of lymphocyte morphology that was based on intracellular staining of both the F-actin and alpha-tubulin cytoskeleton resulted in improved ability to discriminate morphological behavior compared to F-actin staining alone. Morphological and fluorescence intensity/distribution profiling of pharmacologically treated lymphocytes stimulated with integrin (LFA-1) and adhesion receptors (CD44) also revealed notable differences in their sensitivity to inhibitors. The assay described here may be used in HCA strategies such as RNAi screening assays to elucidate the signaling pathways and molecules that regulate integrin/adhesion receptor-mediated T lymphocyte polarization.

  1. Quantitative characterization of mitosis-blocked tetraploid cells using high content analysis.

    Science.gov (United States)

    Grove, Linnette E; Ghosh, Richik N

    2006-08-01

    A range of cellular evidence supporting a G1 tetraploidy checkpoint was obtained from different assay methods including flow cytometry, immunoblotting, and microscopy. Cancer research would benefit if these cellular properties could instead be measured by a single, quantitative, automated assay method, such as high content analysis (HCA). Thus, nocodazole-treated cells were fluorescently labeled for different cell cycle-associated properties, including DNA content, retinoblastoma (Rb) and histone H3 phosphorylation, p53 and p21(WAF1) expression, nuclear and cell sizes, and cell morphology, and automatically imaged, analyzed, and correlated using HCA. HCA verified that nocodazole-induced mitosis block resulted in tetraploid cells. Rb and histone H3 were maximally hyperphosphorylated by 24 h of nocodazole treatment, accompanied by cell and nuclear size decreases and cellular rounding. Cells remained tetraploid and mononucleated with longer treatments, but other targets reverted to G1 levels, including Rb and histone H3 dephosphorylation accompanied by cellular respreading. This was accompanied by increased p53 and p21(WAF1) expression levels. The range of effects accompanying nocodazole-induced block of mitosis and the resulting tetraploid cells' reversal to a pseudo-G1 state can be quantitatively measured by HCA in an automated manner, recommending this assay method for the large-scale biology challenges of modern cancer drug discovery.

  2. Predicting In Vivo Anti-Hepatofibrotic Drug Efficacy Based on In Vitro High-Content Analysis

    Science.gov (United States)

    Zheng, Baixue; Tan, Looling; Mo, Xuejun; Yu, Weimiao; Wang, Yan; Tucker-Kellogg, Lisa; Welsch, Roy E.; So, Peter T. C.; Yu, Hanry

    2011-01-01

    Background/Aims: Many anti-fibrotic drugs with high in vitro efficacies fail to produce significant effects in vivo. The aim of this work is to use a statistical approach to design a numerical predictor that correlates better with in vivo outcomes. Methods: High-content analysis (HCA) was performed with 49 drugs on hepatic stellate cells (HSCs) LX-2 stained with 10 fibrotic markers. ∼0.3 billion feature values from all cells in >150,000 images were quantified to reflect the drug effects. A systematic literature search on the in vivo effects of all 49 drugs on hepatofibrotic rats yielded 28 papers with histological scores. The in vivo and in vitro datasets were used to compute a single efficacy predictor (Epredict). Results: We used in vivo data from one context (CCl4 rats with drug treatments) to optimize the computation of Epredict. This optimized relationship was independently validated using in vivo data from two different contexts (treatment of DMN rats and prevention of CCl4 induction). A linear in vitro-in vivo correlation was consistently observed in all three contexts. We used Epredict values to cluster drugs according to efficacy and found that high-efficacy drugs tended to target proliferation, apoptosis, and contractility of HSCs. Conclusions: The Epredict statistic, based on a prioritized combination of in vitro features, provides a better correlation between in vitro and in vivo drug response than any of the traditional in vitro markers considered. PMID:22073152
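
The linear in vitro-in vivo relationship described above can be sketched numerically: combine prioritized in vitro features into a single efficacy score and test its Pearson correlation with in vivo outcomes. The feature names, weights, and data below are illustrative assumptions, not values from the study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def epredict(features, weights):
    """Weighted combination of in vitro features into one efficacy score."""
    return sum(weights[k] * features[k] for k in weights)

# Hypothetical per-drug in vitro readouts (higher = stronger effect on HSCs).
weights = {"proliferation": 0.5, "apoptosis": 0.3, "contractility": 0.2}
drugs = [
    {"proliferation": 0.9, "apoptosis": 0.8, "contractility": 0.7},
    {"proliferation": 0.4, "apoptosis": 0.5, "contractility": 0.3},
    {"proliferation": 0.1, "apoptosis": 0.2, "contractility": 0.1},
]
scores = [epredict(d, weights) for d in drugs]
# Synthetic in vivo histological improvements, linear in the score by design.
in_vivo = [2 * s + 1 for s in scores]
r = pearson_r(scores, in_vivo)
```

In the study, the feature weights themselves were optimized against one in vivo context and then validated on the others; here they are simply fixed for illustration.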

  3. Discovering Molecules That Regulate Efferocytosis Using Primary Human Macrophages and High Content Imaging.

    Directory of Open Access Journals (Sweden)

    Sandra Santulli-Marotto

    Defective clearance of apoptotic cells can result in sustained inflammation and subsequent autoimmunity. Macrophages, the "professional phagocytes" of the body, are responsible for efficient, non-phlogistic apoptotic cell clearance. Controlling phagocytosis of apoptotic cells by macrophages is an attractive therapeutic opportunity to ameliorate inflammation. Using high content imaging, we have developed a system for evaluating the effects of antibody treatment on apoptotic cell uptake in primary human macrophages by comparing the Phagocytic Index (PI) for each antibody. Herein we demonstrate the feasibility of evaluating a panel of antibodies of unknown specificities, obtained by immunization of mice with primary human macrophages, and show that they can be distinguished based on individual PI measurements. In this study, ~50% of the antibodies obtained enhanced phagocytosis of apoptotic cells, while approximately 5% of the antibodies in the panel exhibited some inhibition. Though the specificities of the majority of antibodies are unknown, two of the antibodies that improved apoptotic cell uptake recognize recombinant MerTK, a receptor known to function in this capacity in vivo. The agonistic impact of these antibodies on efferocytosis could be demonstrated without addition of either of the MerTK ligands, Gas6 or ProS. These results validate applying the mechanism of this fundamental biological process as a means for identifying modulators that could potentially serve as therapeutics. This strategy for interrogating macrophages to discover molecules regulating apoptotic cell uptake is not limited by access to purified protein, thereby increasing the possibility of finding novel apoptotic cell uptake pathways.
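
A sketch of the per-antibody readout: the abstract does not give the Phagocytic Index formula, so the definition below (percent phagocytic macrophages times the mean number of engulfed targets per phagocytic cell) is an assumed, commonly used one.

```python
def phagocytic_index(engulfed_counts):
    """PI from per-macrophage counts of engulfed apoptotic cells.

    Assumed definition: (% phagocytic cells) * (mean targets per phagocytic cell).
    """
    phagocytic = [c for c in engulfed_counts if c > 0]
    if not phagocytic:
        return 0.0
    percent_phagocytic = 100.0 * len(phagocytic) / len(engulfed_counts)
    mean_load = sum(phagocytic) / len(phagocytic)
    return percent_phagocytic * mean_load

# Comparing an antibody treatment with control by PI (counts are illustrative):
control_pi = phagocytic_index([0, 0, 1, 2, 3])   # 60% phagocytic, mean load 2.0
treated_pi = phagocytic_index([1, 2, 2, 3, 4])   # 100% phagocytic, mean load 2.4
enhances_uptake = treated_pi > control_pi
```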

  4. Design and implementation of high-content imaging platforms: lessons learned from end user-developer collaboration.

    Science.gov (United States)

    Adams, Cynthia L; Sjaastad, Michael D

    2009-11-01

    Automated high-content screening and analysis (HCS/HCA) technology solutions have become indispensable in expediting the pace of drug discovery. Because of the complexity involved in designing, building, and validating HCS/HCA platforms, it is important to begin their development well before they are actually needed. Managed properly, collaboration between technology providers and end users in research is essential in accelerating the development of the hardware and software of new HCS/HCA platforms before they become commercially available. Such a collaboration results in the cost-effective creation of new technologies that meet specific and customized industrial requirements. This review outlines the history of, and considerations relevant to, the development of the Cytometrix Profiling System by Cytokinetics, Inc. and the "Complete Imaging Solution" for high-content screening, developed by Molecular Devices Corporation (MDC) (now MDS Analytical Technologies), from original conception and testing of various components, to multiple development cycles from 1998 to the present, and finally to market consolidation.

  5. A Neuronal and Astrocyte Co-Culture Assay for High Content Analysis of Neurotoxicity

    Science.gov (United States)

    Anderl, Janet L; Redpath, Stella; Ball, Andrew J

    2009-01-01

    High Content Analysis (HCA) assays combine cells and detection reagents with automated imaging and powerful image analysis algorithms, allowing measurement of multiple cellular phenotypes within a single assay. In this study, we utilized HCA to develop a novel assay for neurotoxicity. Neurotoxicity assessment represents an important part of drug safety evaluation, as well as being a significant focus of environmental protection efforts. Additionally, neurotoxicity is also a well-accepted in vitro marker of the development of neurodegenerative diseases such as Alzheimer's and Parkinson's diseases. Recently, the application of HCA to neuronal screening has been reported. By labeling neuronal cells with βIII-tubulin, HCA assays can provide high-throughput, non-subjective, quantitative measurements of parameters such as neuronal number, neurite count and neurite length, all of which can indicate neurotoxic effects. However, the role of astrocytes remains unexplored in these models. Astrocytes have an integral role in the maintenance of central nervous system (CNS) homeostasis, and are associated with both neuroprotection and neurodegradation when they are activated in response to toxic substances or disease states. GFAP is an intermediate filament protein expressed predominantly in the astrocytes of the CNS. Astrocytic activation (gliosis) leads to the upregulation of GFAP, commonly accompanied by astrocyte proliferation and hypertrophy. This process of reactive gliosis has been proposed as an early marker of damage to the nervous system. The traditional method for GFAP quantitation is by immunoassay. This approach is limited by an inability to provide information on cellular localization, morphology and cell number. We determined that HCA could be used to overcome these limitations and to simultaneously measure multiple features associated with gliosis - changes in GFAP expression, astrocyte hypertrophy, and astrocyte proliferation - within a single assay. In co

  6. A neuronal and astrocyte co-culture assay for high content analysis of neurotoxicity.

    Science.gov (United States)

    Anderl, Janet L; Redpath, Stella; Ball, Andrew J

    2009-05-05

    High Content Analysis (HCA) assays combine cells and detection reagents with automated imaging and powerful image analysis algorithms, allowing measurement of multiple cellular phenotypes within a single assay. In this study, we utilized HCA to develop a novel assay for neurotoxicity. Neurotoxicity assessment represents an important part of drug safety evaluation, as well as being a significant focus of environmental protection efforts. Additionally, neurotoxicity is also a well-accepted in vitro marker of the development of neurodegenerative diseases such as Alzheimer's and Parkinson's diseases. Recently, the application of HCA to neuronal screening has been reported. By labeling neuronal cells with βIII-tubulin, HCA assays can provide high-throughput, non-subjective, quantitative measurements of parameters such as neuronal number, neurite count and neurite length, all of which can indicate neurotoxic effects. However, the role of astrocytes remains unexplored in these models. Astrocytes have an integral role in the maintenance of central nervous system (CNS) homeostasis, and are associated with both neuroprotection and neurodegradation when they are activated in response to toxic substances or disease states. GFAP is an intermediate filament protein expressed predominantly in the astrocytes of the CNS. Astrocytic activation (gliosis) leads to the upregulation of GFAP, commonly accompanied by astrocyte proliferation and hypertrophy. This process of reactive gliosis has been proposed as an early marker of damage to the nervous system. The traditional method for GFAP quantitation is by immunoassay. This approach is limited by an inability to provide information on cellular localization, morphology and cell number. We determined that HCA could be used to overcome these limitations and to simultaneously measure multiple features associated with gliosis - changes in GFAP expression, astrocyte hypertrophy, and astrocyte proliferation - within a single assay. In co

  7. Predicting in vivo anti-hepatofibrotic drug efficacy based on in vitro high-content analysis.

    Directory of Open Access Journals (Sweden)

    Baixue Zheng

    BACKGROUND/AIMS: Many anti-fibrotic drugs with high in vitro efficacies fail to produce significant effects in vivo. The aim of this work is to use a statistical approach to design a numerical predictor that correlates better with in vivo outcomes. METHODS: High-content analysis (HCA) was performed with 49 drugs on hepatic stellate cells (HSCs) LX-2 stained with 10 fibrotic markers. ~0.3 billion feature values from all cells in >150,000 images were quantified to reflect the drug effects. A systematic literature search on the in vivo effects of all 49 drugs on hepatofibrotic rats yielded 28 papers with histological scores. The in vivo and in vitro datasets were used to compute a single efficacy predictor (Epredict). RESULTS: We used in vivo data from one context (CCl4 rats with drug treatments) to optimize the computation of Epredict. This optimized relationship was independently validated using in vivo data from two different contexts (treatment of DMN rats and prevention of CCl4 induction). A linear in vitro-in vivo correlation was consistently observed in all three contexts. We used Epredict values to cluster drugs according to efficacy and found that high-efficacy drugs tended to target proliferation, apoptosis, and contractility of HSCs. CONCLUSIONS: The Epredict statistic, based on a prioritized combination of in vitro features, provides a better correlation between in vitro and in vivo drug response than any of the traditional in vitro markers considered.

  8. Inferring Toxicological Responses of HepG2 Cells from ToxCast High Content Imaging Data (SOT)

    Science.gov (United States)

    Understanding the dynamic perturbation of cell states by chemicals can aid in predicting their adverse effects. High-content imaging (HCI) was used to measure the state of HepG2 cells over three time points (1, 24, and 72 h) in response to 976 ToxCast chemicals for 10 differe...

  9. Whole organism high-content screening by label-free, image-based Bayesian classification for parasitic diseases.

    Directory of Open Access Journals (Sweden)

    Ross A Paveley

    Sole reliance on one drug, Praziquantel, for treatment and control of schistosomiasis raises concerns about the development of widespread resistance, prompting renewed interest in the discovery of new anthelmintics. To discover new leads, we designed an automated label-free, high content-based, high throughput screen (HTS) to assess drug-induced effects on in vitro cultured larvae (schistosomula) using bright-field imaging. Automatic image analysis and Bayesian prediction models define morphological damage, hit/non-hit prediction, and larval phenotype characterization. Motility was also assessed from time-lapse images. In screening a 10,041-compound library, the HTS correctly detected 99.8% of the hits scored visually. A proportion of these larval hits were also active in an adult worm ex-vivo screen and are the subject of ongoing studies. The method allows, for the first time, screening of large compound collections against schistosomes, and the methods are adaptable to other whole-organism and cell-based screening by morphology and motility phenotyping.
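
The hit/non-hit prediction step can be sketched as a Gaussian naive Bayes classifier over image-derived features. The two features (motility and morphological-damage scores) and the training values below are illustrative; the published models use many more descriptors.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian likelihood of one feature value."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit(rows):
    """Per-feature (mean, standard deviation) for one class."""
    stats = []
    for col in zip(*rows):
        mu = sum(col) / len(col)
        var = sum((v - mu) ** 2 for v in col) / (len(col) - 1)
        stats.append((mu, math.sqrt(var)))
    return stats

def predict(x, hit_stats, nonhit_stats):
    """Classify by comparing class likelihoods (equal priors assumed)."""
    def likelihood(stats):
        p = 1.0
        for v, (mu, sigma) in zip(x, stats):
            p *= gauss_pdf(v, mu, sigma)
        return p
    return "hit" if likelihood(hit_stats) > likelihood(nonhit_stats) else "non-hit"

# Features per larva-well: (motility score, morphological-damage score).
hits = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]       # low motility, high damage
non_hits = [(0.8, 0.1), (0.9, 0.2), (0.85, 0.15)]   # healthy larvae
hit_stats, nonhit_stats = fit(hits), fit(non_hits)
```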

  10. Impact of image segmentation on high-content screening data quality for SK-BR-3 cells

    Directory of Open Access Journals (Sweden)

    Li Yizheng

    2007-09-01

    Background: High content screening (HCS) is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells. Results: Cases of poor cell body segmentation occurred frequently for the SK-BR-3 cell line. We trained classifiers to identify SK-BR-3 cells that were well segmented. On an independent test set created by human review of cell images, our optimal support-vector machine classifier identified well-segmented cells with 81% accuracy. The dose responses of morphological features were measurably different in well- and poorly-segmented populations. Elimination of the poorly-segmented cell population increased the purity of DNA content distributions, while appropriately retaining biological heterogeneity, and simultaneously increasing our ability to resolve specific morphological changes in perturbed cells. Conclusion: Image segmentation has a measurable impact on HCS data. The application of a multivariate shape-based filter to identify well-segmented cells improved HCS data quality for an HCS-unfriendly cell line, and could be a valuable post-processing step for some HCS datasets.
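
The classifier in the paper is a support-vector machine; as a dependency-free sketch of the same idea (separating well- from poorly-segmented cells in shape-feature space), a nearest-centroid rule works on the same kind of features. The features (area, perimeter, eccentricity) and their values are assumed for illustration.

```python
def centroid(rows):
    """Mean feature vector of a set of cells."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(cell, good_centroid, bad_centroid):
    """Label a cell by its nearer class centroid in feature space."""
    return ("well-segmented" if sq_dist(cell, good_centroid) < sq_dist(cell, bad_centroid)
            else "poorly-segmented")

# Shape features per cell: (area, perimeter, eccentricity) -- illustrative
# training examples labeled by human review, as for the paper's test set.
well_segmented = [(900, 110, 0.20), (950, 115, 0.25), (880, 105, 0.30)]
poorly_segmented = [(300, 200, 0.90), (2500, 400, 0.95), (250, 180, 0.85)]
good_c, bad_c = centroid(well_segmented), centroid(poorly_segmented)
```

In practice the features would be standardized before computing distances (raw area dominates here); the SVM in the study likewise operates on scaled multivariate shape descriptors.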

  11. Quantification of patient-derived 3D cancer spheroids in high-content screening images

    Science.gov (United States)

    Kang, Mi-Sun; Rhee, Seon-Min; Seo, Ji-Hyun; Kim, Myoung-Hee

    2017-02-01

    We present a cell image quantification method for image-based drug response prediction from patient-derived glioblastoma cells. Drug response differs between individuals at the cellular level; therefore, quantification of a patient-derived cell phenotype is important in drug response prediction. We performed fluorescence microscopy to characterize the features of patient-derived 3D cancer spheroids. A 3D cell culture simulates the in-vivo environment more closely than 2D adherent culture and thus allows more accurate cell analysis; furthermore, it allows assessment of cellular aggregates. Cohesion is an important feature of cancer cells. In this paper, we demonstrate image-based quantification of cellular area, fluorescence intensity, and cohesion. To this end, we first performed image stitching to create a single image of each well of the plate under the same imaging conditions. This image shows colonies of various sizes and shapes. To detect the colonies automatically, we used an intensity-based classification algorithm. The morphological features of each cancer cell colony were measured. Next, we calculated the location correlation of each colony, which reflects the cell density within the same well environment. Finally, we compared the features of drug-treated and untreated cells. This technique could potentially be applied to drug screening and quantification of drug effects.
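
The per-colony measurements described (area and fluorescence intensity per detected colony) can be sketched on a labeled image, where each pixel carries a colony id from the intensity-based detection step. The tiny grids below are illustrative stand-ins for stitched well images.

```python
from collections import defaultdict

def colony_stats(labels, intensity):
    """Area (pixel count) and mean intensity per colony id (0 = background)."""
    area = defaultdict(int)
    total = defaultdict(float)
    for label_row, intensity_row in zip(labels, intensity):
        for lab, val in zip(label_row, intensity_row):
            if lab != 0:
                area[lab] += 1
                total[lab] += val
    return {lab: {"area": area[lab], "mean_intensity": total[lab] / area[lab]}
            for lab in area}

# Two detected colonies on a 3x4 grid (values illustrative).
labels = [
    [1, 1, 0, 0],
    [1, 0, 0, 2],
    [0, 0, 2, 2],
]
intensity = [
    [10, 20, 0, 0],
    [30, 0, 0, 6],
    [0, 0, 4, 8],
]
stats = colony_stats(labels, intensity)
```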

  12. Use of a Machine Learning-Based High Content Analysis Approach to Identify Photoreceptor Neurite Promoting Molecules.

    Science.gov (United States)

    Fuller, John A; Berlinicke, Cynthia A; Inglese, James; Zack, Donald J

    2016-01-01

    High content analysis (HCA) has become a leading methodology in phenotypic drug discovery efforts. Typical HCA workflows include imaging cells using an automated microscope and analyzing the data using algorithms designed to quantify one or more specific phenotypes of interest. Due to the richness of high content data, unappreciated phenotypic changes may be discovered in existing image sets using interactive machine learning-based software systems. Primary postnatal day four retinal cells from photoreceptor (PR)-labeled QRX-EGFP reporter mice were isolated, seeded, treated with a set of 234 profiled kinase inhibitors, and then cultured for 1 week. The cells were imaged with an Acumen plate-based laser cytometer to determine the number and intensity of GFP-expressing, i.e. PR, cells. Wells displaying intensities and counts above threshold values of interest were re-imaged at a higher resolution with an INCell2000 automated microscope. The images were analyzed with an open-source HCA tool, PhenoRipper (Rajaram et al., Nat Methods 9:635-637, 2012), to identify the high GFP-inducing treatments that additionally produced diverse phenotypes compared to the vehicle control samples. This approach identified the pyrimidinopyrimidone CHEMBL-1766490, a pan-kinase inhibitor whose major known targets are p38α and the Src family member Lck, as an inducer of photoreceptor neuritogenesis. This finding was corroborated using a cell-based method of image analysis that measures quantitative differences in the mean neurite length in GFP-expressing cells. Interacting with data using machine learning algorithms may complement traditional HCA approaches by leading to the discovery of small molecule-induced cellular phenotypes beyond those on which the investigator initially focused.
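
The corroborating measurement, mean neurite length in GFP-expressing cells, can be sketched assuming each neurite has already been traced (e.g. by skeletonizing the GFP channel) into a polyline of (x, y) points; the tracing step itself is outside this sketch.

```python
import math

def polyline_length(points):
    """Total Euclidean length of a traced neurite."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def mean_neurite_length(cells):
    """Mean total neurite length per GFP+ cell.

    cells: one list of neurite polylines per cell.
    """
    if not cells:
        return 0.0
    totals = [sum(polyline_length(n) for n in neurites) for neurites in cells]
    return sum(totals) / len(totals)

# Two illustrative GFP+ cells: one with a single neurite, one with two.
cells = [
    [[(0, 0), (3, 4)]],                     # one neurite of length 5
    [[(0, 0), (0, 2)], [(0, 0), (4, 0)]],   # two neurites, lengths 2 + 4
]
```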

  13. Identification of Novel Macropinocytosing Human Antibodies by Phage Display and High-Content Analysis.

    Science.gov (United States)

    Ha, K D; Bidlingmaier, S M; Su, Y; Lee, N-K; Liu, B

    2017-01-01

    Internalizing antibodies have great potential for the development of targeted therapeutics. Antibodies that internalize via the macropinocytosis pathway are particularly promising since macropinocytosis is capable of mediating rapid, bulk uptake and is selectively upregulated in many cancers. We hereby describe a method for identifying antibodies that internalize via macropinocytosis by screening phage-displayed single-chain antibody selection outputs with an automated fluorescent microscopy-based high-content analysis platform. Furthermore, this method can be similarly applied to other endocytic pathways if other fluorescent, pathway-specific, soluble markers are available. © 2017 Elsevier Inc. All rights reserved.

  14. High-content analysis screening for cell cycle regulators using arrayed synthetic crRNA libraries.

    Science.gov (United States)

    Strezoska, Žaklina; Perkett, Matthew R; Chou, Eldon T; Maksimova, Elena; Anderson, Emily M; McClelland, Shawn; Kelley, Melissa L; Vermeulen, Annaleen; Smith, Anja van Brabant

    2017-06-10

    The CRISPR-Cas9 system has been utilized for large-scale, loss-of-function screens mainly using lentiviral pooled formats and cell-survival phenotypic assays. Screening in an arrayed format expands the types of phenotypic readouts that can be used to include high-content, morphology-based assays, and with the recent availability of synthetic crRNA libraries, new studies are emerging. Here, we used a cell cycle reporter cell line to perform an arrayed, synthetic crRNA:tracrRNA screen targeting 169 genes (>600 crRNAs) and applied high content analysis (HCA) to identify genes that regulate the cell cycle. Seven parameters were used to classify cells into cell cycle categories, and multiple parameters were combined using a new analysis technique to identify hits. Comprehensive hit follow-up experiments included target gene expression analysis, confirmation of DNA insertions/deletions, and validation with orthogonal reagents. Our results show that most hits had three or more independent crRNAs per gene that demonstrated a phenotype with consistent individual parameters, indicating that our screen produced high-confidence hits with low off-target effects and allowed us to identify hits with more subtle phenotypes. The results of our screen demonstrate the power of using arrayed, synthetic crRNAs for functional phenotypic screening using multiparameter HCA assays. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
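
The hit-calling criterion described (three or more independent crRNAs per gene showing a consistent phenotype) can be sketched over per-crRNA phenotype z-scores. The threshold, gene names, and scores below are illustrative, not from the screen.

```python
def call_hits(scores_by_gene, threshold=2.0, min_crrnas=3):
    """Genes where at least min_crrnas crRNAs exceed the phenotype threshold."""
    return sorted(
        gene for gene, scores in scores_by_gene.items()
        if sum(abs(s) >= threshold for s in scores) >= min_crrnas
    )

# Per-crRNA phenotype z-scores, four crRNAs per gene (illustrative).
screen = {
    "CDK1":  [3.1, 2.8, 2.4, 0.9],   # 3 concordant crRNAs -> hit
    "GAPDH": [2.6, 0.2, 0.1, 0.4],   # single active crRNA -> likely off-target
    "PLK1":  [4.0, 3.5, 2.9, 2.2],   # 4 concordant crRNAs -> hit
}
hits = call_hits(screen)
```

Requiring concordance across independent reagents is what gives the screen its low off-target rate: a single strong crRNA, as for "GAPDH" above, is not enough to call the gene.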

  15. Recombinant differential anchorage probes that tower over the spatial dimension of intracellular signals for high content screening and analysis.

    Science.gov (United States)

    Schembri, Laura; Zanese, Marion; Depierre-Plinet, Gaelle; Petit, Muriel; Elkaoukabi-Chaibi, Assia; Tauzin, Loic; Florean, Cristina; Lartigue, Lydia; Medina, Chantal; Rey, Christophe; Belloc, Francis; Reiffers, Josy; Ichas, François; De Giorgi, Francesca

    2009-12-01

    Recombinant fluorescent probes allow the detection of molecular events inside living cells. Many of them exploit the intracellular space to provide positional signals and thus require detection by single-cell imaging. We describe here a novel strategy based on probes capable of encoding the spatial dimension of intracellular signals into "all-or-none" fluorescence intensity changes (differential anchorage probes, DAPs). The resulting signals can be acquired in single cells at high throughput by automated flow cytometry, (i) bypassing image acquisition and analysis, (ii) providing a direct quantitative readout, and (iii) allowing the exploration of large experimental series. We illustrate this strategy with DAPs for Bax and the effector caspases 3 and 7, which are key players in apoptotic cell death, and show applications in basic research, high-content multiplexed library screening, compound characterization, and drug profiling.

  16. Comparison of multivariate data analysis strategies for high-content screening.

    Science.gov (United States)

    Kümmel, Anne; Selzer, Paul; Beibel, Martin; Gubler, Hanspeter; Parker, Christian N; Gabriel, Daniela

    2011-03-01

    High-content screening (HCS) is increasingly used in biomedical research, generating multivariate, single-cell data sets. Before scoring a treatment, the complex data sets are processed (e.g., normalized, reduced to a lower dimensionality) to help extract valuable information. However, there has been no published comparison of the performance of these methods. This study comparatively evaluates unbiased approaches to reduce dimensionality as well as to summarize cell populations. To evaluate these different data-processing strategies, the prediction accuracies and the Z' factors of control compounds of an HCS cell cycle data set were monitored. As expected, dimension reduction led to a lower degree of discrimination between control samples. A high degree of classification accuracy was achieved when the cell population was summarized on the well level using percentile values. In conclusion, the generic data analysis pipeline described here enables a systematic review of alternative strategies to analyze multiparametric results from biological systems.
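
The two quantities monitored in this comparison, per-well percentile summarization of single-cell values and the Z' factor between control wells, can be sketched directly from their standard definitions (nearest-rank percentile assumed; control values are illustrative).

```python
import math
from statistics import mean, stdev

def percentile(values, q):
    """Nearest-rank percentile, used to summarize a cell population per well."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q / 100 * len(ordered)))
    return ordered[rank - 1]

def z_prime(positive, negative):
    """Z' factor: values above ~0.5 indicate a robust assay window."""
    return 1 - 3 * (stdev(positive) + stdev(negative)) / abs(mean(positive) - mean(negative))

# Per-well summarized scores for positive and negative control compounds.
pos_ctrl = [10.0, 10.5, 9.5, 10.0]
neg_ctrl = [1.0, 1.2, 0.8, 1.0]
assay_quality = z_prime(pos_ctrl, neg_ctrl)
```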

  17. Factor analysis in optimization of formulation of high content uniformity tablets containing low dose active substance.

    Science.gov (United States)

    Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David

    2017-09-11

    Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and the process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The obtained results prove the suitability of factor analysis for optimizing the composition with respect to previously manufactured batches, and thus the use of meta-analysis under industrial conditions is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
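
The process capability index used to judge content uniformity against specification limits can be sketched from its standard definition. The assay values and the 90-110% limits below are illustrative, not the EP limits applied in the study.

```python
from statistics import mean, stdev

def cpk(values, lsl, usl):
    """Process capability index against lower/upper specification limits."""
    mu, sigma = mean(values), stdev(values)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Assay results as % of label claim for a low-dose tablet batch (illustrative),
# judged against hypothetical 90-110% content limits.
batch = [98.0, 100.0, 102.0, 100.0, 100.0]
capability = cpk(batch, lsl=90.0, usl=110.0)
```

A Cpk above ~1.33 is conventionally taken to indicate a capable process; the batch above sits well inside its limits.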

  18. Adaptation of a Cell-Based High Content Screening System for the In-Depth Analysis of Celiac Biopsy Tissue.

    Science.gov (United States)

    Cooper, Sarah E J; Mohamed, Bashir M; Elliott, Louise; Davies, Anthony Mitchell; Feighery, Conleth F; Kelly, Jacinta; Dunne, Jean

    2015-01-01

    The IN Cell Analyzer 1000 possesses several distinguishing features that make it a valuable tool in research today. This fully automated high content screening (HCS) system introduced quantitative fluorescence microscopy with computerized image analysis for use in cell-based analysis. Previous studies have focused on live cell assays, where it has proven to be a powerful and robust method capable of providing reproducible, quantitative data. Using HCS as a tool to investigate antigen expression in duodenal biopsies, we developed a novel approach to tissue positioning and mapping. We adapted the IN Cell Analyzer 1000's image acquisition and analysis software for the investigation of tissue transglutaminase (tTG) and smooth muscle alpha-actin (SM α-actin) staining in paraffin-embedded duodenal tissue sections from celiac patients and healthy controls. These innovations allowed a quantitative analysis of cellular structure and protein expression. The results from routine biopsy material indicated that the intensity of protein expression was altered in celiac disease compared to normal biopsy material.

  19. Thermogravimetric Analysis of Effects of High-Content Limestone Addition on Combustion Characteristics of Taixi Anthracite

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hong; LI Mei; SUN Min; WEI Xian-yong

    2004-01-01

    Combustion characteristics of Taixi anthracite admixed with a high content of limestone were investigated by thermogravimetric analysis. The results show that limestone addition has only a slight promoting effect on the ignition of raw coals as a whole, but significantly accelerates their combustion and burnout. The higher the sample mass, the more significant this effect. The results also show that changing the limestone proportion between 45% and 80% has little effect on the ignition temperatures of coal in the blended samples. Increasing limestone content lowers the temperature corresponding to the maximum weight loss. Although higher maximum mass loss rates are observed with higher limestone content, this effect is ascribed not to the changed limestone addition but to the decrease in absolute coal mass in the sample. The change of limestone proportion has little effect on the burnout temperature. Mechanism analysis indicates that these phenomena result mainly from improved heat conduction due to limestone addition.

  20. Normalizing for individual cell population context in the analysis of high-content cellular screens

    Directory of Open Access Journals (Sweden)

    Knapp Bettina

    2011-12-01

    Background: High-content, high-throughput RNA interference (RNAi) offers unprecedented possibilities to elucidate gene function and involvement in biological processes. Microscopy-based screening allows phenotypic observations at the level of individual cells. It was recently shown that a cell's population context significantly influences results. However, standard analysis methods for cellular screens do not currently take individual cell data into account unless this is important for the phenotype of interest, i.e. when studying cell morphology. Results: We present a method that normalizes and statistically scores microscopy-based RNAi screens, exploiting individual cell information of hundreds of cells per knockdown. Each cell's individual population context is employed in normalization. We present results on two infection screens for hepatitis C and dengue virus, both showing considerable effects on observed phenotypes due to population context. In addition, we show on a non-virus screen that these effects can be found also in RNAi data in the absence of any virus. Using our approach to normalize against these effects, we achieve improved performance in comparison to an analysis without this normalization and hit scoring strategy. Furthermore, our approach results in the identification of considerably more significantly enriched pathways in hepatitis C virus replication than a standard analysis approach. Conclusions: Using a cell-based analysis and normalization for population context, we achieve improved sensitivity and specificity not only on an individual protein level, but especially also on a pathway level. This leads to the identification of new host dependency factors of the hepatitis C and dengue viruses and higher reproducibility of results.
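
The core idea, scoring each cell's phenotype against what its own population context predicts, can be sketched as a least-squares fit of the phenotype against local cell density, with residuals as the normalized scores. Variable names and data are illustrative; the published method models richer context features than density alone.

```python
def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def context_normalize(densities, phenotypes):
    """Residual phenotype after removing the population-context trend."""
    intercept, slope = fit_line(densities, phenotypes)
    return [y - (intercept + slope * x) for x, y in zip(densities, phenotypes)]

# Local cell density around each cell vs. raw infection readout (illustrative).
density = [10, 20, 30, 40]
infection = [1.0, 2.0, 3.0, 4.5]
normalized = context_normalize(density, infection)
```

After this correction a knockdown is scored only by the phenotype it shows beyond what its cells' crowding would predict, which is what removes the density confound from hit calling.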

  1. High content analysis at single cell level identifies different cellular responses dependent on nanomaterial concentrations.

    Science.gov (United States)

    Manshian, Bella B; Munck, Sebastian; Agostinis, Patrizia; Himmelreich, Uwe; Soenen, Stefaan J

    2015-09-08

    A mechanistic understanding of nanomaterial (NM) interaction with biological environments is pivotal for the safe transition from basic science to applied nanomedicine. NM exposure results in varying levels of internalized NM in different neighboring cells, due to variances in cell size, cell cycle phase and NM agglomeration. Using high-content analysis, we investigated the cytotoxic effects of fluorescent quantum dots on cultured cells, where all effects were correlated with the concentration of NMs at the single cell level. Upon binning the single cell data into different categories related to NM concentration, this study demonstrates, for the first time, that quantum dots activate both cytoprotective and cytotoxic mechanisms, resulting in a zero net result on the overall cell population, yet with significant effects in cells with higher cellular NM levels. Our results suggest that future NM cytotoxicity studies should correlate NM toxicity with cellular NM numbers on the single cell level, as conflicting mechanisms in particular cell subpopulations are commonly overlooked using classical toxicological methods.
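
The binning step described, grouping single cells by their internalized-NM level before averaging a response, can be sketched as follows. The per-cell quantum-dot loads, viability values, and bin edges are illustrative; the point is that a high-load subpopulation effect is masked in the overall mean.

```python
def bin_means(loads, responses, edges):
    """Mean response per NM-load bin; edges define [e0, e1), [e1, e2), ..."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for load, resp in zip(loads, responses):
        for i in range(len(edges) - 1):
            if edges[i] <= load < edges[i + 1]:
                sums[i] += resp
                counts[i] += 1
                break
    return [s / c if c else None for s, c in zip(sums, counts)]

# Per-cell quantum-dot load vs. a viability marker (illustrative): toxicity
# appears only in the high-load subpopulation.
loads = [5, 8, 12, 15, 40, 45, 50, 55]
viability = [0.95, 0.94, 0.93, 0.92, 0.60, 0.55, 0.52, 0.50]
per_bin = bin_means(loads, viability, edges=[0, 20, 60])
overall = sum(viability) / len(viability)
```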

  3. High-content analysis to leverage a robust phenotypic profiling approach to vascular modulation.

    Science.gov (United States)

    Isherwood, Beverley J; Walls, Rebecca E; Roberts, Mark E; Houslay, Thomas M; Brave, Sandra R; Barry, Simon T; Carragher, Neil O

    2013-12-01

    Phenotypic screening seeks to identify substances that modulate phenotypes in a desired manner with the aim of progressing first-in-class agents. Successful campaigns require physiological relevance, robust screening, and an ability to deconvolute perturbed pathways. High-content analysis (HCA) is increasingly used in cell biology and offers one approach to prosecuting phenotypic screens, but challenges remain in exploiting it where the data generated are high volume and complex. We combine development of an organotypic model with novel HCA tools to map phenotypic responses to pharmacological perturbations. We describe implementation for angiogenesis, a process that has long been a focus for therapeutic intervention but has lacked robust models that more completely recapitulate the mechanisms involved. The study used human primary endothelial cells in co-culture with stromal fibroblasts to model multiple aspects of angiogenic signaling: cell interactions, proliferation, migration, and differentiation. Multiple quantitative descriptors were derived from automated microscopy using custom-designed algorithms. Data were extracted using a bespoke informatics platform that integrates processing, statistics, and feature display into a streamlined workflow for building and interrogating fingerprints. Ninety compounds were characterized, defining mode of action by phenotype. Our approach for assessing phenotypic outcomes in complex assay models is robust and capable of supporting a range of phenotypic screens at scale.
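Fingerprint-based mode-of-action grouping of the kind described here ultimately rests on a similarity measure between descriptor vectors; Pearson correlation is a common choice. A minimal sketch — the actual descriptors and metric are the paper's own.

```python
import math

def pearson(a, b):
    """Pearson correlation between two phenotypic fingerprints
    (equal-length vectors of quantitative descriptors)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)
```

Compounds whose fingerprints correlate strongly are candidates for a shared mode of action.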

  4. High-Content Imaging Assays for Identifying Compounds that Generate Superoxide and Impair Mitochondrial Membrane Potential in Adherent Eukaryotic Cells.

    Science.gov (United States)

    Billis, Puja; Will, Yvonne; Nadanaciva, Sashi

    2014-02-19

    Reactive oxygen species (ROS) are constantly produced in cells as a result of aerobic metabolism. When there is an excessive production of ROS and the cell's antioxidant defenses are overwhelmed, oxidative stress occurs. The superoxide anion is a type of ROS that is produced primarily in mitochondria but is also generated in other regions of the cell including peroxisomes, endoplasmic reticulum, plasma membrane, and cytosol. Here, a high-content imaging assay using the dye dihydroethidium is described for identifying compounds that generate superoxide in eukaryotic cells. A high-content imaging assay using the fluorescent dye tetramethylrhodamine methyl ester is also described to identify compounds that impair mitochondrial membrane potential in eukaryotic cells. The purpose of performing both assays is to identify compounds that (1) generate superoxide at lower concentrations than they impair mitochondrial membrane potential, (2) impair mitochondrial membrane potential at lower concentrations than they generate superoxide, (3) generate superoxide and impair mitochondrial function at similar concentrations, and (4) do not generate superoxide or impair mitochondrial membrane potential during the duration of the assays.
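The four outcome classes enumerated above reduce to comparing the lowest effective concentrations of the two endpoints. A sketch, assuming AC50 values have already been fitted per endpoint; the three-fold cutoff for calling concentrations "similar" is a hypothetical choice, not from the source.

```python
def classify(ac50_superoxide, ac50_mmp, fold=3.0):
    """Bucket a compound by which effect appears at lower concentration.
    An AC50 of None means no effect within the tested range; `fold` is a
    hypothetical cutoff for calling two AC50s similar."""
    if ac50_superoxide is None and ac50_mmp is None:
        return "neither endpoint affected"
    if ac50_superoxide is None:
        return "impairs MMP only"
    if ac50_mmp is None:
        return "generates superoxide only"
    lo, hi = sorted((ac50_superoxide, ac50_mmp))
    if hi / lo <= fold:
        return "similar concentrations"
    return ("superoxide at lower concentration"
            if ac50_superoxide < ac50_mmp else "MMP at lower concentration")
```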

  5. High-content imaging with micropatterned multiwell plates reveals influence of cell geometry and cytoskeleton on chromatin dynamics.

    Science.gov (United States)

    Harkness, Ty; McNulty, Jason D; Prestil, Ryan; Seymour, Stephanie K; Klann, Tyler; Murrell, Michael; Ashton, Randolph S; Saha, Krishanu

    2015-10-01

    Understanding the mechanisms underpinning cellular responses to microenvironmental cues requires tight control not only of the complex milieu of soluble signaling factors, extracellular matrix (ECM) connections and cell-cell contacts within cell culture, but also of the biophysics of human cells. Advances in biomaterial fabrication technologies have recently facilitated detailed examination of cellular biophysics and revealed that constraints on cell geometry arising from the cellular microenvironment influence a wide variety of human cell behaviors. Here, we create an in vitro platform capable of precise and independent control of biochemical and biophysical microenvironmental cues by adapting microcontact printing technology into the format of standard six- to 96-well plates to create MicroContact Printed Well Plates (μCP Well Plates). Automated high-content imaging of human cells seeded on μCP Well Plates revealed tight, highly consistent control of single-cell geometry, cytoskeletal organization, and nuclear elongation. Detailed subcellular imaging of the actin cytoskeleton and chromatin within live human fibroblasts on μCP Well Plates was then used to describe a new relationship between cellular geometry and chromatin dynamics. In summary, the μCP Well Plate platform is an enabling high-content screening technology for human cell biology and cellular engineering efforts that seek to identify key biochemical and biophysical cues in the cellular microenvironment.
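Shape features such as the nuclear elongation reported here are commonly computed from segmented pixel coordinates via the eigenvalues of the covariance matrix. The function below is an illustrative implementation of that standard aspect-ratio feature, not the authors' exact descriptor.

```python
import math

def elongation(points):
    """Aspect ratio (major/minor axis) of a 2-D point cloud, e.g. the
    pixel coordinates of a segmented nucleus, from the eigenvalues of
    its covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    minor = tr / 2.0 - root
    return math.sqrt((tr / 2.0 + root) / minor) if minor > 0 else float("inf")
```

A round nucleus scores near 1; a nucleus stretched along a micropatterned line scores proportionally higher.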

  7. Assessment of Cr(VI)-Induced Cytotoxicity and Genotoxicity Using High Content Analysis

    Science.gov (United States)

    Thompson, Chad M.; Fedorov, Yuriy; Brown, Daniel D.; Suh, Mina; Proctor, Deborah M.; Kuriakose, Liz; Haws, Laurie C.; Harris, Mark A.

    2012-01-01

    Oral exposure to high concentrations of hexavalent chromium [Cr(VI)] induces intestinal redox changes, villus cytotoxicity, crypt hyperplasia, and intestinal tumors in mice. To assess the effects of Cr(VI) in a cell model relevant to the intestine, undifferentiated (proliferating) and differentiated (confluent) Caco-2 cells were treated with Cr(VI), hydrogen peroxide or rotenone for 2–24 hours. DNA damage was then assessed by nuclear staining intensity of 8-hydroxydeoxyguanosine (8-OHdG) and phosphorylated histone variant H2AX (γ-H2AX) measured by high content analysis methods. In undifferentiated Caco-2, all three chemicals increased 8-OHdG and γ-H2AX staining at cytotoxic concentrations, whereas only 8-OHdG was elevated at non-cytotoxic concentrations at 24 hr. Differentiated Caco-2 were more resistant to cytotoxicity and DNA damage than undifferentiated cells, and there were no changes in apoptotic markers p53 or annexin-V. However, Cr(VI) induced a dose-dependent translocation of the unfolded protein response transcription factor ATF6 into the nucleus. Micronucleus (MN) formation was assessed in CHO-K1 and A549 cell lines. Cr(VI) increased MN frequency in CHO-K1 only at highly cytotoxic concentrations. Relative to the positive control Mitomycin-C, Cr(VI) only slightly increased MN frequency in A549 at mildly cytotoxic concentrations. The results demonstrate that Cr(VI) genotoxicity correlates with cytotoxic concentrations, and that H2AX phosphorylation occurs at higher concentrations than oxidative DNA damage in proliferating Caco-2 cells. The findings suggest that in vitro genotoxicity of Cr(VI) is primarily oxidative in nature at low concentrations. Implications for in vivo intestinal toxicity of Cr(VI) will be discussed. PMID:22905163
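The claim that genotoxicity tracks cytotoxic concentrations can be checked by comparing lowest-observed-effect concentrations (LOECs) per endpoint. The sketch below uses a mean-plus-3-SD-of-control threshold, a common but here assumed rule; the dose values are hypothetical.

```python
def loec(doses, responses, control_mean, control_sd, k=3.0):
    """Lowest dose whose response exceeds control mean + k*SD, or None
    if no dose crosses the threshold."""
    threshold = control_mean + k * control_sd
    for dose, resp in sorted(zip(doses, responses)):
        if resp > threshold:
            return dose
    return None
```

If the LOEC for γ-H2AX staining is at or above the LOEC for cytotoxicity, DNA damage by that marker appears only at cytotoxic doses, as reported here.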

  8. Nanoscale high-content analysis using compositional heterogeneities of single proteoliposomes

    DEFF Research Database (Denmark)

    Mathiasen, Signe; Christensen, Sune M.; Fung, Juan José

    2014-01-01

    heterogeneities can severely skew ensemble-average proteoliposome measurements but also enable ultraminiaturized high-content screens. We took advantage of this screening capability to map the oligomerization energy of the β2-adrenergic receptor using ∼10(9)-fold less protein than conventional assays....

  9. High-content live cell imaging with RNA probes: advancements in high-throughput antimalarial drug discovery

    Directory of Open Access Journals (Sweden)

    Cervantes Serena

    2009-06-01

    Full Text Available Abstract Background Malaria, a major public health issue in developing nations, is responsible for more than one million deaths a year. The most lethal species, Plasmodium falciparum, causes up to 90% of fatalities. Drug-resistant strains to common therapies have emerged worldwide, and recent artemisinin-based combination therapy failures hasten the need for new antimalarial drugs. Discovering novel compounds to be used as antimalarials is expedited by the use of a high-throughput screen (HTS) to detect parasite growth and proliferation. Fluorescent dyes that bind to DNA have replaced expensive traditional radioisotope incorporation for HTS growth assays, but do not give additional information regarding the parasite stage affected by the drug or a better indication of the drug's mode of action. Live cell imaging with RNA dyes, which correlates with cell growth and proliferation, has been limited by the availability of successful commercial dyes. Results After screening a library of newly synthesized styryl dyes, we discovered three RNA-binding dyes that provide morphological details of live parasites. Utilizing an inverted confocal imaging platform, live cell imaging of parasites increases parasite detection, improves the spatial and temporal resolution of the parasite under drug treatments, and can resolve morphological changes in individual cells. Conclusion This simple one-step technique is suitable for automation in a microplate format for novel antimalarial compound HTS. We have developed a new P. falciparum RNA high-content imaging growth inhibition assay that is robust as well as time- and energy-efficient.
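Growth-inhibition readouts from an assay like this are typically summarized as an IC50. A minimal sketch using linear interpolation between the two doses that bracket 50% growth; the dose and growth values in the test are hypothetical.

```python
def ic50(doses, pct_growth):
    """Dose giving 50% growth, by linear interpolation between the two
    measurements that bracket 50% (assumes growth falls as dose rises);
    returns None if 50% is never crossed."""
    pts = sorted(zip(doses, pct_growth))
    for (d0, g0), (d1, g1) in zip(pts, pts[1:]):
        if g0 >= 50.0 >= g1:
            return d0 + (g0 - 50.0) * (d1 - d0) / (g0 - g1)
    return None
```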

  10. HC StratoMineR: A Web-Based Tool for the Rapid Analysis of High-Content Datasets.

    Science.gov (United States)

    Omta, Wienand A; van Heesbeen, Roy G; Pagliero, Romina J; van der Velden, Lieke M; Lelieveld, Daphne; Nellen, Mehdi; Kramer, Maik; Yeong, Marley; Saeidi, Amir M; Medema, Rene H; Spruit, Marco; Brinkkemper, Sjaak; Klumperman, Judith; Egan, David A

    2016-10-01

    High-content screening (HCS) can generate large multidimensional datasets and, when aligned with the appropriate data mining tools, can yield valuable insights into the mechanism of action of bioactive molecules. However, easy-to-use data mining tools are not widely available, with the result that these datasets are frequently underutilized. Here, we present HC StratoMineR, a web-based tool for high-content data analysis. It is a decision-supportive platform that guides even non-expert users through a high-content data analysis workflow. HC StratoMineR is built using MySQL for storage and querying, PHP as the main programming language, and jQuery for additional user interface functionality. R is used for statistical calculations, logic and data visualizations. Furthermore, C++ and graphics processing unit power is embedded in R by using the rcpp and rpud libraries for operations that are computationally highly intensive. We show that we can use HC StratoMineR for the analysis of multivariate data from a high-content siRNA knock-down screen and a small-molecule screen. It can be used to rapidly filter out undesirable data; to select relevant data; and to perform quality control, data reduction, data exploration, morphological hit picking, and data clustering. Our results demonstrate that HC StratoMineR can be used to functionally categorize HCS hits and, thus, provide valuable information for hit prioritization.
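The core of such a workflow — standardize each feature, then score a sample's multiparametric profile by its distance from the plate centre — can be sketched as below. This is an illustration of the general approach, not HC StratoMineR's code.

```python
import math
import statistics

def standardize(columns):
    """Z-score each feature column (a list of per-sample values)."""
    out = []
    for col in columns:
        m, s = statistics.mean(col), statistics.pstdev(col)
        out.append([(x - m) / s for x in col])
    return out

def profile_distance(features):
    """Euclidean length of a standardized profile; large values flag
    candidate morphological hits."""
    return math.sqrt(sum(f * f for f in features))
```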

  11. Integrating high-content imaging and chemical genetics to probe host cellular pathways critical for Yersinia pestis infection.

    Directory of Open Access Journals (Sweden)

    Krishna P Kota

    Full Text Available The molecular machinery that regulates the entry and survival of Yersinia pestis in host macrophages is poorly understood. Here, we report the development of automated high-content imaging assays to quantitate the internalization of virulent Y. pestis CO92 by macrophages and the subsequent activation of host NF-κB. Implementation of these assays in a focused chemical screen identified kinase inhibitors that inhibited both of these processes. Rac-2-ethoxy-3-octadecanamido-1-propylphosphocholine (a protein kinase C inhibitor), wortmannin (a PI3K inhibitor), and parthenolide (an IκB kinase inhibitor) inhibited pathogen-induced NF-κB activation and reduced bacterial entry and survival within macrophages. Parthenolide inhibited NF-κB activation in response to stimulation with Pam3CSK4 (a TLR2 agonist), E. coli LPS (a TLR4 agonist) or Y. pestis infection, while the PI3K and PKC inhibitors were selective only for Y. pestis infection. Together, our results suggest that phagocytosis is the major stimulus for NF-κB activation in response to Y. pestis infection, and that Y. pestis entry into macrophages may involve the participation of protein kinases such as PI3K and PKC. More importantly, the automated image-based screening platform described here can be applied to the study of other bacteria in general and, in combination with chemical genetic screening, can be used to identify host cell functions, facilitating the identification of novel antibacterial therapeutics.

  12. Recent advances in quantitative high throughput and high content data analysis.

    Science.gov (United States)

    Moutsatsos, Ioannis K; Parker, Christian N

    2016-01-01

    High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from imaging or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use of these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.

  13. High content analysis platform for optimization of lipid mediated CRISPR-Cas9 delivery strategies in human cells.

    Science.gov (United States)

    Steyer, Benjamin; Carlson-Stevermer, Jared; Angenent-Mari, Nicolas; Khalil, Andrew; Harkness, Ty; Saha, Krishanu

    2016-04-01

    Non-viral gene-editing of human cells using the CRISPR-Cas9 system requires optimized delivery of multiple components. Both the Cas9 endonuclease and a single guide RNA, which defines the genomic target, need to be present and co-localized within the nucleus for efficient gene-editing to occur. This work describes a new high-throughput screening platform for the optimization of CRISPR-Cas9 delivery strategies. By exploiting high content image analysis and microcontact printed plates, multi-parametric gene-editing outcome data from hundreds to thousands of isolated cell populations can be screened simultaneously. Employing this platform, we systematically screened four commercially available cationic lipid transfection materials with a range of RNAs encoding the CRISPR-Cas9 system. Analysis of Cas9 expression and editing of a fluorescent mCherry reporter transgene within human embryonic kidney cells was monitored over several days after transfection. Design of experiments analysis enabled rigorous evaluation of delivery materials and RNA concentration conditions. The results of this analysis indicated that the concentration and identity of transfection material have significantly greater effect on gene-editing than ratio or total amount of RNA. Cell subpopulation analysis on microcontact printed plates further revealed that low cell number and high Cas9 expression, 24 h after CRISPR-Cas9 delivery, were strong predictors of gene-editing outcomes. These results suggest design principles for the development of materials and transfection strategies with lipid-based materials. This platform could be applied to rapidly optimize materials for gene-editing in a variety of cell/tissue types in order to advance genomic medicine, regenerative biology and drug discovery. CRISPR-Cas9 is a new gene-editing technology for "genome surgery" that is anticipated to treat genetic diseases. This technology uses multiple components of the Cas9 system to cut out disease-causing mutations.
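The design-of-experiments conclusion here (transfection material matters more than RNA ratio or amount) is the kind of statement a two-level factorial main-effects calculation supports. A minimal sketch with hypothetical editing percentages; factor 0 stands for material concentration and factor 1 for RNA amount.

```python
def main_effect(results, factor):
    """Main effect of one factor in a two-level full factorial design:
    mean response at the high level minus mean at the low level.
    `results` maps tuples of 0/1 factor levels to a response."""
    hi = [v for levels, v in results.items() if levels[factor] == 1]
    lo = [v for levels, v in results.items() if levels[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

With the toy data below, the material factor's effect (20 points of editing) dwarfs the RNA factor's (2 points), mirroring the abstract's ranking.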

  14. High-content screening of drug-induced cardiotoxicity using quantitative single cell imaging cytometry on microfluidic device.

    Science.gov (United States)

    Kim, Min Jung; Lee, Su Chul; Pal, Sukdeb; Han, Eunyoung; Song, Joon Myong

    2011-01-07

    Drug-induced cardiotoxicity or cytotoxicity followed by cell death in cardiac muscle is one of the major concerns in drug development. Herein, we report a high-content quantitative multicolor single cell imaging tool for automatic screening of drug-induced cardiotoxicity in an intact cell. A tunable multicolor imaging system coupled with a miniaturized sample platform was designed to elucidate drug-induced cardiotoxicity via simultaneous quantitative monitoring of intracellular sodium ion concentration, potassium ion channel permeability and apoptosis/necrosis in the H9c2(2-1) cell line. Cells were treated with cisapride (a human ether-à-go-go-related gene (hERG) channel blocker), digoxin (Na(+)/K(+)-pump blocker), camptothecin (anticancer agent) and a newly synthesized anti-cancer drug candidate (SH-03). Decrease in potassium channel permeability in cisapride-treated cells indicated that it can also inhibit the trafficking of the hERG channel. Digoxin treatment resulted in an increase of intracellular [Na(+)]. However, it did not affect potassium channel permeability. Camptothecin and SH-03 did not show any cytotoxic effect at normal use (≤300 nM and 10 μM, respectively). This result clearly indicates the potential of SH-03 as a new anticancer drug candidate. The developed method was also used to correlate the cell death pathway with alterations in intracellular [Na(+)]. The developed protocol can directly depict and quantitate targeted cellular responses, subsequently enabling an automated, easy-to-operate tool that is applicable to drug-induced cytotoxicity monitoring with special reference to next generation drug discovery screening. This multicolor imaging based system has great potential as a complementary system to the conventional patch clamp technique and flow cytometric measurement for the screening of drug cardiotoxicity.
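Per-cell multicolor readouts like the apoptosis/necrosis monitoring described here are commonly gated into classes with a two-marker quadrant gate. A sketch under assumed normalized intensities; the marker names and thresholds are hypothetical, not from the source.

```python
def gate_cell(apoptosis_signal, necrosis_signal, t_a=0.5, t_n=0.5):
    """Quadrant gating of one cell from two normalized marker
    intensities (thresholds t_a, t_n are hypothetical)."""
    if apoptosis_signal >= t_a and necrosis_signal >= t_n:
        return "late apoptotic"
    if apoptosis_signal >= t_a:
        return "early apoptotic"
    if necrosis_signal >= t_n:
        return "necrotic"
    return "viable"
```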

  15. Evaluation of the cytotoxicity of organic dust components on THP-1 monocyte-derived macrophages using high content analysis.

    Science.gov (United States)

    Ramery, Eve; O'Brien, Peter J

    2014-03-01

    Organic dust contains pathogen-associated molecular patterns (PAMPs) which can induce significant airway diseases following chronic exposure. Mononuclear phagocytes are key protecting cells of the respiratory tract. Several studies have investigated the effects of PAMPs, mainly endotoxins, on cytokine production. However, the sublethal cytotoxicity of organic dust components on macrophages has not yet been tested. The novel technology of high content analysis (HCA) is already used to assess subclinical drug-induced toxicity. It combines the capabilities of flow cytometry, intracellular fluorescence probes, and image analysis and enables rapid multiple analyses in large numbers of samples. In this study, HCA was used to investigate the cytotoxicity of the three major PAMPs contained in organic dust, i.e., endotoxin (LPS), peptidoglycan (PGN) and β-glucans (zymosan), on THP-1 monocyte-derived macrophages. LPS was used at concentrations of 0.005, 0.01, 0.02, 0.05, 0.1, and 1 μg/mL; PGN and zymosan were used at concentrations of 1, 5, 10, 50, 100, and 500 μg/mL. Cells were exposed to PAMPs for 24 h. In addition, the oxidative burst and the phagocytic capabilities of the cells were tested. An overlap between PGN intrinsic fluorescence and red/far-red fluorescent dyes occurred, rendering the evaluation of some parameters impossible for PGN. LPS induced sublethal cytotoxicity at the lowest dose (from 50 ng/mL). However, the greatest cytotoxic changes occurred with zymosan. In addition, zymosan, but not LPS, induced phagosome maturation and oxidative burst. Given the fact that β-glucans can be up to 100-fold more concentrated in organic dust than LPS, these results suggest that β-glucans could play a major role in macrophage impairment following heavy dust exposure and will merit further investigation in the near future.

  16. High content image-based screening of a protease inhibitor library reveals compounds broadly active against Rift Valley fever virus and other highly pathogenic RNA viruses.

    Directory of Open Access Journals (Sweden)

    Rajini Mudhasani

    2014-08-01

    Full Text Available High content image-based screening was developed as an approach to test a protease inhibitor small molecule library for antiviral activity against Rift Valley fever virus (RVFV) and to determine their mechanism of action. RVFV is the causative agent of severe disease of humans and animals throughout Africa and the Arabian Peninsula. Of the 849 compounds screened, 34 compounds exhibited ≥ 50% inhibition against RVFV. All of the hit compounds could be classified into 4 distinct groups based on their unique chemical backbone. Some of the compounds also showed broad antiviral activity against several highly pathogenic RNA viruses including Ebola, Marburg, Venezuelan equine encephalitis, and Lassa viruses. Four hit compounds (C795-0925, D011-2120, F694-1532 and G202-0362), which were most active against RVFV and showed broad-spectrum antiviral activity, were selected for further evaluation for their cytotoxicity, dose response profile, and mode of action using classical virological methods and high-content imaging analysis. Time-of-addition assays in RVFV infections suggested that D011-2120 and G202-0362 targeted virus egress, while C795-0925 and F694-1532 inhibited virus replication. We showed that D011-2120 exhibited its antiviral effects by blocking microtubule polymerization, thereby disrupting the Golgi complex and inhibiting viral trafficking to the plasma membrane during virus egress. While G202-0362 also affected virus egress, it appears to do so by a different mechanism, namely by blocking virus budding from the trans Golgi. F694-1532 inhibited viral replication, but also appeared to inhibit overall cellular gene expression. However, G202-0362 and C795-0925 did not alter any of the morphological features that we examined and thus may prove to be good candidates for antiviral drug development. Overall this work demonstrates that high-content image analysis can be used to screen chemical libraries for new antivirals and to determine their mechanism of action.
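Hit calling at the record's ≥50% inhibition cutoff is a one-liner worth making explicit; compound names in the test are hypothetical.

```python
def select_hits(inhibition_by_compound, cutoff=50.0):
    """Return compounds whose percent inhibition meets the cutoff,
    mirroring the >= 50% criterion used in the screen."""
    return sorted(c for c, pct in inhibition_by_compound.items() if pct >= cutoff)
```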

  17. High content analysis of an in vitro model for metabolic toxicity: results with the model toxicants 4-aminophenol and cyclophosphamide.

    Science.gov (United States)

    Cole, Stephanie D; Madren-Whalley, Janna S; Li, Albert P; Dorsey, Russell; Salem, Harry

    2014-12-01

    In vitro models that accurately and rapidly assess hepatotoxicity and the effects of hepatic metabolism on nonliver cell types are needed by the U.S. Department of Defense and the pharmaceutical industry to screen compound libraries. Here, we report the first use of high content analysis on the Integrated Discrete Multiple Organ Co-Culture (IdMOC) system, a high-throughput method for such studies. We cultured 3T3-L1 cells in the presence and absence of primary human hepatocytes, and exposed the cultures to 4-aminophenol and cyclophosphamide, model toxicants that are respectively detoxified and activated by the liver. Following staining with calcein-AM, ethidium homodimer-1, and Hoechst 33342, high content analysis of the cultures revealed four cytotoxic endpoints: fluorescence intensities of calcein-AM and ethidium homodimer-1, nuclear area, and cell density. Using these endpoints, we observed that the cytotoxicity of 4-aminophenol in 3T3-L1 cells in co-culture was less than that observed for 3T3-L1 monocultures, consistent with the known detoxification of 4-aminophenol by hepatocytes. Conversely, cyclophosphamide cytotoxicity for 3T3-L1 cells was enhanced by co-culturing with hepatocytes, consistent with the known metabolic activation of this toxicant. The use of IdMOC plates combined with high content analysis is therefore a multi-endpoint, high-throughput capability for measuring the effects of metabolism on toxicity.
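The co-culture logic of this record — hepatocytes shift a toxicant's potency for the non-liver cells up (detoxification, 4-aminophenol-like) or down (activation, cyclophosphamide-like) — can be expressed as an EC50-shift classifier. A sketch; the two-fold cutoff and EC50 values are assumptions, not from the source.

```python
def metabolism_effect(ec50_monoculture, ec50_coculture, fold=2.0):
    """Classify the effect of hepatic metabolism on a toxicant from the
    shift in EC50 of the non-liver cells (fold cutoff hypothetical)."""
    ratio = ec50_coculture / ec50_monoculture
    if ratio >= fold:
        return "detoxified"   # less toxic with hepatocytes present
    if ratio <= 1.0 / fold:
        return "activated"    # more toxic with hepatocytes present
    return "unchanged"
```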

  18. Vanillin inhibits translation and induces messenger ribonucleoprotein (mRNP) granule formation in Saccharomyces cerevisiae: application and validation of high-content, image-based profiling.

    Science.gov (United States)

    Iwaki, Aya; Ohnuki, Shinsuke; Suga, Yohei; Izawa, Shingo; Ohya, Yoshikazu

    2013-01-01

    Vanillin, generated by acid hydrolysis of lignocellulose, acts as a potent inhibitor of the growth of the yeast Saccharomyces cerevisiae. Here, we investigated the cellular processes affected by vanillin using high-content, image-based profiling. Among 4,718 non-essential yeast deletion mutants, the morphology of those defective in the large ribosomal subunit showed significant similarity to that of vanillin-treated cells. The defects in these mutants were clustered in three domains of the ribosome: the mRNA tunnel entrance, exit and backbone required for small subunit attachment. To confirm that vanillin inhibited ribosomal function, we assessed polysome and messenger ribonucleoprotein granule formation after treatment with vanillin. Analysis of polysome profiles showed disassembly of the polysomes in the presence of vanillin. Processing bodies and stress granules, which are composed of non-translating mRNAs and various proteins, were formed after treatment with vanillin. These results suggest that vanillin represses translation in yeast cells.
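Polysome disassembly of the kind reported here is conventionally quantified as a polysome:monosome area ratio from the gradient profile. A sketch under assumed fraction labels (the "poly*" and "80S" keys are hypothetical conventions, not the authors' data format).

```python
def polysome_monosome_ratio(areas):
    """Polysome:monosome area ratio from a sucrose-gradient profile.
    `areas` maps fraction labels to peak areas; labels starting with
    'poly' and the '80S' monosome label are assumed conventions."""
    poly = sum(v for k, v in areas.items() if k.startswith("poly"))
    return poly / areas["80S"]
```

A drop in this ratio after vanillin treatment would indicate translational repression, consistent with the profile disassembly described above.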

  19. Identifying and quantifying heterogeneity in high content analysis: application of heterogeneity indices to drug discovery.

    Directory of Open Access Journals (Sweden)

    Albert H Gough

    Full Text Available One of the greatest challenges in biomedical research, drug discovery and diagnostics is understanding how seemingly identical cells can respond differently to perturbagens including drugs for disease treatment. Although heterogeneity has become an accepted characteristic of a population of cells, in drug discovery it is not routinely evaluated or reported. The standard practice for cell-based, high content assays has been to assume a normal distribution and to report a well-to-well average value with a standard deviation. To address this important issue we sought to define a method that could be readily implemented to identify, quantify and characterize heterogeneity in cellular and small organism assays to guide decisions during drug discovery and experimental cell/tissue profiling. Our study revealed that heterogeneity can be effectively identified and quantified with three indices that indicate diversity, non-normality and percent outliers. The indices were evaluated using the induction and inhibition of STAT3 activation in five cell lines where the system's response, including sample preparation and instrument performance, was well characterized and controlled. These heterogeneity indices provide a standardized method that can easily be integrated into small and large scale screening or profiling projects to guide interpretation of the biology, as well as the development of therapeutics and diagnostics. Understanding the heterogeneity in the response to perturbagens will become a critical factor in designing strategies for the development of therapeutics including targeted polypharmacology.
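Of the three indices named above, the percent-outliers index is the most direct to sketch: count cells outside the Tukey fences of the per-well distribution. This is an illustrative implementation of that general idea, not the authors' exact definition; the diversity and non-normality indices would follow the same per-well pattern.

```python
import statistics

def percent_outliers(values, k=1.5):
    """Fraction of cells falling outside the Tukey fences
    (Q1 - k*IQR, Q3 + k*IQR) -- one of three heterogeneity indices."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return sum(v < lo or v > hi for v in values) / len(values)
```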

  20. An RNA replication-center assay for high content image-based quantifications of human rhinovirus and coxsackievirus infections

    Directory of Open Access Journals (Sweden)

    Lötzerich Mark

    2010-10-01

    Full Text Available Abstract Background Picornaviruses are common human and animal pathogens, including polio and rhinoviruses of the enterovirus family, and hepatitis A or foot-and-mouth disease viruses. There are no effective countermeasures against the vast majority of picornaviruses, with the exception of polio and hepatitis A vaccines. Human rhinoviruses (HRV) are the most prevalent picornaviruses, comprising more than one hundred serotypes. The existing and also emerging HRVs pose severe health risks for patients with asthma or chronic obstructive pulmonary disease. Here, we developed a serotype-independent infection assay using a commercially available mouse monoclonal antibody (mabJ2) detecting double-strand RNA. Results Immunocytochemical staining for RNA replication centers using mabJ2 identified cells that were infected with either HRV1A, 2, 14, 16, 37 or coxsackievirus (CV) B3, B4 or A21. MabJ2-labeled cells were immunocytochemically positive for newly synthesized viral capsid proteins from HRV1A, 14, 16, 37 or CVB3, 4. We optimized the procedure for detection of virus replication in settings for high content screening with automated fluorescence microscopy and single cell analysis. Our data show that the infection signal was dependent on multiplicity, time and temperature of infection, and the mabJ2-positive cell numbers correlated with viral titres determined in single-step growth curves. The mabJ2 infection assay was adapted to determine the efficacy of anti-viral compounds and small interfering RNAs (siRNAs) blocking enterovirus infections. Conclusions We report a broadly applicable, rapid protocol to measure infection of cultured cells with enteroviruses at single cell resolution. This assay can be applied to a wide range of plus-sense RNA viruses, and hence allows comparative studies of viral infection biology without dedicated reagents or procedures. This protocol also allows one to directly compare results from small compound or siRNA infection screens for different serotypes.

  1. An RNA replication-center assay for high content image-based quantifications of human rhinovirus and coxsackievirus infections

    Science.gov (United States)

    2010-01-01

    Background Picornaviruses are common human and animal pathogens, including polio and rhinoviruses of the enterovirus family, and hepatitis A or foot-and-mouth disease viruses. There are no effective countermeasures against the vast majority of picornaviruses, with the exception of polio and hepatitis A vaccines. Human rhinoviruses (HRV) are the most prevalent picornaviruses, comprising more than one hundred serotypes. The existing and also emerging HRVs pose severe health risks for patients with asthma or chronic obstructive pulmonary disease. Here, we developed a serotype-independent infection assay using a commercially available mouse monoclonal antibody (mabJ2) detecting double-stranded RNA. Results Immunocytochemical staining for RNA replication centers using mabJ2 identified cells that were infected with either HRV1A, 2, 14, 16, 37 or coxsackievirus (CV) B3, B4 or A21. MabJ2-labeled cells were immunocytochemically positive for newly synthesized viral capsid proteins from HRV1A, 14, 16, 37 or CVB3, 4. We optimized the procedure for detection of virus replication in settings for high content screening with automated fluorescence microscopy and single cell analysis. Our data show that the infection signal was dependent on multiplicity, time and temperature of infection, and the mabJ2-positive cell numbers correlated with viral titres determined in single-step growth curves. The mabJ2 infection assay was adapted to determine the efficacy of anti-viral compounds and small interfering RNAs (siRNAs) blocking enterovirus infections. Conclusions We report a broadly applicable, rapid protocol to measure infection of cultured cells with enteroviruses at single cell resolution. This assay can be applied to a wide range of plus-sense RNA viruses, and hence allows comparative studies of viral infection biology without dedicated reagents or procedures. This protocol also allows direct comparison of results from small compound or siRNA infection screens for different serotypes.
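The single-cell readout described above can be sketched as a simple gating step: classify each cell as infected when its mabJ2 (dsRNA) signal exceeds a threshold derived from uninfected control cells, then report the infected fraction per well. The threshold rule and the mock intensity values below are illustrative assumptions, not the published gating procedure.

```python
import numpy as np

def infected_fraction(cell_intensities, background, k=3.0):
    """Classify cells as infected when their mean mabJ2 signal exceeds
    the background mean + k standard deviations (an assumed gating rule,
    not the paper's published procedure)."""
    mu, sigma = np.mean(background), np.std(background)
    threshold = mu + k * sigma
    infected = np.asarray(cell_intensities) > threshold
    return infected.mean(), threshold

# Mock per-cell intensities: mock-infected wells define the background model.
rng = np.random.default_rng(0)
background = rng.normal(100, 10, 500)              # uninfected control cells
cells = np.concatenate([rng.normal(100, 10, 70),   # uninfected
                        rng.normal(300, 30, 30)])  # dsRNA-positive
frac, thr = infected_fraction(cells, background)
print(f"infected fraction: {frac:.2f} (threshold {thr:.1f})")
```

The same per-well fraction could then be plotted against multiplicity or time of infection to reproduce the dose- and time-dependence the authors report.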

  2. Benzyl butyl phthalate promotes adipogenesis in 3T3-L1 preadipocytes: A High Content Cellomics and metabolomic analysis.

    Science.gov (United States)

    Yin, Lei; Yu, Kevin Shengyang; Lu, Kun; Yu, Xiaozhong

    2016-04-01

    Benzyl butyl phthalate (BBP) is known to induce developmental and reproductive toxicity. However, its association with dysregulation of adipogenesis has been poorly investigated. The present study aimed to examine the effect of BBP on adipogenesis and to elucidate the underlying mechanisms using 3T3-L1 cells. The capacity of BBP to promote adipogenesis was evaluated by multiple staining approaches combined with a High Content Cellomics analysis. The dynamic changes of adipogenic regulatory genes and proteins were examined, and the metabolite profile was identified using GC/MS-based metabolomic analysis. The High Content analysis showed that BBP, in comparison with bisphenol A (BPA), a known environmental obesogen, increased lipid droplet accumulation in a similar dose-dependent manner. However, the size of the lipid droplets in BBP-treated cells was significantly larger than that of droplets in cells treated with BPA. BBP significantly induced mRNA expression of the transcription factors C/EBPα and PPARγ, their downstream genes, and numerous adipogenic proteins in a dose- and time-dependent manner. Furthermore, GC/MS metabolomic analysis revealed that BBP exposure perturbed the metabolic profiles associated with glyceroneogenesis and fatty acid synthesis. Altogether, our study clearly demonstrates that BBP promoted the differentiation of 3T3-L1 cells through activation of the adipogenic pathway and metabolic disturbance.

  3. Cell Painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes.

    Science.gov (United States)

    Bray, Mark-Anthony; Singh, Shantanu; Han, Han; Davis, Chadwick T; Borgeson, Blake; Hartland, Cathy; Kost-Alimova, Maria; Gustafsdottir, Sigrun M; Gibson, Christopher C; Carpenter, Anne E

    2016-09-01

    In morphological profiling, quantitative data are extracted from microscopy images of cells to identify biologically relevant similarities and differences among samples based on these profiles. This protocol describes the design and execution of experiments using Cell Painting, a morphological profiling assay that multiplexes six fluorescent dyes, imaged in five channels, to reveal eight broadly relevant cellular components or organelles. Cells are plated in multiwell plates, perturbed with the treatments to be tested, stained, fixed, and imaged on a high-throughput microscope. Next, automated image analysis software identifies individual cells and measures ∼1,500 morphological features (various measures of size, shape, texture, intensity, and so on) to produce a rich profile that is suitable for the detection of subtle phenotypes. Profiles of cell populations treated with different experimental perturbations can be compared to suit many goals, such as identifying the phenotypic impact of chemical or genetic perturbations, grouping compounds and/or genes into functional pathways, and identifying signatures of disease. Cell culture and image acquisition take 2 weeks; feature extraction and data analysis take an additional 1-2 weeks.
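The feature-extraction step described above can be illustrated with a toy version that measures a handful of per-cell features from a labeled segmentation mask. The real Cell Painting pipeline (typically CellProfiler) computes on the order of 1,500 features; the feature names and the toy image below are assumptions for illustration only.

```python
import numpy as np

def cell_features(labels, image):
    """Extract a minimal per-cell feature vector (area, intensity stats,
    bounding-box extent) from a labeled segmentation mask. A sketch of
    the measurement step; production pipelines measure far more."""
    feats = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:          # 0 = background
            continue
        mask = labels == cell_id
        ys, xs = np.nonzero(mask)
        area = mask.sum()
        bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        feats[int(cell_id)] = {
            "area": int(area),
            "mean_intensity": float(image[mask].mean()),
            "std_intensity": float(image[mask].std()),
            "extent": float(area / bbox_area),  # shape compactness proxy
        }
    return feats

# Toy 6x6 image with two labeled "cells"
labels = np.zeros((6, 6), int)
labels[1:3, 1:3] = 1          # 2x2 square cell
labels[3:6, 3:6] = 2          # 3x3 square cell
image = np.arange(36, dtype=float).reshape(6, 6)
f = cell_features(labels, image)
print(f[1]["area"], f[2]["area"])
```

Stacking such per-cell vectors across a well, and aggregating per treatment, yields the profiles that are then compared across perturbations.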

  4. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation.

    Science.gov (United States)

    Vega, Sebastián L; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence, parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined for distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised for either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined for distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative "imaging-derived" parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions.

  5. Establishing a High-content Analysis Method for Tubulin Polymerization to Evaluate Both the Stabilizing and Destabilizing Activities of Compounds.

    Science.gov (United States)

    Sum, Chi Shing; Nickischer, Debra; Lei, Ming; Weston, Andrea; Zhang, Litao; Schweizer, Liang

    2014-01-01

    Microtubules are important components of the cellular cytoskeleton that play roles in various cellular processes such as vesicular transport and spindle formation during mitosis. They are formed by an ordered organization of α-tubulin and β-tubulin hetero-polymers. Altering microtubule polymerization has been known to be the mechanism of action for a number of therapeutically important drugs including taxanes and epothilones. Traditional cell-based assays for tubulin-interacting compounds rely on their indirect effects on cell cycle and/or cell proliferation. Direct monitoring of compound effects on microtubules is required to dissect detailed mechanisms of action in a cellular setting. Here we report a high-content assay platform to monitor tubulin polymerization status by directly measuring the acute effects of drug candidates on the cellular tubulin network with the capability to dissect the mechanisms of action. This high-content analysis distinguishes in a quantitative manner between compounds that act as tubulin stabilizers versus those that are tubulin destabilizers. In addition, using a multiplex approach, we expanded this analysis to simultaneously monitor physiological cellular responses and associated cellular phenotypes.
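The stabilizer-versus-destabilizer readout reduces, in spirit, to comparing the polymerized-tubulin signal of treated cells against a vehicle control. The sketch below uses an illustrative 20% fold-change margin as the decision boundary; this cutoff and the example values are assumptions, not taken from the assay.

```python
def classify_tubulin_effect(treated, control, margin=0.2):
    """Label a compound 'stabilizer' if the polymerized-tubulin signal
    rises more than `margin` relative to vehicle control, 'destabilizer'
    if it falls by more than `margin`, else 'inactive'. The 20% margin
    is an illustrative cutoff."""
    fold = treated / control
    if fold > 1 + margin:
        return "stabilizer"
    if fold < 1 - margin:
        return "destabilizer"
    return "inactive"

print(classify_tubulin_effect(1.8, 1.0))   # paclitaxel-like response
print(classify_tubulin_effect(0.3, 1.0))   # nocodazole-like response
```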

  6. High content analysis of human fibroblast cell cultures after exposure to space radiation.

    Science.gov (United States)

    Dieriks, Birger; De Vos, Winnok; Meesen, Geert; Van Oostveldt, Kaat; De Meyer, Tim; Ghardi, Myriam; Baatout, Sarah; Van Oostveldt, Patrick

    2009-10-01

    Space travel imposes risks to human health, in large part by the increased radiation levels compared to those on Earth. To understand the effects of space radiation on humans, it is important to determine the underlying cellular mechanisms. While general dosimetry describes average radiation levels accurately, it says little about the actual physiological impact and does not provide biological information about individual cellular events. In addition, there is no information about the nature and magnitude of a systemic response through extra- and intercellular communication. To assess the stress response in human fibroblasts that were sent into space with the Foton-M3 mission, we have developed a pluralistic setup to measure DNA damage and inflammation response by combining global and local dosimetry, image cytometry and multiplex array technology, thereby maximizing the scientific output. We were able to demonstrate a significant increase in DNA double-strand breaks, determined by a twofold increase of the gamma-H2AX signal at the level of the single cell and a threefold up-regulation of the soluble signal proteins CCL5, IL-6, IL-8, beta-2 microglobulin and EN-RAGE, which are key players in the process of inflammation, in the growth medium.

  7. Using a microfluidic device for high-content analysis of cell signaling.

    Science.gov (United States)

    Cheong, Raymond; Wang, Chiaochun Joanne; Levchenko, Andre

    2009-06-16

    Quantitative analysis and understanding of signaling networks require measurements of the location and activities of key proteins over time, at the level of single cells, in response to various perturbations. Microfluidic devices enable such analyses to be conducted in a high-throughput and in a highly controlled manner. We describe in detail how to design and use a microfluidic device to perform such information-rich experiments.

  8. High-Content Movement Analysis as a Diagnostic Tool in C. elegans

    Science.gov (United States)

    Winter, Peter; Lancichinetti, Andrea; Krevitt, Leah; Amaral, Luis; Morimoto, Rick

    2013-03-01

    Many neurodegenerative diseases manifest themselves through a loss of motor control that gives us information about the underlying disease. This loss of coordination is observed in humans and in the model organisms used to study neurodegeneration. In Caenorhabditis elegans, there is an extensive genetic library of strains that lack functional neuronal signaling pathways or express proteins associated with neurodegenerative diseases. While most of these strains have decreased motility or are paralyzed, relatively few have been screened for more subtle changes in motor control such as stiffness, twitching, or other changes in behavior. We use high-resolution position and posture data to automatically analyze the movement of worms from different genetic backgrounds and characterize 14 movement characteristics. By creating a quantitative mapping between this movement characterization and an online database of gene annotation, gene expression, and anatomy, we aim to predict a likely set of cellular and molecular disruptions. This work provides a proof of concept for the use of detailed movement analysis to uncover novel disruptions in certain motor control processes. Knowledge of the molecular origin of these disruptions, provided by our understanding of C. elegans genetics and physiology, could lead to new diagnostic and therapeutic targets for neurodegenerative disease.
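Two of the simpler movement characteristics one could derive from such position data, mean centroid speed and reversal frequency, can be computed along these lines. The metric definitions and the 90° reversal threshold are illustrative assumptions, not the paper's 14 published metrics.

```python
import math

def movement_stats(track, dt=1.0):
    """Compute mean centroid speed and the fraction of frames in which
    the worm reverses direction (heading change > 90 degrees) from an
    (x, y) centroid track. Illustrative metrics only."""
    speeds, headings = [], []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy) / dt)
        headings.append(math.atan2(dy, dx))
    reversals = 0
    for h0, h1 in zip(headings, headings[1:]):
        # wrap heading difference into [-pi, pi] before comparing
        dh = abs((h1 - h0 + math.pi) % (2 * math.pi) - math.pi)
        if dh > math.pi / 2:
            reversals += 1
    mean_speed = sum(speeds) / len(speeds)
    return mean_speed, reversals / max(len(headings) - 1, 1)

track = [(0, 0), (1, 0), (2, 0), (1, 0), (0, 0)]  # forward, then reverse
speed, rev_frac = movement_stats(track)
print(speed, rev_frac)
```

Feature vectors of this kind, computed per strain, are what the quantitative mapping to gene-annotation databases would operate on.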

  9. Screening of siRNA nanoparticles for delivery to airway epithelial cells using high-content analysis

    LENUS (Irish Health Repository)

    Hibbitts, Alan

    2011-08-01

    Aims: Delivery of siRNA to the lungs via inhalation offers a unique opportunity to develop a new treatment paradigm for a range of respiratory conditions. However, progress has been greatly hindered by safety and delivery issues. This study developed a high-throughput method for screening novel nanotechnologies for pulmonary siRNA delivery. Methodology: Following physicochemical analysis, the ability of PEI–PEG–siRNA nanoparticles to facilitate siRNA delivery was determined using high-content analysis (HCA) in Calu-3 cells. Results obtained from HCA were validated using confocal microscopy. Finally, cytotoxicity of the PEI–PEG–siRNA particles was analyzed by HCA using the Cellomics® multiparameter cytotoxicity assay. Conclusion: PEI–PEG–siRNA nanoparticles facilitated increased siRNA uptake and luciferase knockdown in Calu-3 cells compared with PEI–siRNA.

  10. Isolation and culture of adult human microglia within mixed glial cultures for functional experimentation and high-content analysis.

    Science.gov (United States)

    Smith, Amy M; Gibbons, Hannah M; Lill, Claire; Faull, Richard L M; Dragunow, Mike

    2013-01-01

    Microglia are thought to be involved in diseases of the adult human brain as well as normal aging processes. While neonatal and rodent microglia are often used in studies investigating microglial function, there are important differences between rodent microglia and their adult human counterparts. Human brain tissue provides a unique and valuable tool for microglial cell and molecular biology. Routine protocols can now enable use of this culture method in many laboratories. Detailed protocols and advice for culture of human brain microglia are provided here. We demonstrate the protocol for culturing human adult microglia within a mixed glial culture and use a phagocytosis assay as an example of the functional studies possible with these cells as well as a high-content analysis method of quantification.

  11. High-content analysis of factors affecting gold nanoparticle uptake by neuronal and microglial cells in culture.

    Science.gov (United States)

    Stojiljković, A; Kuehni-Boghenbor, K; Gaschen, V; Schüpbach, G; Mevissen, M; Kinnear, C; Möller, A-M; Stoffel, M H

    2016-09-22

    Owing to their ubiquitous distribution, expected beneficial effects and suspected adverse effects, nanoparticles are viewed as a double-edged sword, necessitating a better understanding of their interactions with tissues and organisms. Thus, the goals of the present study were to develop and present a method for generating quantitative data on nanoparticle entry into cells in culture, and to demonstrate the usefulness of this approach by analyzing, as an example, the impact of size, charge and various proteinaceous coatings on particle internalization. N9 microglial cells and both undifferentiated and differentiated SH-SY5Y neuroblastoma cells were exposed to customized gold nanoparticles. After silver enhancement, the particles were visualized by epipolarization microscopy and analysed by high-content analysis. The value of this approach was substantiated by assessing the impact of various parameters on nanoparticle uptake. Uptake was higher in microglial cells than in neuronal cells. Only microglial cells showed a distinct size preference, preferring particles with a diameter of 80 nm. Positive surface charge had the greatest impact on particle uptake. Coating with bovine serum albumin, fetuin or protein G significantly increased particle internalization in microglial cells but not in neuronal cells. Coating with wheat germ agglutinin increased particle uptake in both N9 and differentiated SH-SY5Y cells but not in undifferentiated SH-SY5Y cells. Furthermore, internalization was shown to be an active process, and indicators of caspase-dependent apoptosis revealed that gold nanoparticles did not have any cytotoxic effects. The present study thus demonstrates the suitability of gold nanoparticles and high-content analysis for assessing numerous variables in a stringently quantitative and statistically significant manner. Furthermore, the results presented herein showcase the feasibility of specifically targeting nanoparticles to distinct cell types.

  12. High Content Analysis Provides Mechanistic Insights on the Pathways of Toxicity Induced by Amine-Modified Polystyrene Nanoparticles

    Science.gov (United States)

    Anguissola, Sergio; Garry, David; Salvati, Anna; O'Brien, Peter J.; Dawson, Kenneth A.

    2014-01-01

    The fast-paced development of nanotechnology needs the support of effective safety testing. We have developed a screening platform that simultaneously measures several cellular parameters for exposure to various concentrations of nanoparticles (NPs). Cell lines representative of different organ cell types, including lung, endothelium, liver, kidney, macrophages, glia, and neuronal cells, were exposed to 50 nm amine-modified polystyrene (PS-NH2) NPs previously reported to induce apoptosis and to 50 nm sulphonated and carboxyl-modified polystyrene NPs that were reported to be silent. All cell lines apart from Raw 264.7 executed apoptosis in response to PS-NH2 NPs, showing specific sequences of EC50 thresholds; lysosomal acidification was the most sensitive parameter. Loss of mitochondrial membrane potential and plasma membrane integrity measured by High Content Analysis proved comparably sensitive to the equivalent OECD-recommended assays, allowing increased output. Analysis of the acidic compartments revealed good correlation between size/fluorescence intensity and dose of PS-NH2 NPs applied; moreover, steatosis and phospholipidosis were observed, consistent with the lysosomal alterations revealed by LysoTracker Green. Similar responses were observed when comparing astrocytoma cells with primary astrocytes. We have established a platform providing mechanistic insights on the response to exposure to nanoparticles. Such a platform holds great potential for in vitro screening of nanomaterials in high-throughput format. PMID:25238162
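The EC50 thresholds mentioned above can be estimated per parameter from a dose-response curve. The sketch below uses simple log-linear interpolation at the half-maximal response, with hypothetical doses and response fractions; the study's actual curve-fitting procedure is not specified here.

```python
import numpy as np

def ec50(doses, responses):
    """Estimate the EC50 by log-linear interpolation of the dose at
    half-maximal response. Assumes the response is monotonic and crosses
    half-maximum within the tested dose range."""
    doses = np.asarray(doses, float)
    r = np.asarray(responses, float)
    half = (r.min() + r.max()) / 2.0
    idx = np.argmax(r >= half)          # first dose at/above half-maximum
    x0, x1 = np.log10(doses[idx - 1]), np.log10(doses[idx])
    y0, y1 = r[idx - 1], r[idx]
    return 10 ** (x0 + (half - y0) * (x1 - x0) / (y1 - y0))

# Hypothetical NP dose-response for one parameter (e.g. lysosomal signal)
doses = [1, 3, 10, 30, 100]                   # concentrations, illustrative
responses = [0.02, 0.05, 0.50, 0.90, 0.98]    # fraction of cells responding
print(f"EC50 = {ec50(doses, responses):.1f}")
```

Ranking such EC50 values across parameters is what reveals which readout (here, lysosomal acidification) responds at the lowest dose.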

  13. High content analysis provides mechanistic insights on the pathways of toxicity induced by amine-modified polystyrene nanoparticles.

    Science.gov (United States)

    Anguissola, Sergio; Garry, David; Salvati, Anna; O'Brien, Peter J; Dawson, Kenneth A

    2014-01-01

    The fast-paced development of nanotechnology needs the support of effective safety testing. We have developed a screening platform that simultaneously measures several cellular parameters for exposure to various concentrations of nanoparticles (NPs). Cell lines representative of different organ cell types, including lung, endothelium, liver, kidney, macrophages, glia, and neuronal cells, were exposed to 50 nm amine-modified polystyrene (PS-NH2) NPs previously reported to induce apoptosis and to 50 nm sulphonated and carboxyl-modified polystyrene NPs that were reported to be silent. All cell lines apart from Raw 264.7 executed apoptosis in response to PS-NH2 NPs, showing specific sequences of EC50 thresholds; lysosomal acidification was the most sensitive parameter. Loss of mitochondrial membrane potential and plasma membrane integrity measured by High Content Analysis proved comparably sensitive to the equivalent OECD-recommended assays, allowing increased output. Analysis of the acidic compartments revealed good correlation between size/fluorescence intensity and dose of PS-NH2 NPs applied; moreover, steatosis and phospholipidosis were observed, consistent with the lysosomal alterations revealed by LysoTracker Green. Similar responses were observed when comparing astrocytoma cells with primary astrocytes. We have established a platform providing mechanistic insights on the response to exposure to nanoparticles. Such a platform holds great potential for in vitro screening of nanomaterials in high-throughput format.

  14. A versatile, bar-coded nuclear marker/reporter for live cell fluorescent and multiplexed high content imaging.

    Directory of Open Access Journals (Sweden)

    Irina Krylova

    Full Text Available The screening of large numbers of compounds or siRNAs is a mainstay of both academic and pharmaceutical research. Most screens test those interventions against a single biochemical or cellular output whereas recording multiple complementary outputs may be more biologically relevant. High throughput, multi-channel fluorescence microscopy permits multiple outputs to be quantified in specific cellular subcompartments. However, the number of distinct fluorescent outputs available remains limited. Here, we describe a cellular bar-code technology in which multiple cell-based assays are combined in one well after which each assay is distinguished by fluorescence microscopy. The technology uses the unique fluorescent properties of assay-specific markers comprised of distinct combinations of different 'red' fluorescent proteins sandwiched around a nuclear localization signal. The bar-code markers are excited by a common wavelength of light but distinguished ratiometrically by their differing relative fluorescence in two emission channels. Targeting the bar-code to cell nuclei enables individual cells expressing distinguishable markers to be readily separated by standard image analysis programs. We validated the method by showing that the unique responses of different cell-based assays to specific drugs are retained when three assays are co-plated and separated by the bar-code. Based upon those studies, we discuss a roadmap in which even more assays may be combined in a well. The ability to analyze multiple assays simultaneously will enable screens that better identify, characterize and distinguish hits according to multiple biologically or clinically relevant criteria. These capabilities also enable the re-creation of complex mixtures of cell types that is emerging as a central area of interest in many fields.
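The ratiometric scheme described above can be sketched as follows: each nucleus is assigned to a bar-code class from the ratio of its intensities in the two 'red' emission channels. The bin boundaries and mock intensities below are illustrative assumptions, not the published calibration.

```python
def assign_barcode(red1, red2, bins=((0.0, 0.5), (0.5, 1.5), (1.5, 10.0))):
    """Assign a nucleus to a bar-code class from the ratio of its two
    emission-channel intensities. Bin boundaries are illustrative; a real
    assay would calibrate them from cells expressing each marker alone."""
    ratio = red1 / red2
    for marker, (lo, hi) in enumerate(bins):
        if lo <= ratio < hi:
            return marker, ratio
    return None, ratio

# Three mock nuclei expressing different marker combinations
for ch1, ch2 in [(20, 100), (80, 80), (300, 100)]:
    marker, ratio = assign_barcode(ch1, ch2)
    print(marker, round(ratio, 2))
```

Because the assignment uses a ratio rather than absolute brightness, it is robust to cell-to-cell variation in overall marker expression, which is the point of the ratiometric design.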

  15. Multiparametric High Content Analysis for assessment of neurotoxicity in differentiated neuronal cell lines and human embryonic stem cell-derived neurons.

    Science.gov (United States)

    Wilson, Melinda S; Graham, James R; Ball, Andrew J

    2014-05-01

    The potential for adverse neurotoxic reactions in response to therapeutics and environmental hazards continues to prompt development of novel cell-based assays to determine neurotoxic risk. A challenge remains to characterize and understand differences between assays, and between neuronal cellular models, in their responses to neurotoxicants if scientists are to determine the optimal model, or combination of models, for neurotoxicity screening. Most studies to date have focused on developmental neurotoxicity applications. This study reports the development of a robust multiparameter High Content Analysis (HCA) assay for neurotoxicity screening in three differentiated neuronal cell models - SH-SY5Y, PC12 and human embryonic stem cell-derived hN2™ cells. Using a multiplexed detection reagent panel (Hoechst nuclear stain; antibodies against βIII-Tubulin and phosphorylated neurofilament subunit H; and MitoTracker® Red CMXRos), a multiparametric HCA assay was developed and used to characterize a test set of 36 chemicals. The HCA data generated were compared to data generated using MTT and LDH assays under the same assay conditions. The data showed that multiparametric High Content Analysis of differentiated neuronal cells is feasible and represents a highly effective method for obtaining large quantities of robust data on the neurotoxic effects of compounds, compared with cytotoxicity assays such as MTT and LDH. Significant differences were observed between the responses to compounds across the three cellular models tested, illustrating the heterogeneity in responses to neurotoxicants across different cell types. This study provides data strongly supporting the use of cellular imaging as a tool for neurotoxicity assessment in differentiated neuronal cells, and provides novel insights into the neurotoxic effects of a test set of compounds upon differentiated neuronal cell lines and human embryonic stem cell-derived neurons.

  16. Differential phosphorylation of perilipin 1A at the initiation of lipolysis revealed by novel monoclonal antibodies and high content analysis.

    Science.gov (United States)

    McDonough, Patrick M; Maciejewski-Lenoir, Dominique; Hartig, Sean M; Hanna, Rita A; Whittaker, Ross; Heisel, Andrew; Nicoll, James B; Buehrer, Benjamin M; Christensen, Kurt; Mancini, Maureen G; Mancini, Michael A; Edwards, Dean P; Price, Jeffrey H

    2013-01-01

    Lipolysis in adipocytes is regulated by phosphorylation of lipid droplet-associated proteins, including perilipin 1A and hormone-sensitive lipase (HSL). Perilipin 1A is potentially phosphorylated by cAMP(adenosine 3',5'-cyclic monophosphate)-dependent protein kinase (PKA) on several sites, including conserved C-terminal residues, serine 497 (PKA-site 5) and serine 522 (PKA-site 6). To characterize perilipin 1A phosphorylation, novel monoclonal antibodies were developed, which selectively recognize perilipin 1A phosphorylation at PKA-site 5 and PKA-site 6. Utilizing these novel antibodies, as well as antibodies selectively recognizing HSL phosphorylation at serine 563 or serine 660, we used high content analysis to examine the phosphorylation of perilipin 1A and HSL in adipocytes exposed to lipolytic agents. We found that perilipin PKA-site 5 and HSL-serine 660 were phosphorylated to a similar extent in response to forskolin (FSK) and L-γ-melanocyte stimulating hormone (L-γ-MSH). In contrast, perilipin PKA-site 6 and HSL-serine 563 were phosphorylated more slowly and L-γ-MSH was a stronger agonist for these sites compared to FSK. When a panel of lipolytic agents was tested, including multiple concentrations of isoproterenol, FSK, and L-γ-MSH, the pattern of results was virtually identical for perilipin PKA-site 5 and HSL-serine 660, whereas a distinct pattern was observed for perilipin PKA-site 6 and HSL-serine 563. Notably, perilipin PKA-site 5 and HSL-serine 660 feature two arginine residues upstream from the phospho-acceptor site, which confers high affinity for PKA, whereas perilipin PKA-site 6 and HSL-serine 563 feature only a single arginine. Thus, we suggest perilipin 1A and HSL are differentially phosphorylated in a similar manner at the initiation of lipolysis and arginine residues near the target serines may influence this process.

  17. Differential phosphorylation of perilipin 1A at the initiation of lipolysis revealed by novel monoclonal antibodies and high content analysis.

    Directory of Open Access Journals (Sweden)

    Patrick M McDonough

    Full Text Available Lipolysis in adipocytes is regulated by phosphorylation of lipid droplet-associated proteins, including perilipin 1A and hormone-sensitive lipase (HSL). Perilipin 1A is potentially phosphorylated by cAMP (adenosine 3',5'-cyclic monophosphate)-dependent protein kinase (PKA) on several sites, including conserved C-terminal residues, serine 497 (PKA-site 5) and serine 522 (PKA-site 6). To characterize perilipin 1A phosphorylation, novel monoclonal antibodies were developed, which selectively recognize perilipin 1A phosphorylation at PKA-site 5 and PKA-site 6. Utilizing these novel antibodies, as well as antibodies selectively recognizing HSL phosphorylation at serine 563 or serine 660, we used high content analysis to examine the phosphorylation of perilipin 1A and HSL in adipocytes exposed to lipolytic agents. We found that perilipin PKA-site 5 and HSL-serine 660 were phosphorylated to a similar extent in response to forskolin (FSK) and L-γ-melanocyte stimulating hormone (L-γ-MSH). In contrast, perilipin PKA-site 6 and HSL-serine 563 were phosphorylated more slowly, and L-γ-MSH was a stronger agonist for these sites compared to FSK. When a panel of lipolytic agents was tested, including multiple concentrations of isoproterenol, FSK, and L-γ-MSH, the pattern of results was virtually identical for perilipin PKA-site 5 and HSL-serine 660, whereas a distinct pattern was observed for perilipin PKA-site 6 and HSL-serine 563. Notably, perilipin PKA-site 5 and HSL-serine 660 feature two arginine residues upstream from the phospho-acceptor site, which confers high affinity for PKA, whereas perilipin PKA-site 6 and HSL-serine 563 feature only a single arginine. Thus, we suggest perilipin 1A and HSL are differentially phosphorylated in a similar manner at the initiation of lipolysis, and arginine residues near the target serines may influence this process.

  18. Differential Phosphorylation of Perilipin 1A at the Initiation of Lipolysis Revealed by Novel Monoclonal Antibodies and High Content Analysis

    Science.gov (United States)

    McDonough, Patrick M.; Hanna, Rita A.; Whittaker, Ross; Heisel, Andrew; Nicoll, James B.; Buehrer, Benjamin M.; Christensen, Kurt; Mancini, Maureen G.; Mancini, Michael A.; Edwards, Dean P.; Price, Jeffrey H.

    2013-01-01

    Lipolysis in adipocytes is regulated by phosphorylation of lipid droplet-associated proteins, including perilipin 1A and hormone-sensitive lipase (HSL). Perilipin 1A is potentially phosphorylated by cAMP(adenosine 3′,5′-cyclic monophosphate)-dependent protein kinase (PKA) on several sites, including conserved C-terminal residues, serine 497 (PKA-site 5) and serine 522 (PKA-site 6). To characterize perilipin 1A phosphorylation, novel monoclonal antibodies were developed, which selectively recognize perilipin 1A phosphorylation at PKA-site 5 and PKA-site 6. Utilizing these novel antibodies, as well as antibodies selectively recognizing HSL phosphorylation at serine 563 or serine 660, we used high content analysis to examine the phosphorylation of perilipin 1A and HSL in adipocytes exposed to lipolytic agents. We found that perilipin PKA-site 5 and HSL-serine 660 were phosphorylated to a similar extent in response to forskolin (FSK) and L-γ-melanocyte stimulating hormone (L-γ-MSH). In contrast, perilipin PKA-site 6 and HSL-serine 563 were phosphorylated more slowly and L-γ-MSH was a stronger agonist for these sites compared to FSK. When a panel of lipolytic agents was tested, including multiple concentrations of isoproterenol, FSK, and L-γ-MSH, the pattern of results was virtually identical for perilipin PKA-site 5 and HSL-serine 660, whereas a distinct pattern was observed for perilipin PKA-site 6 and HSL-serine 563. Notably, perilipin PKA-site 5 and HSL-serine 660 feature two arginine residues upstream from the phospho-acceptor site, which confers high affinity for PKA, whereas perilipin PKA-site 6 and HSL-serine 563 feature only a single arginine. Thus, we suggest perilipin 1A and HSL are differentially phosphorylated in a similar manner at the initiation of lipolysis and arginine residues near the target serines may influence this process. PMID:23405163

  19. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
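
The segment-then-quantify workflow this review describes (segment cells from background, then extract numerical features per cell) can be sketched with a minimal, self-contained example. The global threshold, flood-fill segmentation, and toy image below are illustrative assumptions, not the method of any particular HCS platform:

```python
# Toy segment-then-quantify pipeline: global threshold -> connected
# components (4-connectivity flood fill) -> per-cell area and intensity.
from collections import deque

def segment_and_quantify(image, threshold):
    """Return one feature dict per connected bright object in `image`."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    features = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:  # flood-fill one "cell"
                    cy, cx = queue.popleft()
                    pixels.append(image[cy][cx])
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           image[ny][nx] > threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                features.append({"area": len(pixels),
                                 "mean_intensity": sum(pixels) / len(pixels)})
    return features

# Synthetic image: two bright rectangles ("cells") on a dark background.
img = [[0.0] * 40 for _ in range(40)]
for y in range(5, 15):
    for x in range(5, 15):
        img[y][x] = 200.0
for y in range(25, 35):
    for x in range(20, 34):
        img[y][x] = 120.0

cells = segment_and_quantify(img, threshold=50)
print(len(cells))  # 2 cells segmented
```

Real screens replace the global threshold with robust algorithms (e.g. Otsu thresholding or seeded watershed) and extract hundreds of features per cell, which is what motivates the high-dimensional machine learning step discussed in the review.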

  20. HC StratoMineR: A web-based tool for the rapid analysis of high content datasets

    NARCIS (Netherlands)

    Omta, W.; Heesbeen, R. van; Pagliero, R.; Velden, L. van der; Lelieveld, D.; Nellen, M.; Kramer, M.; Yeong, M.; Saeidi, A.; Medema, R.; Spruit, M.; Brinkkemper, S.; Klumperman, J.; Egan, D.

    2016-01-01

    High-content screening (HCS) can generate large multidimensional datasets and when aligned with the appropriate data mining tools, it can yield valuable insights into the mechanism of action of bioactive molecules. However, easy-to-use data mining tools are not widely available, with the result that

  2. A high-content image-based method for quantitatively studying context-dependent cell population dynamics.

    Science.gov (United States)

    Garvey, Colleen M; Spiller, Erin; Lindsay, Danika; Chiang, Chun-Te; Choi, Nathan C; Agus, David B; Mallick, Parag; Foo, Jasmine; Mumenthaler, Shannon M

    2016-01-01

    Tumor progression results from a complex interplay between cellular heterogeneity, treatment response, microenvironment and heterocellular interactions. Existing approaches to characterize this interplay suffer from an inability to distinguish between multiple cell types, often lack environmental context, and are unable to perform multiplex phenotypic profiling of cell populations. Here we present a high-throughput platform for characterizing, with single-cell resolution, the dynamic phenotypic responses (i.e. morphology changes, proliferation, apoptosis) of heterogeneous cell populations both during standard growth and in response to multiple, co-occurring selective pressures. The speed of this platform enables a thorough investigation of the impacts of diverse selective pressures including genetic alterations, therapeutic interventions, heterocellular components and microenvironmental factors. The platform has been applied to both 2D and 3D culture systems and readily distinguishes between (1) cytotoxic versus cytostatic cellular responses; and (2) changes in morphological features over time and in response to perturbation. These important features can directly influence tumor evolution and clinical outcome. Our image-based approach provides a deeper insight into the cellular dynamics and heterogeneity of tumors (or other complex systems), with reduced reagents and time, offering advantages over traditional biological assays.

  3. High-content, high-throughput analysis of cell cycle perturbations induced by the HSP90 inhibitor XL888.

    Directory of Open Access Journals (Sweden)

    Susan K Lyman

    Full Text Available BACKGROUND: Many proteins that are dysregulated or mutated in cancer cells rely on the molecular chaperone HSP90 for their proper folding and activity, which has led to considerable interest in HSP90 as a cancer drug target. The diverse array of HSP90 client proteins encompasses oncogenic drivers, cell cycle components, and a variety of regulatory factors, so inhibition of HSP90 perturbs multiple cellular processes, including mitogenic signaling and cell cycle control. Although many reports have investigated HSP90 inhibition in the context of the cell cycle, no large-scale studies have examined potential correlations between cell genotype and the cell cycle phenotypes of HSP90 inhibition. METHODOLOGY/PRINCIPAL FINDINGS: To address this question, we developed a novel high-content, high-throughput cell cycle assay and profiled the effects of two distinct small molecule HSP90 inhibitors (XL888 and 17-AAG [17-allylamino-17-demethoxygeldanamycin]) in a large, genetically diverse panel of cancer cell lines. The cell cycle phenotypes of both inhibitors were strikingly similar and fell into three classes: accumulation in M-phase, G2-phase, or G1-phase. Accumulation in M-phase was the most prominent phenotype and notably, was also correlated with TP53 mutant status. We additionally observed unexpected complexity in the response of the cell cycle-associated client PLK1 to HSP90 inhibition, and we suggest that inhibitor-induced PLK1 depletion may contribute to the striking metaphase arrest phenotype seen in many of the M-arrested cell lines. CONCLUSIONS/SIGNIFICANCE: Our analysis of the cell cycle phenotypes induced by HSP90 inhibition in 25 cancer cell lines revealed that the phenotypic response was highly dependent on cellular genotype as well as on the concentration of HSP90 inhibitor and the time of treatment. M-phase arrest correlated with the presence of TP53 mutations, while G2 or G1 arrest was more commonly seen in cells bearing wt TP53. We draw

  4. Cytoskeletal re-arrangement in TGF-β1-induced alveolar epithelial-mesenchymal transition studied by atomic force microscopy and high-content analysis.

    Science.gov (United States)

    Buckley, Stephen T; Medina, Carlos; Davies, Anthony M; Ehrhardt, Carsten

    2012-04-01

    Epithelial-mesenchymal transition (EMT) is closely implicated in the pathogenesis of idiopathic pulmonary fibrosis. Associated with this phenotypic transition is the acquisition of an elongated cell morphology and establishment of stress fibers. The extent to which these EMT-associated changes influence cellular mechanics is unclear. We assessed the biomechanical properties of alveolar epithelial cells (A549) following exposure to TGF-β1. Using atomic force microscopy, changes in cell stiffness and surface membrane features were determined. Stimulation with TGF-β1 gave rise to a significant increase in stiffness, which was augmented by a collagen I matrix. Additionally, TGF-β1-treated cells exhibited a rougher surface profile with notable protrusions. Simultaneous quantitative examination of the morphological attributes of stimulated cells using an image-based high-content analysis system revealed dramatic alterations in cell shape, F-actin content and distribution. Together, these investigations point to a strong correlation between the cytoskeletal-associated cellular architecture and the mechanical dynamics of alveolar epithelial cells undergoing EMT. From the Clinical Editor: Epithelial-mesenchymal transition is implicated in the pathogenesis of pulmonary fibrosis. Using atomic force microscopy, the authors demonstrate a strong correlation between the cytoskeletal-associated cellular architecture and the mechanical dynamics of alveolar epithelial cells undergoing mesenchymal transition.

  5. A Novel High Content Imaging-Based Screen Identifies the Anti-Helminthic Niclosamide as an Inhibitor of Lysosome Anterograde Trafficking and Prostate Cancer Cell Invasion.

    Directory of Open Access Journals (Sweden)

    Magdalena L Circu

    Full Text Available Lysosome trafficking plays a significant role in tumor invasion, a key event for the development of metastasis. Previous studies from our laboratory have demonstrated that the anterograde (outward) movement of lysosomes to the cell surface in response to certain tumor microenvironment stimuli, such as hepatocyte growth factor (HGF) or acidic extracellular pH (pHe), increases cathepsin B secretion and tumor cell invasion. Anterograde lysosome trafficking depends on sodium-proton exchanger activity and can be reversed by blocking these ion pumps with Troglitazone or EIPA. Since these drugs cannot be advanced into the clinic due to toxicity, we have designed a high-content assay to discover drugs that block peripheral lysosome trafficking with the goal of identifying novel drugs that inhibit tumor cell invasion. An automated high-content imaging system (Cellomics) was used to measure the position of lysosomes relative to the nucleus. Among a total of 2210 repurposed and natural product drugs screened, 18 "hits" were identified. One of the compounds identified as an anterograde lysosome trafficking inhibitor was niclosamide, a marketed human anti-helminthic drug. Further studies revealed that niclosamide blocked acidic pHe-, HGF-, and epidermal growth factor (EGF)-induced anterograde lysosome redistribution, protease secretion, motility, and invasion of DU145 castrate resistant prostate cancer cells at clinically relevant concentrations. In an effort to identify the mechanism by which niclosamide prevented anterograde lysosome movement, we found that this drug exhibited no significant effect on the level of ATP, microtubules or actin filaments, and had minimal effect on the PI3K and MAPK pathways. Niclosamide collapsed intralysosomal pH without disruption of the lysosome membrane, while bafilomycin, an agent that impairs lysosome acidification, was also found to induce JLA in our model. Taken together, these data suggest that niclosamide promotes
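
The readout at the heart of this screen, where lysosomes sit relative to the nucleus, reduces to a per-cell distance statistic. A hedged sketch under simplified assumptions (centroids already extracted from the images; the function and coordinates are hypothetical, not the Cellomics API):

```python
# Mean lysosome-to-nucleus-centroid distance: an anterograde (outward)
# shift increases this statistic; a trafficking inhibitor keeps it low.
import math

def mean_lysosome_distance(nucleus_centroid, lysosome_positions):
    cx, cy = nucleus_centroid
    dists = [math.hypot(x - cx, y - cy) for x, y in lysosome_positions]
    return sum(dists) / len(dists)

# Hypothetical per-cell centroids (pixel coordinates).
perinuclear = [(11, 10), (9, 10), (10, 11), (10, 9)]      # clustered near nucleus
peripheral  = [(30, 10), (10, 30), (-15, 10), (10, -20)]  # dispersed outward
print(mean_lysosome_distance((10, 10), perinuclear))  # 1.0
print(mean_lysosome_distance((10, 10), peripheral))   # 23.75
```

A small value of this statistic after a trafficking stimulus would flag a compound, such as niclosamide here, as blocking anterograde lysosome movement.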

  6. Pattern recognition in pulmonary tuberculosis defined by high content peptide microarray chip analysis representing 61 proteins from M. tuberculosis.

    Directory of Open Access Journals (Sweden)

    Simani Gaseitsiwe

    Full Text Available BACKGROUND: Serum antibody-based target identification has been used to identify tumor-associated antigens (TAAs) for development of anti-cancer vaccines. A similar approach can be helpful to identify biologically relevant and clinically meaningful targets in M. tuberculosis (MTB) infection for diagnosis or TB vaccine development in clinically well defined populations. METHOD: We constructed a high-content peptide microarray with 61 M. tuberculosis proteins as linear 15 aa peptide stretches with 12 aa overlaps, resulting in 7446 individual peptide epitopes. Antibody profiling was carried out with serum from 34 individuals with active pulmonary TB and 35 healthy individuals in order to obtain an unbiased view of the MTB epitope recognition pattern. Quality data extraction was performed, and data sets were analyzed for significant differences and patterns predictive of TB+/-. FINDINGS: Three distinct patterns of IgG reactivity were identified: 89/7446 peptides were differentially recognized (in 34/34 TB+ patients and in 35/35 healthy individuals) and are highly predictive of the division into TB+ and TB-; other targets were exclusively recognized in all patients with TB (e.g. sigmaF) but not in any of the healthy individuals, and a third peptide set was recognized exclusively in healthy individuals (35/35) but not in TB+ patients. The segregation between TB+ and TB- does not cluster into specific recognition of distinct MTB proteins, but into specific peptide epitope 'hotspots' at different locations within the same protein. Antigen recognition pattern profiles in serum from TB+ patients from Armenia vs. patients recruited in Sweden showed that IgG-defined MTB epitopes are very similar in individuals with different genetic backgrounds. CONCLUSIONS: A uniform MTB IgG-epitope recognition pattern exists in pulmonary tuberculosis. Unbiased, high-content peptide microarray chip-based testing of clinically well-defined populations allows to visualize

  7. Characterizing the DNA Damage Response by Cell Tracking Algorithms and Cell Features Classification Using High-Content Time-Lapse Analysis.

    Directory of Open Access Journals (Sweden)

    Walter Georgescu

    Full Text Available Traditionally, the kinetics of DNA repair have been estimated using immunocytochemistry by labeling proteins involved in the DNA damage response (DDR) with fluorescent markers in a fixed cell assay. However, detailed knowledge of DDR dynamics across multiple cell generations cannot be obtained using a limited number of fixed cell time-points. Here we report on the dynamics of 53BP1 radiation-induced foci (RIF) across multiple cell generations using live cell imaging of non-malignant human mammary epithelial cells (MCF10A) expressing histone H2B-GFP and the DNA repair protein 53BP1-mCherry. Using automatic extraction of RIF imaging features and linear programming techniques, we were able to characterize detailed RIF kinetics for 24 hours before and 24 hours after exposure to low and high doses of ionizing radiation. High-content analysis at the single-cell level over hundreds of cells allows us to quantify precisely the dose dependence of 53BP1 protein production, RIF nuclear localization and RIF movement after exposure to X-rays. Using elastic registration techniques based on the nuclear pattern of individual cells, we could describe the motion of individual RIF precisely within the nucleus. We show that DNA repair occurs in a limited number of large domains, within which multiple small RIFs form, merge and/or resolve with random motion following normal diffusion law. Large foci formation is shown to occur mainly through the merging of smaller RIF rather than through growth of an individual focus. We estimate repair domain sizes of 7.5 to 11 µm² with a maximum number of ~15 domains per MCF10A cell. This work also highlights DDR events that are specific to doses larger than 1 Gy, such as rapid 53BP1 protein increase in the nucleus and foci diffusion rates that are significantly faster than for spontaneous foci movement.
We hypothesize that RIF merging reflects a "stressed" DNA repair process that has been taken outside physiological conditions when
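
The "normal diffusion law" invoked above is typically verified by checking that the mean squared displacement (MSD) of tracked foci grows linearly with lag time, as expected for Brownian motion. A toy one-dimensional sketch with hypothetical track values (the study itself tracked RIF within elastically registered nuclei):

```python
# MSD from a single track: for Brownian motion in 1D, MSD(lag) ~ 2*D*lag,
# i.e. linear in the lag time.
def msd(track, lag):
    steps = [(track[i + lag] - track[i]) ** 2 for i in range(len(track) - lag)]
    return sum(steps) / len(steps)

track = [0, 1, 0, 1, 2, 3, 2, 3, 4, 5]  # hypothetical focus positions per frame
print([msd(track, lag) for lag in (1, 2)])  # grows roughly linearly with lag
```

Sub- or super-linear MSD growth would instead indicate confined or directed motion, which is why the linear check supports the normal-diffusion conclusion.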

  8. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development....... The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  9. Characterization of HTT inclusion size, location, and timing in the zQ175 mouse model of Huntington's disease: an in vivo high-content imaging study.

    Directory of Open Access Journals (Sweden)

    Nikisha Carty

    Full Text Available Huntington's disease (HD) is an autosomal dominant neurodegenerative disorder caused by a CAG trinucleotide repeat expansion in the huntingtin gene. Major pathological hallmarks of HD include inclusions of mutant huntingtin (mHTT) protein, loss of neurons predominantly in the caudate nucleus, and atrophy of multiple brain regions. However, the early sequence of histological events that manifest in a region- and cell-specific manner has not been well characterized. Here we use a high-content histological approach to precisely monitor changes in HTT expression and characterize deposition dynamics of mHTT protein inclusion bodies in the recently characterized zQ175 knock-in mouse line. We carried out an automated multi-parameter quantitative analysis of individual cortical and striatal cells in tissue slices from mice aged 2-12 months and confirmed biochemical reports of an age-associated increase in mHTT inclusions in this model. We also found distinct regional and subregional dynamics for inclusion number, size and distribution with subcellular resolution. We used viral-mediated suppression of total HTT in the striatum of zQ175 mice as an example of a therapeutically-relevant but heterogeneously transducing strategy to demonstrate successful application of this platform to quantitatively assess target engagement and outcome on a cellular basis.

  10. Dynamic heterogeneity of DNA methylation and hydroxymethylation in embryonic stem cell populations captured by single-cell 3D high-content analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tajbakhsh, Jian, E-mail: tajbakhshj@cshs.org [Chromatin Biology Laboratory, Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Translational Cytomics Group, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Stefanovski, Darko [Translational Cytomics Group, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Department of Clinical Studies, School of Veterinary Medicine, University of Pennsylvania, Philadelphia, PA 19348 (United States); Tang, George [Chromatin Biology Laboratory, Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Translational Cytomics Group, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Wawrowsky, Kolja [Translational Cytomics Group, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Liu, Naiyou; Fair, Jeffrey H. [Department of Surgery and UF Health Comprehensive Transplant Center, University of Florida College of Medicine, Gainesville, FL 32608 (United States)

    2015-03-15

    Cell-surface markers and transcription factors are being used in the assessment of stem cell fate and therapeutic safety, but display significant variability in stem cell cultures. We assessed nuclear patterns of 5-hydroxymethylcytosine (5hmC, associated with pluripotency), a second important epigenetic mark, and its combination with 5-methylcytosine (5mC, associated with differentiation), also in comparison to more established markers of pluripotency (Oct-4) and endodermal differentiation (FoxA2, Sox17) in mouse embryonic stem cells (mESC) over a 10-day differentiation course in vitro: by means of confocal and super-resolution imaging together with 3D high-content analysis, an essential tool in single-cell screening. In summary: 1) We did not measure any significant correlation of putative markers with global 5mC or 5hmC. 2) While average Oct-4 levels stagnated on a cell-population base (0.015 lnIU/day), Sox17 and FoxA2 increased 22-fold and 3-fold faster, respectively (Sox17: 0.343 lnIU/day; FoxA2: 0.046 lnIU/day). In comparison, global DNA methylation levels increased 4-fold faster (0.068 lnIU/day), and global hydroxymethylation declined at 0.046 lnIU/day, both with a better explanation of the temporal profile. 3) This progression was concomitant with the occurrence of distinct nuclear codistribution patterns that represented a heterogeneous spectrum of states in differentiation; converging to three major coexisting 5mC/5hmC phenotypes by day 10: 5hmC+/5mC−, 5hmC+/5mC+, and 5hmC−/5mC+ cells. 4) Using optical nanoscopy we could delineate the respective topologies of 5mC/5hmC colocalization in subregions of nuclear DNA: in the majority of 5hmC+/5mC+ cells 5hmC and 5mC predominantly occupied mutually exclusive territories resembling euchromatic and heterochromatic regions, respectively. Simultaneously, in a smaller subset of cells we observed a tighter colocalization of the two cytosine variants, presumably

  11. Developmental toxicity assay using high content screening of zebrafish embryos.

    Science.gov (United States)

    Lantz-McPeak, Susan; Guo, Xiaoqing; Cuevas, Elvis; Dumas, Melanie; Newport, Glenn D; Ali, Syed F; Paule, Merle G; Kanungo, Jyotshna

    2015-03-01

    Typically, time-consuming standard toxicological assays using the zebrafish (Danio rerio) embryo model evaluate mortality and teratogenicity after exposure during the first 2 days post-fertilization. Here we describe an automated image-based high content screening (HCS) assay to identify the teratogenic/embryotoxic potential of compounds in zebrafish embryos in vivo. Automated image acquisition was performed using a high content microscope system. Further automated analysis of embryo length, as a statistically quantifiable endpoint of toxicity, was performed on images post-acquisition. The biological effects of ethanol, nicotine, ketamine, caffeine, dimethyl sulfoxide and temperature on zebrafish embryos were assessed. This automated developmental toxicity assay, based on a growth-retardation endpoint, should be suitable for evaluating the effects of potential teratogens and developmental toxicants in a high throughput manner. This approach can significantly expedite the screening of potential teratogens and developmental toxicants, thereby improving the current risk assessment process by decreasing analysis time and required resources.
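
As an illustration of a length endpoint like the one automated here, embryo length from a segmented mask can be approximated by the maximum pairwise pixel distance (the Feret diameter). The brute-force function and toy mask below are assumptions for demonstration, not the assay's actual measurement:

```python
# Embryo "length" as the Feret diameter of a segmented pixel mask
# (brute force over all pixel pairs; fine for a small toy mask).
import math
from itertools import combinations

def embryo_length(mask_pixels):
    return max(math.dist(a, b) for a, b in combinations(mask_pixels, 2))

embryo = [(i, i) for i in range(5)]  # a thin diagonal 5-pixel "embryo"
print(round(embryo_length(embryo), 3))  # distance between (0,0) and (4,4)
```

For real masks with many pixels, the maximum distance would be computed over the convex hull rather than all pixel pairs, for efficiency.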

  12. Global histone analysis by mass spectrometry reveals a high content of acetylated lysine residues in the malaria parasite Plasmodium falciparum

    DEFF Research Database (Denmark)

    Trelle, Morten Beck; Salcedo-Amaya, Adriana M; Cohen, Adrian

    2009-01-01

    Post-translational modifications (PTMs) of histone tails play a key role in epigenetic regulation of gene expression in a range of organisms from yeast to human, however, little is known about histone proteins from the parasite that causes malaria in humans, Plasmodium falciparum. We characterize...... comprehensive map of histone modifications in Plasmodium falciparum and highlight the utility of tandem MS for detailed analysis of peptides containing multiple PTMs....

  13. Limitations of using Raman microscopy for the analysis of high-content-carbon-filled ethylene propylene diene monomer rubber

    DEFF Research Database (Denmark)

    Ghanbari-Siahkali, A.; Almdal, K.; Kingshott, P.

    2003-01-01

    on the sample, ranging from 4.55 mW to 0.09 mW. The surface of the EPDM was analyzed before and after laser exposure using X-ray photoelectron spectroscopy (XPS) and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy. The techniques have surface probe depths of approximately less......) analysis was also performed on the Raman analyzed areas to visually illustrate the effects created due to laser light exposure (i.e., burning marks). The change in surface chemistry also occurs in regions a few millimeters from the exposed sites, indicating that the effect is quite long range. However...

  14. A computational platform for robotized fluorescence microscopy (II): DNA damage, replication, checkpoint activation, and cell cycle progression by high-content high-resolution multiparameter image-cytometry.

    Science.gov (United States)

    Furia, Laura; Pelicci, Pier Giuseppe; Faretta, Mario

    2013-04-01

    Dissection of complex molecular-networks in rare cell populations is limited by current technologies that do not allow simultaneous quantification, high-resolution localization, and statistically robust analysis of multiple parameters. We have developed a novel computational platform (Automated Microscopy for Image CytOmetry, A.M.I.CO) for quantitative image-analysis of data from confocal or widefield robotized microscopes. We have applied this image-cytometry technology to the study of checkpoint activation in response to spontaneous DNA damage in nontransformed mammary cells. Cell-cycle profile and active DNA-replication were correlated to (i) Ki67, to monitor proliferation; (ii) phosphorylated histone H2AX (γH2AX) and 53BP1, as markers of DNA-damage response (DDR); and (iii) p53 and p21, as checkpoint-activation markers. Our data suggest the existence of cell-cycle modulated mechanisms involving different functions of γH2AX and 53BP1 in DDR, and of p53 and p21 in checkpoint activation and quiescence regulation during the cell-cycle. Quantitative analysis, event selection, and physical relocalization have been then employed to correlate protein expression at the population level with interactions between molecules, measured with Proximity Ligation Analysis, with unprecedented statistical relevance. Copyright © 2013 International Society for Advancement of Cytometry.

  15. Cellular effect of high doses of silica-coated quantum dot profiled with high throughput gene expression analysis and high content cellomics measurements.

    Science.gov (United States)

    Zhang, Tingting; Stilwell, Jackie L; Gerion, Daniele; Ding, Lianghao; Elboudwarej, Omeed; Cooke, Patrick A; Gray, Joe W; Alivisatos, A Paul; Chen, Fanqing Frank

    2006-04-01

    Quantum dots (Qdots) are now used extensively for labeling in biomedical research, and this use is predicted to grow because of their many advantages over alternative labeling methods. Uncoated Qdots made of core/shell CdSe/ZnS are toxic to cells because of the release of Cd2+ ions into the cellular environment. This problem has been partially overcome by coating Qdots with polymers, poly(ethylene glycol) (PEG), or other inert molecules. The most promising coating to date, for reducing toxicity, appears to be PEG. When PEG-coated silanized Qdots (PEG-silane-Qdots) are used to treat cells, toxicity is not observed, even at dosages above 10-20 nM, a concentration inducing death when cells are treated with polymer or mercaptoacid coated Qdots. Because of the importance of Qdots in current and future biomedical and clinical applications, we believe it is essential to more completely understand and verify this negative global response from cells treated with PEG-silane-Qdots. Consequently, we examined the molecular and cellular response of cells treated with two different dosages of PEG-silane-Qdots. Human fibroblasts were exposed to 8 and 80 nM of these Qdots, and both phenotypic as well as whole genome expression measurements were made. PEG-silane-Qdots did not induce any statistically significant cell cycle changes and minimal apoptosis/necrosis in lung fibroblasts (IMR-90) as measured by high content image analysis, regardless of the treatment dosage. A slight increase in apoptosis/necrosis was observed in treated human skin fibroblasts (HSF-42) at both the low and the high dosages. We performed genome-wide expression array analysis of HSF-42 exposed to doses 8 and 80 nM to link the global cell response to a molecular and genetic phenotype. We used a gene array containing approximately 22,000 total probe sets, containing 18,400 probe sets from known genes. 
Only approximately 50 genes (approximately 0.2% of all the genes tested) exhibited a statistically significant

  16. High-Content Analysis Provides Mechanistic Insights into the Testicular Toxicity of Bisphenol A and Selected Analogues in Mouse Spermatogonial Cells.

    Science.gov (United States)

    Liang, Shenxuan; Yin, Lei; Shengyang Yu, Kevin; Hofmann, Marie-Claude; Yu, Xiaozhong

    2017-01-01

    Bisphenol A (BPA), an endocrine-disrupting compound, was found to be a testicular toxicant in animal models. Bisphenol S (BPS), bisphenol AF (BPAF), and tetrabromobisphenol A (TBBPA) were recently introduced to the market as alternatives to BPA. However, toxicological data of these compounds in the male reproductive system are still limited so far. This study developed and validated an automated multi-parametric high-content analysis (HCA) using the C18-4 spermatogonial cell line as a model. We applied these validated HCA, including nuclear morphology, DNA content, cell cycle progression, DNA synthesis, cytoskeleton integrity, and DNA damage responses, to characterize and compare the testicular toxicities of BPA and 3 selected commercial available BPA analogues, BPS, BPAF, and TBBPA. HCA revealed BPAF and TBBPA exhibited higher spermatogonial toxicities as compared with BPA and BPS, including dose- and time-dependent alterations in nuclear morphology, cell cycle, DNA damage responses, and perturbation of the cytoskeleton. Our results demonstrated that this specific culture model together with HCA can be utilized for quantitative screening and discriminating of chemical-specific testicular toxicity in spermatogonial cells. It also provides a fast and cost-effective approach for the identification of environmental chemicals that could have detrimental effects on reproduction.

  17. The application of high-content analysis in the study of targeted particulate delivery systems for intracellular drug delivery to alveolar macrophages.

    Science.gov (United States)

    Lawlor, Ciaran; O'Sullivan, Mary P; Sivadas, Neera; O'Leary, Seonadh; Gallagher, Paul J; Keane, Joseph; Cryan, Sally-Ann

    2011-08-01

    With an ever increasing number of particulate drug delivery systems being developed for the intracellular delivery of therapeutics, a robust high-throughput method for studying particle-cell interactions is urgently required. Current methods used for analyzing particle-cell interaction include spectrofluorimetry, flow cytometry, and fluorescence/confocal microscopy, but these methods are not high throughput and provide only limited data on the specific number of particles delivered intracellularly to the target cell. The work herein presents an automated high-throughput method to analyze microparticulate drug delivery system (DDS) uptake by alveolar macrophages. Poly(lactic-co-glycolic acid) (PLGA) microparticles were prepared in a range of sizes using a solvent evaporation method. A human monocyte cell line (THP-1) was differentiated into macrophage-like cells using phorbol 12-myristate 13-acetate (PMA), and cells were treated with microparticles for 1 h and studied using confocal laser scanning microscopy (CLSM), spectrofluorimetry and high-content analysis (HCA). PLGA microparticles within the size range of 0.8-2.1 μm were found to be optimal for macrophage targeting (p quantitative data on the influence of carrier design on cell targeting that can be gathered in a high-throughput format and therefore has great potential in the screening of intracellularly targeted DDS.

  18. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  19. Effects of defined mixtures of persistent organic pollutants (POPs) on multiple cellular responses in the human hepatocarcinoma cell line, HepG2, using high content analysis screening.

    Science.gov (United States)

    Wilson, Jodie; Berntsen, Hanne Friis; Zimmer, Karin Elisabeth; Frizzell, Caroline; Verhaegen, Steven; Ropstad, Erik; Connolly, Lisa

    2016-03-01

    Persistent organic pollutants (POPs) are toxic substances, highly resistant to environmental degradation, which can bio-accumulate and have long-range atmospheric transport potential. Most studies focus on single-compound effects; however, as humans are exposed to several POPs simultaneously, investigating exposure effects of real-life POP mixtures on human health is necessary. A defined mixture of POPs was used, where each compound's concentration reflected its contribution to the levels seen in Scandinavian human serum (total mix). Several sub-mixtures representing different classes of POPs were also constructed. The perfluorinated (PFC) mixture contained six perfluorinated compounds; the brominated (Br) mixture contained seven brominated compounds; the chlorinated (Cl) mixture contained polychlorinated biphenyls as well as p,p'-dichlorodiphenyldichloroethylene, hexachlorobenzene, three chlordanes, three hexachlorocyclohexanes and dieldrin. Human hepatocarcinoma (HepG2) cells were exposed to the seven mixtures for 2 h and 48 h and analyzed on a CellInsight™ NXT High Content Screening platform. Multiple cytotoxic endpoints were investigated: cell number, nuclear intensity and area, mitochondrial mass and membrane potential (MMP), and reactive oxygen species (ROS). Both the Br and Cl mixtures induced ROS production but did not lead to apoptosis. The PFC mixture induced ROS production and likely induced cell apoptosis accompanied by the dissipation of MMP. Synergistic effects were evident for ROS induction when cells were exposed to the PFC+Br mixture in comparison to the effects of the individual mixtures. No significant effects were detected in the Br+Cl, PFC+Cl or total mixtures, which contain the same concentrations of chlorinated compounds as the Cl mixture plus additional compounds, highlighting the need for further exploration of POP mixtures in risk assessment. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindne

  1. High-Content Screening for Quantitative Cell Biology.

    Science.gov (United States)

    Mattiazzi Usaj, Mojca; Styles, Erin B; Verster, Adrian J; Friesen, Helena; Boone, Charles; Andrews, Brenda J

    2016-08-01

    High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future.

  2. Color Medical Image Analysis

    CERN Document Server

    Schaefer, Gerald

    2013-01-01

    Since the early 20th century, medical imaging has been dominated by monochrome imaging modalities such as x-ray, computed tomography, ultrasound, and magnetic resonance imaging. As a result, color information has been overlooked in medical image analysis applications. Recently, various medical imaging modalities that involve color information have been introduced. These include cervicography, dermoscopy, fundus photography, gastrointestinal endoscopy, microscopy, and wound photography. However, in comparison to monochrome images, the analysis of color images is a relatively unexplored area. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for monochrome images are often not directly applicable to multichannel images. The goal of this volume is to summarize the state-of-the-art in the utilization of color information in medical image analysis.

  3. Application progress of high content analysis in discovery toxicology

    Institute of Scientific and Technical Information of China (English)

    刘利波; 王莉莉

    2012-01-01

    Applying discovery toxicology early in the drug development process is an important strategy for improving the efficiency of new drug research and discovery. High content analysis (HCA), a new technology developed to meet the demand for efficient drug screening, employs image-based cellular assays in a live-cell, real-time, high-throughput, multi-parameter format, allowing rapid, early identification of a variety of biological and toxicological activities of compounds, and thus provides an efficient technique for discovery toxicology studies. Currently, HCA has been applied to the detection of cytotoxicity in various target organs, genotoxicity, neurotoxicity, vascular toxicity, and reproductive toxicity, as well as to studies of the molecular mechanisms of toxicity. This paper reviews the progress of HCA applications in discovery toxicology.

  4. High-content screening of functional genomic libraries.

    Science.gov (United States)

    Rines, Daniel R; Tu, Buu; Miraglia, Loren; Welch, Genevieve L; Zhang, Jia; Hull, Mitchell V; Orth, Anthony P; Chanda, Sumit K

    2006-01-01

    Recent advances in functional genomics have enabled genome-wide genetic studies in mammalian cells. These include the establishment of high-throughput transfection and viral propagation methodologies, the production of large-scale cDNA and siRNA libraries, and the development of sensitive assay detection processes and instrumentation. The latter has been significantly facilitated by the implementation of automated microscopy and quantitative image analysis, collectively referred to as high-content screening (HCS), toward cell-based functional genomics applications. This technology can be applied to whole-genome analysis of discrete molecular and phenotypic events at the level of individual cells and promises to significantly expand the scope of functional genomic analyses in mammalian cells. This chapter provides a comprehensive guide for curating and preparing functional genomics libraries and performing HCS at the level of the genome.

  5. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, H. De; Kawakatsu, T.

    2000-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  6. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, H; Kawakatsu, T; Landau, DP; Lewis, SP; Schuttler, HB

    2001-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  7. Quantification of hormone sensitive lipase phosphorylation and colocalization with lipid droplets in murine 3T3L1 and human subcutaneous adipocytes via automated digital microscopy and high-content analysis.

    Science.gov (United States)

    McDonough, Patrick M; Ingermanson, Randall S; Loy, Patricia A; Koon, Erick D; Whittaker, Ross; Laris, Casey A; Hilton, Jeffrey M; Nicoll, James B; Buehrer, Benjamin M; Price, Jeffrey H

    2011-06-01

    Lipolysis in adipocytes is associated with phosphorylation of hormone sensitive lipase (HSL) and translocation of HSL to lipid droplets. In this study, adipocytes were cultured in a high-throughput format (96-well dishes), exposed to lipolytic agents, and then fixed and labeled for nuclei, lipid droplets, and HSL (or HSL phosphorylated on serine 660 [pHSLser660]). The cells were imaged via automated digital fluorescence microscopy, and high-content analysis (HCA) methods were used to quantify HSL phosphorylation and the degree to which HSL (or pHSLser660) colocalizes with the lipid droplets. HSL:lipid droplet colocalization was quantified through use of Pearson's correlation, Manders' M1 colocalization, and the Tanimoto coefficient. For murine 3T3L1 adipocytes, isoproterenol, Lys-γ3-melanocyte-stimulating hormone, and forskolin elicited the appearance and colocalization of pHSLser660, whereas atrial natriuretic peptide (ANP) did not. For human subcutaneous adipocytes, isoproterenol, forskolin, and ANP activated HSL phosphorylation/colocalization, but Lys-γ3-melanocyte-stimulating hormone had little or no effect. Since ANP activates guanosine 3',5'-cyclic monophosphate (cGMP)-dependent protein kinase, HSL serine 660 is likely a substrate for cGMP-dependent protein kinase in human adipocytes. For both adipocyte model systems, adipocytes with the greatest lipid content displayed the greatest lipolytic responses. The results for pHSLser660 were consistent with release of glycerol by the cells, a well-established assay of lipolysis, and the HCA methods yielded Z' values >0.50. The results illustrate several key differences between human and murine adipocytes and demonstrate advantages of utilizing HCA techniques to study lipolysis in cultured adipocytes.
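    The measures named in this record (Pearson's correlation, Manders' M1, the Tanimoto coefficient, and the Z' assay-quality factor) can all be computed directly from paired pixel-intensity arrays. A minimal NumPy sketch, not taken from the paper itself; the array names are illustrative:

    ```python
    import numpy as np

    def pearson(a, b):
        # Pearson correlation between two equal-shaped, non-constant intensity images.
        a = a.ravel().astype(float) - a.mean()
        b = b.ravel().astype(float) - b.mean()
        return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

    def manders_m1(a, b, thresh_b=0.0):
        # M1: fraction of total channel-a intensity found in pixels where
        # channel b exceeds a threshold (e.g. HSL signal over droplet pixels).
        a = a.astype(float)
        return float(a[b > thresh_b].sum() / a.sum())

    def tanimoto(a, b):
        # Tanimoto (extended Jaccard) similarity of the two intensity vectors.
        a = a.ravel().astype(float)
        b = b.ravel().astype(float)
        ab = a @ b
        return float(ab / ((a @ a) + (b @ b) - ab))

    def z_prime(pos, neg):
        # Standard Z'-factor from positive/negative control readouts;
        # values > 0.5 are conventionally taken to indicate a robust assay.
        return 1.0 - 3.0 * (pos.std() + neg.std()) / abs(pos.mean() - neg.mean())
    ```

    For identical channels, `pearson` and `tanimoto` both return 1.0; well-separated control distributions drive `z_prime` toward 1.0.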

  8. Assessment of 16 chemicals on proliferation and apoptosis in human neuroprogenitor cells using high-content image analysis (HCA).

    Science.gov (United States)

    The need for efficient methods of screening chemicals for the potential to cause developmental neurotoxicity is paramount. We previously described optimization of an HCA assay for proliferation and apoptosis in ReNcell CX cells (ReN), identifying appropriate controls. Utility of ...

  9. Quantitative assessment of neurite outgrowth in human embryonic stem-cell derived neurons using automated high-content image analysis

    Science.gov (United States)

    During development, neurons undergo a number of morphological changes including neurite outgrowth from the cell body. Exposure to neurotoxicants that interfere with this process may cause permanent deficits in nervous system function. While many studies have used rodent primary...

  10. High Content Analysis of Hippocampal Neuron-Astrocyte Co-cultures Shows a Positive Effect of Fortasyn Connect on Neuronal Survival and Postsynaptic Maturation

    Directory of Open Access Journals (Sweden)

    Anne-Lieke F. van Deijk

    2017-08-01

    Neuronal and synaptic membranes are composed of a phospholipid bilayer. Supplementation with dietary precursors for phospholipid synthesis (docosahexaenoic acid (DHA), uridine, and choline) has been shown to increase neurite outgrowth and synaptogenesis both in vivo and in vitro. A role for multi-nutrient intervention with specific precursors and cofactors has recently emerged in early Alzheimer's disease, which is characterized by decreased synapse numbers in the hippocampus. Moreover, the medical food Souvenaid, containing the specific nutrient combination Fortasyn Connect (FC), improves memory performance in early Alzheimer's disease patients, possibly via maintaining brain connectivity. This suggests an effect of FC on synapses, but the underlying cellular mechanism is not fully understood. Therefore, we investigated the effect of FC (consisting of DHA, eicosapentaenoic acid (EPA), uridine, choline, phospholipids, folic acid, vitamins B12, B6, C and E, and selenium) on synaptogenesis by supplementing it to primary neuron-astrocyte co-cultures, a cellular model that mimics metabolic dependencies in the brain. We measured neuronal developmental processes using high content screening in an automated manner, including neuronal survival, neurite morphology, as well as the formation and maturation of synapses. Here, we show that FC supplementation resulted in increased numbers of neurons without affecting astrocyte number. Furthermore, FC increased postsynaptic PSD95 levels in both immature and mature synapses. These findings suggest that supplementation with FC to neuron-astrocyte co-cultures increased both neuronal survival and the maturation of postsynaptic terminals, which might aid the functional interpretation of FC-based intervention strategies in neurological diseases characterized by neuronal loss and impaired synaptic functioning.

  11. Analysis and recognition of touching cell images based on morphological structures.

    Science.gov (United States)

    Yu, Donggang; Pham, Tuan D; Zhou, Xiaobo

    2009-01-01

    Automated analysis and recognition of cell-nuclear phases using fluorescence microscopy images play an important role in high-content screening. A major task of automated imaging-based high-content screening is to segment and reconstruct each cell from images of touching cells. In this paper we present a new method for recognizing morphological structural models of touching cells, detecting segmentation points, determining the number of cells in a touching-cell image, finding the data associated with segmented cell arcs, and reconstructing the segmented cells. The conceptual framework is based on morphological structures, in which a series of structural points and their morphological relationships are established. Experimental results demonstrate the effectiveness of the new method for the analysis and recognition of touching-cell images in high-content screening.
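    The structural-point method itself is not specified in this abstract, but the task it addresses (splitting touching cells in a binary mask) is commonly illustrated with a distance-transform watershed. The sketch below uses SciPy purely as a generic stand-in for the authors' approach; the function and parameter names are mine, and it assumes roughly convex cells so that each cell produces one distance-map peak:

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def split_touching_cells(mask, peak_window=5):
        """Split touching blobs in a binary mask via distance-transform watershed.

        Generic stand-in for structure-based segmentation, not the paper's method.
        Returns a label image (0 = background) and the number of cells found.
        """
        dist = ndi.distance_transform_edt(mask)
        # One seed per cell: local maxima of the distance map inside the mask.
        peaks = (dist == ndi.maximum_filter(dist, size=peak_window)) & mask
        markers, n_cells = ndi.label(peaks)
        bg_label = n_cells + 1
        markers[~mask] = bg_label  # explicit background seed
        # Flood from the seeds over an inverted-distance cost image (uint8).
        cost = np.round(255 * (dist.max() - dist) / max(dist.max(), 1)).astype(np.uint8)
        labels = ndi.watershed_ift(cost, markers)
        labels[labels == bg_label] = 0
        return labels, n_cells
    ```

    Applied to a mask of two overlapping disks, this yields two distinct labels, one per disk center.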

  12. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    Gabor analysis characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase-space domain introduces an enormous amount of data … of the generalities relevant for an understanding of Gabor analysis of functions on R^d. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows: Section 2 presents central tools from functional analysis …; the application of Gabor expansions to image representation is considered in Sect. 6.

  13. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides … reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, region-of-interest selection, and recent developments in staining and imaging techniques.

  14. Functional genomic and high-content screening for target discovery and deconvolution

    Science.gov (United States)

    Heynen-Genel, Susanne; Pache, Lars; Chanda, Sumit K

    2014-01-01

    Introduction Functional genomic screens apply knowledge gained from the sequencing of the human genome toward rapid methods of identifying genes involved in cellular function based on a specific phenotype. This approach has been made possible through the use of advances in both molecular biology and automation. The utility of this approach has been further enhanced through the application of image-based high content screening, an automated microscopy and quantitative image analysis platform. These approaches can significantly enhance acquisition of novel targets for drug discovery. Areas covered Both the utility and potential issues associated with functional genomic screening approaches are discussed along with examples that illustrate both. The considerations for high content screening applied to functional genomics are also presented. Expert opinion Functional genomic and high content screening are extremely useful in the identification of new drug targets. However, the technical, experimental, and computational parameters have an enormous influence on the results. Thus, although new targets are identified, caution should be applied toward interpretation of screening data in isolation. Genomic screens should be viewed as an integral component of a target identification campaign that requires both the acquisition of orthogonal data, as well as a rigorous validation strategy. PMID:22860749

  15. Development of high-content assays for kidney progenitor cell expansion in transgenic zebrafish.

    Science.gov (United States)

    Sanker, Subramaniam; Cirio, Maria Cecilia; Vollmer, Laura L; Goldberg, Natasha D; McDermott, Lee A; Hukriede, Neil A; Vogt, Andreas

    2013-12-01

    Reactivation of genes normally expressed during organogenesis is a characteristic of kidney regeneration. Enhancing this reactivation could potentially be a therapeutic target to augment kidney regeneration. The inductive events that drive kidney organogenesis in zebrafish are similar to the initial steps in mammalian kidney organogenesis. Therefore, quantifying embryonic signals that drive zebrafish kidney development is an attractive strategy for the discovery of potential novel therapeutic modalities that accelerate kidney regeneration. The Lim1 homeobox protein, Lhx1, is a marker of kidney development that is also expressed in the regenerating kidneys after injury. Using a fluorescent Lhx1a-EGFP transgene whose phenotype faithfully recapitulates that of the endogenous protein, we developed a high-content assay for Lhx1a-EGFP expression in transgenic zebrafish embryos employing an artificial intelligence-based image analysis method termed cognition network technology (CNT). Implementation of the CNT assay on high-content readers enabled automated real-time in vivo time-course, dose-response, and variability studies in the developing embryo. The Lhx1a assay was complemented with a kidney-specific secondary CNT assay that enables direct measurements of the embryonic renal tubule cell population. The integration of fluorescent transgenic zebrafish embryos with automated imaging and artificial intelligence-based image analysis provides an in vivo analysis system for structure-activity relationship studies and de novo discovery of novel agents that augment innate regenerative processes.

  16. Medical Image Analysis Facility

    Science.gov (United States)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  17. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  18. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    Directory of Open Access Journals (Sweden)

    Cemal Cagatay Bilgin

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation.

  19. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    Science.gov (United States)

    Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram

    2016-01-01

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation.

  20. Image based performance analysis of thermal imagers

    Science.gov (United States)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially those from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability constitutes such a closed system. The Fraunhofer IOSB has started to build a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test-scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.

  1. Reflections on ultrasound image analysis.

    Science.gov (United States)

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably over the past twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, due to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis, which takes US images and turns them into more meaningful clinical information, thinking has perhaps changed more fundamentally. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and were thus better suited to the earlier eras of medical image analysis dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis, which may have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.

  2. Digital Images Analysis

    OpenAIRE

    2012-01-01

    A specific field of image processing focuses on the evaluation of image quality and the assessment of image authenticity. A loss of image quality may be due to the various processes through which an image passes. In assessing the authenticity of an image, we detect forgeries, hidden messages, etc. In this work, we present an overview of these areas; they have in common the need to develop theories and techniques to detect changes in an image that are not detect...

  3. Image Analysis in CT Angiography

    NARCIS (Netherlands)

    Manniesing, R.

    2006-01-01

    In this thesis we develop and validate novel image processing techniques for the analysis of vascular structures in medical images. First, a new type of filter is proposed which is capable of enhancing vascular structures while suppressing noise in the remainder of the image. This filter is based on

  4. High content screening in microfluidic devices

    Science.gov (United States)

    Cheong, Raymond; Paliwal, Saurabh; Levchenko, Andre

    2011-01-01

    Importance of the field Miniaturization is key to advancing the state-of-the-art in high content screening (HCS), in order to enable dramatic cost savings through reduced usage of expensive biochemical reagents and to enable large-scale screening on primary cells. Microfluidic technology offers the potential to enable HCS to be performed with an unprecedented degree of miniaturization. Areas covered in this review This perspective highlights a real-world example from the authors’ work of HCS assays implemented in a highly miniaturized microfluidic format. Advantages of this technology are discussed, including cost savings, high throughput screening on primary cells, improved accuracy, the ability to study complex time-varying stimuli, and ease of automation, integration, and scaling. What the reader will gain The reader will understand the capabilities of a new microfluidics-based platform for HCS, and the advantages it provides over conventional plate-based HCS. Take home message Microfluidics technology will drive significant advancements and broader usage and applicability of HCS in drug discovery. PMID:21852997

  5. Reference image selection for difference imaging analysis

    CERN Document Server

    Huckvale, Leo; Sale, Stuart E

    2014-01-01

    Difference image analysis (DIA) is an effective technique for obtaining photometry in crowded fields, relative to a chosen reference image. As yet, however, optimal reference image selection is an unsolved problem. We examine how this selection depends on the combination of seeing, background and detector pixel size. Our tests use a combination of simulated data and quality indicators from DIA of well-sampled optical data and under-sampled near-infrared data from the OGLE and VVV surveys, respectively. We search for a figure-of-merit (FoM) which could be used to select reference images for each survey. While we do not find a universally applicable FoM, survey-specific measures indicate that the effect of spatial under-sampling may require a change in strategy from the standard DIA approach, even though seeing remains the primary criterion. We find that background is not an important criterion for reference selection, at least for the dynamic range in the images we test. For our analysis of VVV data in particu...
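    The core DIA operation this record presupposes, degrading a sharp reference frame to match a target frame's seeing and then subtracting so that constant stars cancel, can be sketched as follows. This uses a single global Gaussian kernel purely for illustration; real DIA pipelines fit an optimal, often spatially varying, convolution kernel, and `blur_sigma` here is an assumed, pre-estimated value:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def difference_image(target, reference, blur_sigma):
        """Subtract a seeing-matched reference from a target frame.

        Illustrative sketch only: production DIA solves for the matching
        kernel rather than assuming a fixed Gaussian width.
        """
        # Degrade the sharper reference to the target's (worse) seeing...
        matched = gaussian_filter(reference.astype(float), sigma=blur_sigma)
        # ...so constant sources cancel and variables remain as residuals.
        return target.astype(float) - matched
    ```

    In this idealized form, a field with no variable sources yields a residual image that is everywhere near zero; the quality of that cancellation is exactly what a reference-selection figure-of-merit tries to predict.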

  6. ANALYSIS OF FUNDUS IMAGES

    DEFF Research Database (Denmark)

    2000-01-01

    A method of classifying objects in an image as arterial or venous vessels, comprising: identifying pixels of the said modified image which are located on a line object; determining which of the said image points is associated with a crossing point or a bifurcation of the respective line object, wherein a crossing point is represented by an image point which is the intersection of four line segments; performing a matching operation on pairs of said line segments for each said crossing point, to determine the path of blood vessels in the image, thereby classifying the line objects in the original image into two arbitrary sets; and thereafter designating one of the sets as representing venous structure and the other as representing arterial structure, depending on one or more of the following criteria: (a) complexity of structure; (b) average density; (c) average width; (d) tortuosity…

  7. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    2011-01-01

    The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C code. Please note that the code…

  9. Oncological image analysis: medical and molecular image analysis

    Science.gov (United States)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  10. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  11. Hyperspectral image analysis. A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Amigo, José Manuel, E-mail: jmar@food.ku.dk [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Babamoradi, Hamid [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Elcoroaristizabal, Saioa [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Chemical and Environmental Engineering Department, School of Engineering, University of the Basque Country, Alameda de Urquijo s/n, E-48013 Bilbao (Spain)

    2015-10-08

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.

  12. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues in defining an appropriate model. The algorithms for sampling and optimizing the models, as well as for estimating parameters, are reviewed. Numerous applications, covering remote sensing images and biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  13. Paraxial ghost image analysis

    Science.gov (United States)

    Abd El-Maksoud, Rania H.; Sasian, José M.

    2009-08-01

    This paper develops a methodology to model ghost images that are formed by two reflections between the surfaces of a multi-element lens system in the paraxial regime. An algorithm is presented to generate the ghost layouts from the nominal layout. For each possible ghost layout, paraxial ray tracing is performed to determine the ghost Gaussian cardinal points, the size of the ghost image at the nominal image plane, the location and diameter of the ghost entrance and exit pupils, and the location and diameter for the ghost entrance and exit windows. The paraxial ghost irradiance point spread function is obtained by adding up the irradiance contributions for all ghosts. Ghost simulation results for a simple lens system are provided. This approach provides a quick way to analyze ghost images in the paraxial regime.

  14. Teachable, high-content analytics for live-cell, phase contrast movies.

    Science.gov (United States)

    Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J

    2010-09-01

    CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing their performance with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), and validated by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.

  15. Image Analysis for Tongue Characterization

    Institute of Scientific and Technical Information of China (English)

    SHENLansun; WEIBaoguo; CAIYiheng; ZHANGXinfeng; WANGYanqing; CHENJing; KONGLingbiao

    2003-01-01

    Tongue diagnosis is one of the essential methods in traditional Chinese medical diagnosis. The accuracy of tongue diagnosis can be improved by tongue characterization. This paper investigates the use of image analysis techniques for tongue characterization by evaluating visual features obtained from images. A tongue imaging and analysis instrument (TIAI) was developed to acquire digital color tongue images. Several novel approaches are presented for color calibration, tongue area segmentation, quantitative analysis and qualitative description of the colors of the tongue and its coating, the thickness and moisture of the coating, and quantification of the cracks of the tongue. The overall accuracy of the automatic analysis of the colors of the tongue and the thickness of tongue coating exceeds 85%. This work shows the promising future of tongue characterization.

  16. Flightspeed Integral Image Analysis Toolkit

    Science.gov (United States)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. The toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
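    The "integral image" structure the toolkit exploits is simple enough to sketch in a few lines. FIIAT itself is written in C; the same caching idea is shown here in Python with NumPy for brevity:

```python
import numpy as np

def integral_image(img):
    """Cumulative-sum table with a zero border: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

    Building the table costs a single pass over the image; every rectangular sum afterwards is four lookups, which is what makes real-time texture descriptors feasible on computationally constrained platforms.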

  17. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as, anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  18. Quantitative assessment of neurite outgrowth in human embryonic stem cell derived hN2 cells using automated high-content image analysis

    Science.gov (United States)

    Throughout development neurons undergo a number of morphological changes including neurite outgrowth from the cell body. Exposure to neurotoxic chemicals that interfere with this process may result in permanent deficits in nervous system function. Traditionally, rodent primary ne...

  19. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    Presenting a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and making suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  20. Basic image analysis and manipulation in ImageJ.

    Science.gov (United States)

    Hartig, Sean M

    2013-01-01

    Image analysis methods have been developed to provide quantitative assessment of microscopy data. In this unit, basic aspects of image analysis are outlined, including software installation, data import, image processing functions, and analytical tools that can be used to extract information from microscopy data using ImageJ. Step-by-step protocols for analyzing objects in a fluorescence image and extracting information from two-color tissue images collected by bright-field microscopy are included.

  1. Fully Automated One-Step Production of Functional 3D Tumor Spheroids for High-Content Screening.

    Science.gov (United States)

    Monjaret, François; Fernandes, Mathieu; Duchemin-Pelletier, Eve; Argento, Amelie; Degot, Sébastien; Young, Joanne

    2016-04-01

    Adoption of spheroids within high-content screening (HCS) has lagged behind high-throughput screening (HTS) due to issues with running complex assays on large three-dimensional (3D) structures. To enable multiplexed imaging and analysis of spheroids, different cancer cell lines were grown in 3D on micropatterned 96-well plates with automated production of nine uniform spheroids per well. Spheroids achieve diameters of up to 600 µm, and reproducibility was experimentally validated (interwell and interplate CV(diameter)) … integration of micropatterned spheroid models within fundamental research and drug discovery applications.

  2. Document image analysis: A primer

    Indian Academy of Sciences (India)

    Rangachar Kasturi; Lawrence O’Gorman; Venu Govindaraju

    2002-02-01

    Document image analysis refers to algorithms and techniques that are applied to images of documents to obtain a computer-readable description from pixel data. A well-known document image analysis product is the Optical Character Recognition (OCR) software that recognizes characters in a scanned document. OCR makes it possible for the user to edit or search the document’s contents. In this paper we briefly describe various components of a document analysis system. Many of these basic building blocks are found in most document analysis systems, irrespective of the particular domain or language to which they are applied. We hope that this paper will help the reader by providing the background necessary to understand the detailed descriptions of specific techniques presented in other papers in this issue.

  3. Pocket pumped image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kotov, I.V., E-mail: kotov@bnl.gov [Brookhaven National Laboratory, Upton, NY 11973 (United States); O' Connor, P. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Murray, N. [Centre for Electronic Imaging, Open University, Milton Keynes, MK7 6AA (United Kingdom)

    2015-07-01

    The pocket pumping technique is used to detect small electron trap sites. These traps, if present, degrade CCD charge transfer efficiency. To reveal traps in the active area, a CCD is illuminated with a flat field and, before the image is read out, the accumulated charges are moved back and forth a number of times in the parallel direction. As charges are moved over a trap, an electron is removed from the original pocket and re-emitted into the following pocket. As the process repeats, one pocket becomes depleted and the neighboring pocket accumulates excess charge. As a result, a “dipole” signal appears on the otherwise flat background level. The amplitude of the dipole signal depends on the trap pumping efficiency. This paper is focused on the trap identification technique and particularly on new methods developed for this purpose. A sensor with bad segments was deliberately chosen for algorithm development and to demonstrate the sensitivity and power of the new methods in uncovering sensor defects.
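    The pumping mechanism described in the abstract can be illustrated with a toy one-dimensional simulation (a sketch with hypothetical numbers, not the paper's algorithm; real trap behaviour also depends on clocking and emission time constants):

```python
import numpy as np

def pocket_pump(column, trap_index, n_pumps, efficiency=1.0):
    """Each pump cycle moves `efficiency` electrons from the trap pixel
    into the following pocket, building up a dipole signature."""
    col = column.astype(float).copy()
    for _ in range(n_pumps):
        moved = min(efficiency, col[trap_index])  # cannot remove more than is there
        col[trap_index] -= moved
        col[trap_index + 1] += moved
    return col

flat = np.full(10, 1000.0)                 # uniform flat-field illumination
pumped = pocket_pump(flat, trap_index=4, n_pumps=200)
dipole = pumped - flat                     # negative spike at 4, positive at 5
```

    Subtracting the flat background leaves the characteristic deficit/excess pair; in a real analysis the dipole amplitude as a function of pump count and clock timing is what characterises the trap.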

  4. Imaging spectroscopy for scene analysis

    CERN Document Server

    Robles-Kelly, Antonio

    2012-01-01

    This book presents a detailed analysis of spectral imaging, describing how it can be used for the purposes of material identification, object recognition and scene understanding. The opportunities and challenges of combining spatial and spectral information are explored in depth, as are a wide range of applications. Features: discusses spectral image acquisition by hyperspectral cameras, and the process of spectral image formation; examines models of surface reflectance, the recovery of photometric invariants, and the estimation of the illuminant power spectrum from spectral imagery; describes

  5. Multivariate image analysis in biomedicine.

    Science.gov (United States)

    Nattkemper, Tim W

    2004-10-01

    In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects as well as in clinical studies, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for the application of image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, the two-level framework for MVI analysis is illustrated. Following this framework, the state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarks upon the future development of biomedical MVI analysis.

  6. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models: e0148379

    National Research Council Canada - National Science Library

    Cemal Cagatay Bilgin; Gerald Fontenay; Qingsu Cheng; Hang Chang; Ju Han; Bahram Parvin

    2016-01-01

    ...) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony...

  7. Development of a 3D Tissue Culture–Based High-Content Screening Platform That Uses Phenotypic Profiling to Discriminate Selective Inhibitors of Receptor Tyrosine Kinases

    OpenAIRE

    Booij, T.H.; Klop, M.J.; Yan, K.; Szántai-Kis, C.; Szokol, B.; L. Orfi; Water, de; Keri, G.; Price, L. S.

    2016-01-01

    3D tissue cultures provide a more physiologically relevant context for the screening of compounds, compared with 2D cell cultures. Cells cultured in 3D hydrogels also show complex phenotypes, increasing the scope for phenotypic profiling. Here we describe a high-content screening platform that uses invasive human prostate cancer cells cultured in 3D in standard 384-well assay plates to study the activity of potential therapeutic small molecules and antibody biologics. Image analysis tools wer...

  8. Quantitative histogram analysis of images

    Science.gov (United States)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: no installation necessary; executable file together with necessary files for the LabVIEW Run-time engine. Operating systems under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for…
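    The conversion and histogram statistics the program computes can be sketched in NumPy (an illustration, not the LabVIEW source; the greyscale coefficients shown are the common ITU-R BT.601 luma weights, assumed here as one example of "selectable conversion coefficients"):

```python
import numpy as np

def grayscale(rgb, coeffs=(0.299, 0.587, 0.114)):
    """Weighted RGB -> greyscale with selectable conversion coefficients."""
    return rgb.astype(float) @ np.asarray(coeffs)

def histogram_stats(gray):
    """Moment-based descriptors of the brightness distribution."""
    g = gray.ravel()
    mean, std = g.mean(), g.std()
    centred = g - mean
    return {
        "mean": mean,
        "std": std,
        "variance": g.var(),
        "min": g.min(),
        "max": g.max(),
        "median": np.median(g),
        "skewness": (centred**3).mean() / std**3,        # asymmetry
        "kurtosis": (centred**4).mean() / std**4 - 3.0,  # excess kurtosis
    }
```

    Thresholding to remove a constant background, as the program allows, would simply mask `g` before computing the moments.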

  9. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years by researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing
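    As a concrete example of the multiresolution idea, one level of the orthonormal Haar wavelet transform splits a signal into a coarse approximation plus detail coefficients and can be inverted exactly (a minimal sketch; practical codecs use longer filters and multiple levels):

```python
import numpy as np

def haar_dwt_1d(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    x = np.asarray(x, dtype=float)
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # coarse view
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # local differences
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Exact inverse: interleave reconstructed sample pairs."""
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

    Recursing on the approximation coefficients yields the familiar multiresolution pyramid; because the transform is orthonormal, signal energy is preserved across levels.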

  10. Multi-Source Image Analysis.

    Science.gov (United States)

    1979-12-01

    … Laboratories, Fort Belvoir, Virginia. Estes, J. E., and L. W. Senger (eds.), 1974, Remote Sensing: Techniques for Environmental Analysis, Santa Barbara, California, Hamilton Publishing Co., p. 127-165. Morain… The large body of water labeled "W" on each image represents the Agua Hedionda lagoon. East of the lagoon the area is primarily agricultural with a

  11. Teaching image analysis at DIKU

    DEFF Research Database (Denmark)

    Johansen, Peter

    2010-01-01

    The early development of computer vision at Department of Computer Science at University of Copenhagen (DIKU) is briefly described. The different disciplines in computer vision are introduced, and the principles for teaching two courses, an image analysis course, and a robot lab class are outlined....

  12. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  13. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  14. Image analysis in medical imaging: recent advances in selected examples

    Science.gov (United States)

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments. PMID:21611048

  15. A Multivariate Computational Method to Analyze High-Content RNAi Screening Data.

    Science.gov (United States)

    Rameseder, Jonathan; Krismer, Konstantin; Dayma, Yogesh; Ehrenberger, Tobias; Hwang, Mun Kyung; Airoldi, Edoardo M; Floyd, Scott R; Yaffe, Michael B

    2015-09-01

    High-content screening (HCS) using RNA interference (RNAi) in combination with automated microscopy is a powerful investigative tool to explore complex biological processes. However, despite the plethora of data generated from these screens, little progress has been made in analyzing HC data using multivariate methods that exploit the full richness of multidimensional data. We developed a novel multivariate method for HCS, multivariate robust analysis method (M-RAM), integrating image feature selection with ranking of perturbations for hit identification, and applied this method to an HC RNAi screen to discover novel components of the DNA damage response in an osteosarcoma cell line. M-RAM automatically selects the most informative phenotypic readouts and time points to facilitate the more efficient design of follow-up experiments and enhance biological understanding. Our method outperforms univariate hit identification and identifies relevant genes that these approaches would have missed. We found that statistical cell-to-cell variation in phenotypic responses is an important predictor of hits in RNAi-directed image-based screens. Genes that we identified as modulators of DNA damage signaling in U2OS cells include B-Raf, a cancer driver gene in multiple tumor types, whose role in DNA damage signaling we confirm experimentally, and multiple subunits of protein kinase A. © 2015 Society for Laboratory Automation and Screening.
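    M-RAM itself is not reproduced here, but the abstract's central point, that multivariate analysis can flag perturbations a univariate test misses, can be illustrated with a generic Mahalanobis-distance score against the control distribution (an illustrative sketch, not the authors' method):

```python
import numpy as np

def mahalanobis_scores(features, controls):
    """Distance of each sample's feature vector from the multivariate
    control distribution; large values flag candidate hits."""
    mu = controls.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(controls, rowvar=False))
    diff = features - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Two strongly correlated control features; a sample that is unremarkable
# in each feature separately but breaks their correlation scores far higher.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
controls = np.column_stack([x, x + 0.1 * rng.normal(size=500)])
scores = mahalanobis_scores(np.array([[1.0, 1.0], [1.0, -1.0]]), controls)
```

    The second sample would pass any per-feature z-score cutoff yet is an extreme multivariate outlier, which is the kind of hit a univariate analysis would miss.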

  16. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  18. NeuriteQuant: An open source toolkit for high content screens of neuronal Morphogenesis

    Directory of Open Access Journals (Sweden)

    Hwang Eric

    2011-10-01

    Background: To date, some of the most useful and physiologically relevant neuronal cell culture systems, such as high-density co-cultures of astrocytes and primary hippocampal neurons, or differentiated stem cell-derived cultures, are characterized by high cell density and partially overlapping cellular structures. Efficient analytical strategies are required to enable rapid, reliable, quantitative analysis of neuronal morphology in these valuable model systems. Results: Here we present the development and validation of a novel bioinformatics pipeline called NeuriteQuant. This tool enables fully automated morphological analysis of large-scale image data from neuronal cultures or brain sections that display a high degree of complexity and overlap of neuronal outgrowths. It also provides an efficient web-based tool to review and evaluate the analysis process. In addition to its built-in functionality, NeuriteQuant can be readily extended based on the rich toolset offered by ImageJ and its associated community of developers. As proof of concept we performed automated screens for modulators of neuronal development in cultures of primary neurons and neuronally differentiated P19 stem cells, which demonstrated specific dose-dependent effects on neuronal morphology. Conclusions: NeuriteQuant is a freely available open-source tool for the automated analysis and effective review of large-scale high-content screens. It is especially well suited to quantifying the effect of experimental manipulations on physiologically relevant neuronal cultures or brain sections that display a high degree of complexity and overlap among neurites or other cellular structures.

  19. High-resolution image analysis.

    Science.gov (United States)

    Preston, K

    1986-01-01

    In many departments of cytology, cytogenetics, hematology, and pathology, research projects using high-resolution computerized microscopy are now being mounted for computation of morphometric measurements on various structural components, as well as for determination of cellular DNA content. The majority of these measurements are made in a partially automated, computer-assisted mode, wherein there is strong interaction between the user and the computerized microscope. At the same time, full automation has been accomplished for both sample preparation and sample examination for clinical determination of the white blood cell differential count. At the time of writing, approximately 1,000 robot differential counting microscopes are in the field, analyzing images of human white blood cells, red blood cells, and platelets at the overall rate of about 100,000 slides per day. This mammoth throughput represents a major accomplishment in the application of machine vision to automated microscopy for hematology. In other areas of automated high-resolution microscopy, such as cytology and cytogenetics, no commercial instruments are available (although a few metaphase-finding machines are available and other new machines have been announced during the past year). This is disappointing, considering the nearly half century of research effort in these areas. This paper provides examples of the state of the art in automation of cell analysis for blood smears, cervical smears, and chromosome preparations. Also treated are new developments in multi-resolution automated microscopy, where images are now being generated and analyzed by a single machine over a range of 64:1 magnification and from 10,000 × 20,000 to 500 × 500 in total picture elements (pixels). Examples of images of human lymph node and liver tissue are presented. Semi-automated systems are not treated, although there is mention of recent research in the automation of tissue analysis.

  20. Principles and clinical applications of image analysis.

    Science.gov (United States)

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  1. A reasoning system for image analysis

    Directory of Open Access Journals (Sweden)

    Gao Jin Sheng

    2016-01-01

    Full Text Available For image analysis in computers, the traditional approach is to extract and transcode features after image segmentation. In this paper, however, we present a different way to analyze images: we adopt spatial logic to establish a reasoning system with a corresponding semantic model, prove its soundness and completeness, and then realize image analysis in a formal way that can be applied in artificial intelligence. This is a new attempt and a challenging approach.

  2. High Content Analysis of Compositional Heterogeneities to Study GPCR Oligomerization

    DEFF Research Database (Denmark)

    Walsh, Samuel McEwen

    In this thesis I demonstrate how the natural compositional heterogeneities of synthetic and living cell model systems can be used to quantitate the mechanics of G-protein coupled receptor (GPCR) oligomerization with Förster resonance energy transfer (FRET). The thesis is structured around three a...

  4. One Approach to intellectual image analysis

    Directory of Open Access Journals (Sweden)

    Bellustin Nikolai

    2016-01-01

    Full Text Available This study investigated a method of semantic image analysis that uses a set of neuron-like detectors of foreground objects. The method is intended to find different types of foreground objects and to determine their properties. Semantic analysis produces a semantic descriptor of the image: the set of the image's foreground objects together with a set of properties for each object. The distance between images is defined as the distance between their semantic descriptors. Using this concept of distance, semantic similarity between images or videos is defined.
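
    The descriptor-distance idea can be made concrete with a small sketch (this is not the authors' implementation; the object labels, property dictionaries, and the penalty of 1 for unmatched objects are illustrative assumptions):

```python
def object_distance(props_a, props_b):
    """Distance between two detected objects: fraction of differing
    property values over the union of their property keys."""
    keys = set(props_a) | set(props_b)
    if not keys:
        return 0.0
    diff = sum(props_a.get(k) != props_b.get(k) for k in keys)
    return diff / len(keys)

def descriptor_distance(desc_a, desc_b):
    """Distance between two semantic descriptors (dicts mapping object
    label -> property dict). Matched objects contribute their property
    distance; unmatched objects contribute a penalty of 1 each."""
    labels = set(desc_a) | set(desc_b)
    if not labels:
        return 0.0
    total = 0.0
    for lab in labels:
        if lab in desc_a and lab in desc_b:
            total += object_distance(desc_a[lab], desc_b[lab])
        else:
            total += 1.0  # object present in only one image
    return total / len(labels)

# Hypothetical descriptors for two images
img1 = {"car": {"colour": "red", "moving": True}, "person": {"colour": "blue"}}
img2 = {"car": {"colour": "red", "moving": False}}
d = descriptor_distance(img1, img2)
```

    Here the "car" objects match but differ in one of two properties (0.5), while the unmatched "person" adds 1, giving a distance of 0.75.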

  5. Image analysis: a consumer's guide.

    Science.gov (United States)

    Meyer, F

    1983-01-01

    Recent years have seen an explosion of image analysis systems, and it is hard for the pathologist or the cytologist to make the right choice of equipment. All machines are stupid; the only valuable thing is the human work put into them. So benefit from the work other people have done for you: choose a method that is widely used on many systems and has proved fertile in many domains, not only for today's specific application. Mathematical morphology, together with the linear convolutions present on all machines, is a strong candidate for becoming such a method. The paper illustrates a working day of an ideal system: research- and diagnosis-directed work during working hours, and automatic screening of cervical (or other) smears during the night.

  6. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    Science.gov (United States)

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006, as a Visual Basic 6 script, but as such, it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to spreading of Na(+) or Ca(++) within a single cell, as well as to the analysis of spreading activity (e.g., Ca(++) waves) in populations of synaptically-connected or gap junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
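
    The core of the protocol, an averaged integral radial pixel-intensity profile, can be sketched as follows. This is a minimal pure-Python illustration, not the published ImageJ plugin; the synthetic disc image, centre, and sampling parameters are assumptions:

```python
import math

def clock_scan(image, cx, cy, radius, n_angles=360, n_samples=50):
    """Average pixel intensity along radial 'clock hand' lines from the
    ROI centre out to 2x the ROI radius, covering inside, border and
    background. Returns one averaged radial profile."""
    h, w = len(image), len(image[0])
    profile = [0.0] * n_samples
    counts = [0] * n_samples
    for a in range(n_angles):
        theta = 2 * math.pi * a / n_angles
        for s in range(n_samples):
            # radial position: s=0 is the centre, the midpoint is the
            # ROI border, the last sample is at twice the ROI radius
            r = 2 * radius * s / (n_samples - 1)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                profile[s] += image[y][x]
                counts[s] += 1
    return [p / c if c else 0.0 for p, c in zip(profile, counts)]

# Synthetic test image: bright disc (intensity 100) on a dark background (10)
size, R = 64, 10
img = [[100 if (x - 32) ** 2 + (y - 32) ** 2 <= R * R else 10
        for x in range(size)] for y in range(size)]
prof = clock_scan(img, 32, 32, R)
```

    For this synthetic disc the profile starts at the disc intensity, drops across the border near the middle of the profile, and ends at the background level.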

  7. Discovery of New Anti-Schistosomal Hits by Integration of QSAR-Based Virtual Screening and High Content Screening.

    Science.gov (United States)

    Neves, Bruno J; Dantas, Rafael F; Senger, Mario R; Melo-Filho, Cleber C; Valente, Walter C G; de Almeida, Ana C M; Rezende-Neto, João M; Lima, Elid F C; Paveley, Ross; Furnham, Nicholas; Muratov, Eugene; Kamentsky, Lee; Carpenter, Anne E; Braga, Rodolpho C; Silva-Junior, Floriano P; Andrade, Carolina Horta

    2016-08-11

    Schistosomiasis is a debilitating neglected tropical disease, caused by flatworms of the Schistosoma genus. The treatment relies on a single drug, praziquantel (PZQ), making the discovery of new compounds extremely urgent. In this work, we integrated QSAR-based virtual screening (VS) of Schistosoma mansoni thioredoxin glutathione reductase (SmTGR) inhibitors and high content screening (HCS) aiming to discover new antischistosomal agents. Initially, binary QSAR models for inhibition of SmTGR were developed and validated using the Organization for Economic Co-operation and Development (OECD) guidance. Using these models, we prioritized 29 compounds for further testing in two HCS platforms based on image analysis of assay plates. Among them, 2-[2-(3-methyl-4-nitro-5-isoxazolyl)vinyl]pyridine and 2-(benzylsulfonyl)-1,3-benzothiazole, two compounds representing new chemical scaffolds, showed activity against schistosomula and adult worms at low micromolar concentrations and therefore represent promising antischistosomal hits for further hit-to-lead optimization.

  8. Analysis of Dynamic Brain Imaging Data

    CERN Document Server

    Mitra, P

    1998-01-01

    Modern imaging techniques for probing brain function, including functional Magnetic Resonance Imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques of analysis and visualization of such imaging data, in order to separate the signal from the noise, as well as to characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: `noise' characterization and suppression, and `signal' characterization and visualization. An important general conclusion of our study is the utility of a frequency-based repres...

  9. Image registration with uncertainty analysis

    Science.gov (United States)

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
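
    A minimal sketch of this edge-matching registration follows, assuming binary edge maps stored as coordinate sets and an exhaustive search window; the statistical criterion for identifying points not significantly worse than the best one is omitted:

```python
def edge_match_fraction(edges1, edges2, dx, dy):
    """Fraction of edge pixels in image 2 that, shifted by (dx, dy),
    land on an edge pixel of image 1."""
    if not edges2:
        return 0.0
    hits = sum((x + dx, y + dy) in edges1 for (x, y) in edges2)
    return hits / len(edges2)

def best_registration(edges1, edges2, search=3):
    """Exhaustive search over a small translation window; returns the
    (fraction, shift) pair with the highest edge-match percentage."""
    return max(((edge_match_fraction(edges1, edges2, dx, dy), (dx, dy))
                for dx in range(-search, search + 1)
                for dy in range(-search, search + 1)),
               key=lambda t: t[0])

# Toy example: the second edge map is the first shifted by (-2, -1),
# so the best registration shift recovering the alignment is (2, 1)
e1 = {(5, 5), (6, 5), (7, 5), (7, 6), (7, 7)}
e2 = {(x - 2, y - 1) for (x, y) in e1}
frac, shift = best_registration(e1, e2)
```

    On the toy data every shifted edge pixel of the second map lands on an edge of the first, so the best registration point scores a match fraction of 1.0 at shift (2, 1).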

  10. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  11. Retinal image analysis: preprocessing and feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Marrugo, Andres G; Millan, Maria S, E-mail: andres.marrugo@upc.edu [Grup d' Optica Aplicada i Processament d' Imatge, Departament d' Optica i Optometria Univesitat Politecnica de Catalunya (Spain)

    2011-01-01

    Image processing, analysis and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique to compensate for non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.

  12. Hyperspectral image classification using functional data analysis.

    Science.gov (United States)

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing

    2014-09-01

    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper, a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images has been proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relevance between adjacent pixel elements in the hyperspectral images can also be utilized reasonably. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.
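
    As a toy illustration of the functional viewpoint, the sketch below treats each pixel's sampled spectrum as a curve, smooths it into a crude functional representation, and classifies by nearest functional centroid. The paper's actual method uses functional principal component analysis, which this sketch deliberately omits, and the band values are invented:

```python
def smooth(curve, k=3):
    """Crude functional representation of a sampled spectral curve:
    moving-average smoothing with a window of k bands."""
    n, half, out = len(curve), k // 2, []
    for i in range(n):
        window = curve[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def l2_dist(f, g):
    """Discretised L2 distance between two sampled curves."""
    return sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5

def classify(spectrum, class_means):
    """Assign a pixel spectrum to the nearest functional centroid."""
    s = smooth(spectrum)
    return min(class_means, key=lambda c: l2_dist(s, smooth(class_means[c])))

# Invented 5-band mean spectra: 'vegetation' rises with band index,
# 'water' falls
means = {"vegetation": [10, 20, 40, 80, 90], "water": [60, 40, 20, 10, 5]}
label = classify([12, 22, 45, 78, 88], means)
```

    Treating the whole curve as one object, rather than five independent bands, is what lets the functional approach exploit the smoothness of adjacent spectral bands.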

  13. Fractal methods in image analysis and coding

    OpenAIRE

    Neary, David

    2001-01-01

    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...

  14. Automated high-content live animal drug screening using C. elegans expressing the aggregation prone serpin α1-antitrypsin Z.

    Directory of Open Access Journals (Sweden)

    Sager J Gosai

    Full Text Available The development of preclinical models amenable to live animal bioactive compound screening is an attractive approach to discovering effective pharmacological therapies for disorders caused by misfolded and aggregation-prone proteins. In general, however, live animal drug screening is labor and resource intensive, and has been hampered by the lack of robust assay designs and high throughput work-flows. Based on their small size, tissue transparency and ease of cultivation, the use of C. elegans should obviate many of the technical impediments associated with live animal drug screening. Moreover, their genetic tractability and accomplished record for providing insights into the molecular and cellular basis of human disease, should make C. elegans an ideal model system for in vivo drug discovery campaigns. The goal of this study was to determine whether C. elegans could be adapted to high-throughput and high-content drug screening strategies analogous to those developed for cell-based systems. Using transgenic animals expressing fluorescently-tagged proteins, we first developed a high-quality, high-throughput work-flow utilizing an automated fluorescence microscopy platform with integrated image acquisition and data analysis modules to qualitatively assess different biological processes including, growth, tissue development, cell viability and autophagy. We next adapted this technology to conduct a small molecule screen and identified compounds that altered the intracellular accumulation of the human aggregation prone mutant that causes liver disease in α1-antitrypsin deficiency. This study provides powerful validation for advancement in preclinical drug discovery campaigns by screening live C. elegans modeling α1-antitrypsin deficiency and other complex disease phenotypes on high-content imaging platforms.

  15. Merging Panchromatic and Multispectral Images for Enhanced Image Analysis

    Science.gov (United States)

    1990-08-01

    [Front-matter and scanned-table residue; recoverable content: a permission statement by Curtis K. Munechika granting the Wallace Memorial Library of the Rochester Institute of... the right to reproduce the thesis "Merging Panchromatic and Multispectral Images for Enhanced Image Analysis", and Table D1b, a confusion matrix over land-cover classes (including trees and grass) reporting an overall classification accuracy of 87.5%.]

  16. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics that focus on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  17. Multispectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2012-01-01

    only with fish oil. In this study, multispectral image analysis of pellets captured reflectance at 20 wavelengths (385–1050 nm). Linear discriminant analysis (LDA), principal component analysis, and support vector machines were used for statistical analysis. The features extracted from the multispectral...
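
    Of the statistical methods listed, LDA is the easiest to sketch for two classes. The following pure-Python Fisher discriminant on invented two-band reflectance features is an illustration of the technique, not the study's analysis:

```python
def fisher_lda(class0, class1):
    """Two-class Fisher LDA on 2-band features: returns the projection
    weights w = Sw^-1 (m1 - m0) and the midpoint decision threshold."""
    def mean(v):
        return [sum(x[i] for x in v) / len(v) for i in range(2)]
    m0, m1 = mean(class0), mean(class1)
    # pooled within-class scatter matrix Sw (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for cls, m in ((class0, m0), (class1, m1)):
        for x in cls:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    # threshold at the projection of the midpoint between class means
    thresh = sum(w[i] * (m0[i] + m1[i]) / 2 for i in range(2))
    return w, thresh

def predict(w, thresh, x):
    """Project a feature vector and compare against the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] > thresh else 0

# Invented 2-band reflectances for two hypothetical coating classes
c0 = [(0.20, 0.50), (0.25, 0.55), (0.22, 0.48), (0.27, 0.52)]
c1 = [(0.60, 0.30), (0.65, 0.35), (0.62, 0.28), (0.58, 0.33)]
w, t = fisher_lda(c0, c1)
```

    New pellets would be classified by projecting their band features onto w and comparing against the threshold.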

  18. The Gray Institute 'open' high-content, fluorescence lifetime microscopes.

    Science.gov (United States)

    Barber, P R; Tullis, I D C; Pierce, G P; Newman, R G; Prentice, J; Rowley, M I; Matthews, D R; Ameer-Beg, S M; Vojnovic, B

    2013-08-01

    We describe a microscopy design methodology and details of microscopes built to this 'open' design approach. These demonstrate the first implementation of time-domain fluorescence microscopy in a flexible automated platform with the ability to ease the transition of this and other advanced microscopy techniques from development to use in routine biology applications. This approach allows easy expansion and modification of the platform capabilities, as it moves away from the use of a commercial, monolithic, microscope body to small, commercial off-the-shelf and custom made modular components. Drawings and diagrams of our microscopes have been made available under an open license for noncommercial use at http://users.ox.ac.uk/~atdgroup. Several automated high-content fluorescence microscope implementations have been constructed with this design framework and optimized for specific applications with multiwell plates and tissue microarrays. In particular, three platforms incorporate time-domain FLIM via time-correlated single photon counting in an automated fashion. We also present data from experiments performed on these platforms highlighting their automated wide-field and laser scanning capabilities designed for high-content microscopy. Devices using these designs also form radiation-beam 'end-stations' at Oxford and Surrey Universities, showing the versatility and extendibility of this approach. © 2013 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  19. Fluorescence correlation spectroscopy as tool for high-content-screening in yeast (HCS-FCS)

    Science.gov (United States)

    Wood, Christopher; Huff, Joseph; Marshall, Will; Yu, Elden Qingfeng; Unruh, Jay; Slaughter, Brian; Wiegraebe, Winfried

    2011-03-01

    To measure protein interactions, diffusion properties, and local concentrations in single cells, Fluorescence Correlation Spectroscopy (FCS) is a well-established and widely accepted method. However, measurements can take a long time and are laborious. Therefore investigations are typically limited to tens or a few hundred cells. We developed an automated system to overcome these limitations and make FCS available for High Content Screening (HCS). We acquired data in an auto-correlation screen of more than 4000 of the 6000 proteins of the yeast Saccharomyces cerevisiae, tagged with eGFP and expanded the HCS to use cross-correlation between eGFP and mCherry tagged proteins to screen for molecular interactions. We performed all high-content FCS screens (HCS-FCS) in a 96 well plate format. The system is based on an extended Carl Zeiss fluorescence correlation spectrometer ConfoCor 3 attached to a confocal microscope LSM 510. We developed image-processing software to control these hardware components. The confocal microscope obtained overview images and we developed an algorithm to search for and detect single cells. At each cell, we positioned a laser beam at a well-defined point and recorded the fluctuation signal. We used automatic scoring of the signal for quality control. All data was stored and organized in a database based on the open source Open Microscopy Environment (OME) platform. To analyze the data we used the image processing language IDL and the open source statistical software package R.

  20. Optical High Content Nanoscopy of Epigenetic Marks Decodes Phenotypic Divergence in Stem Cells

    Science.gov (United States)

    Kim, Joseph J.; Bennett, Neal K.; Devita, Mitchel S.; Chahar, Sanjay; Viswanath, Satish; Lee, Eunjee A.; Jung, Giyoung; Shao, Paul P.; Childers, Erin P.; Liu, Shichong; Kulesa, Anthony; Garcia, Benjamin A.; Becker, Matthew L.; Hwang, Nathaniel S.; Madabhushi, Anant; Verzi, Michael P.; Moghe, Prabhas V.

    2017-01-01

    While distinct stem cell phenotypes follow global changes in chromatin marks, single-cell chromatin technologies are unable to resolve or predict stem cell fates. We propose the first such use of optical high content nanoscopy of histone epigenetic marks (epi-marks) in stem cells to classify emergent cell states. By combining nanoscopy with epi-mark textural image informatics, we developed a novel approach, termed EDICTS (Epi-mark Descriptor Imaging of Cell Transitional States), to discern chromatin organizational changes, demarcate lineage gradations across a range of stem cell types and robustly track lineage restriction kinetics. We demonstrate the utility of EDICTS by predicting the lineage progression of stem cells cultured on biomaterial substrates with graded nanotopographies and mechanical stiffness, thus parsing the role of specific biophysical cues as sensitive epigenetic drivers. We also demonstrate the unique power of EDICTS to resolve cellular states based on epi-marks that cannot be detected via mass spectrometry based methods for quantifying the abundance of histone post-translational modifications. Overall, EDICTS represents a powerful new methodology to predict single cell lineage decisions by integrating high content super-resolution nanoscopy and imaging informatics of the nuclear organization of epi-marks. PMID:28051095

  1. Natural user interfaces in medical image analysis cognitive analysis of brain and carotid artery images

    CERN Document Server

    Ogiela, Marek R

    2014-01-01

    This unique text/reference highlights a selection of practical applications of advanced image analysis methods for medical images. The book covers the complete methodology for processing, analysing and interpreting diagnostic results of sample CT images. The text also presents significant problems related to new approaches and paradigms in image understanding and semantic image analysis. To further engage the reader, example source code is provided for the implemented algorithms in the described solutions. Features: describes the most important methods and algorithms used for image analysis; e

  2. High content cell-based assay for the inflammatory pathway

    Science.gov (United States)

    Mukherjee, Abhishek; Song, Joon Myong

    2015-07-01

    Cellular inflammation is a non-specific immune response to tissue injury that takes place via cytokine network orchestration to maintain normal tissue homeostasis. However, chronic inflammation that lasts for a longer period plays a key role in human diseases like neurodegenerative disorders and cancer development. Understanding the cellular and molecular mechanisms underlying the inflammatory pathways may be effective in targeting and modulating their outcome. Tumor necrosis factor alpha (TNF-α) is a pro-inflammatory cytokine that effectively combines pro-inflammatory features with pro-apoptotic potential. Increased levels of TNF-α observed during acute and chronic inflammatory conditions are believed to induce adverse phenotypes like glucose intolerance and an abnormal lipid profile. Natural products, e.g., amygdalin, cinnamic acid, jasmonic acid and aspirin, have proven efficacy in minimizing TNF-α induced inflammation in vitro and in vivo. Cell lysis-free quantum dot (QDot) imaging is an emerging technique to identify the cellular mediators of a signaling cascade with a single assay in one run. In comparison to organic fluorophores, inorganic QDots are bright, resistant to photobleaching and possess tunable optical properties that make them suitable for long-term and multicolor imaging of various components in a cellular crosstalk. Hence we tested some components of the mitogen-activated protein kinase (MAPK) pathway during TNF-α induced inflammation and the effects of aspirin in HepG2 cells by a QDot multicolor imaging technique. Results demonstrated that aspirin showed significant protective effects against TNF-α induced cellular inflammation. The developed cell-based assay paves the way for the analysis of cellular components in a smooth and reliable way.

  3. High-content, high-throughput screening for the identification of cytotoxic compounds based on cell morphology and cell proliferation markers.

    Directory of Open Access Journals (Sweden)

    Heather L Martin

    Full Text Available Toxicity is a major cause of failure in drug discovery and development, and whilst robust toxicological testing occurs, efficiency could be improved if compounds with cytotoxic characteristics were identified during primary compound screening. The use of high-content imaging in primary screening is becoming more widespread, and by utilising phenotypic approaches it should be possible to incorporate cytotoxicity counter-screens into primary screens. Here we present a novel phenotypic assay that can be used as a counter-screen to identify compounds with adverse cellular effects. This assay has been developed using U2OS cells, the PerkinElmer Operetta high-content/high-throughput imaging system and Columbus image analysis software. In Columbus, algorithms were devised to identify changes in nuclear morphology, cell shape and proliferation using DAPI, TOTO-3 and phosphohistone H3 staining, respectively. The algorithms were developed and tested on cells treated with doxorubicin, taxol and nocodazole. The assay was then used to screen a novel chemical library, rich in natural product-like molecules, of over 300 compounds, 13.6% of which were identified as having adverse cellular effects. This assay provides a relatively cheap and rapid approach for identifying compounds with adverse cellular effects during screening assays, potentially reducing compound rejection due to toxicity in subsequent in vitro and in vivo assays.
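
    A toy version of the proliferation readout can be sketched as follows: threshold a nuclear-stain image, count connected bright blobs, and flag wells with a large drop in nucleus count. This is illustrative logic, not the Columbus algorithms; the image, threshold and cutoff are assumptions:

```python
def count_nuclei(img, threshold):
    """Count connected bright blobs (a toy stand-in for DAPI-stained
    nuclei) using 4-neighbour flood fill on a 2D intensity grid."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > threshold and not seen[y][x]:
                count += 1  # new blob found; flood-fill to mark it
                stack = [(x, y)]
                while stack:
                    cx, cy = stack.pop()
                    if (0 <= cx < w and 0 <= cy < h and not seen[cy][cx]
                            and img[cy][cx] > threshold):
                        seen[cy][cx] = True
                        stack += [(cx + 1, cy), (cx - 1, cy),
                                  (cx, cy + 1), (cx, cy - 1)]
    return count

def flag_cytotoxic(control_count, treated_count, max_drop=0.5):
    """Flag a compound whose well loses more than max_drop of nuclei
    relative to the control well."""
    return treated_count < control_count * (1 - max_drop)

# Toy 'DAPI image' with two bright blobs on a dark background
img = [[0, 0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0, 0],
       [0, 9, 9, 0, 8, 0],
       [0, 0, 0, 0, 8, 0],
       [0, 0, 0, 0, 0, 0]]
n = count_nuclei(img, 5)
```

    A real pipeline would add size filters and morphology features per nucleus, but the count-and-compare step is the heart of a proliferation counter-screen.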

  4. Imaging flow cytometry for phytoplankton analysis.

    Science.gov (United States)

    Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S

    2017-01-01

    This review highlights the concepts and instrumentation of imaging flow cytometry technology and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instrumentation relevant to the field and currently used for assessment of complex phytoplankton communities' composition and abundance, size structure determination, biovolume estimation, detection of harmful algal bloom species, evaluation of viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments.

  5. Digital Image Analysis for DETECHIP® Code Determination

    Directory of Open Access Journals (Sweden)

    Marcus Lyon

    2012-08-01

    Full Text Available DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
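
    The RGB-based readout can be illustrated with a small sketch: measure mean channel values over a spot region before and after analyte exposure, then turn per-channel changes into a +/0/- code. The region, threshold and code convention here are assumptions, not the published DETECHIP® macro:

```python
def mean_rgb(pixels, x0, y0, x1, y1):
    """Mean red, green and blue values over a rectangular spot region,
    in the spirit of an ImageJ-style RGB measurement."""
    r = g = b = n = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            pr, pg, pb = pixels[y][x]
            r, g, b, n = r + pr, g + pg, b + pb, n + 1
    return (r / n, g / n, b / n)

def code_from_change(before, after, threshold=10):
    """Turn per-channel mean changes into a +/0/- symbol per channel,
    a hypothetical stand-in for a colour-change code."""
    return "".join("+" if a - b > threshold
                   else "-" if b - a > threshold
                   else "0"
                   for b, a in zip(before, after))

# 4x4 toy spot: uniform colour before and after analyte exposure;
# the red channel drops sharply, green and blue barely move
before_img = [[(200, 200, 200)] * 4 for _ in range(4)]
after_img = [[(120, 205, 200)] * 4 for _ in range(4)]
code = code_from_change(mean_rgb(before_img, 0, 0, 4, 4),
                        mean_rgb(after_img, 0, 0, 4, 4))
```

    Averaging over the whole spot region, rather than reading single pixels, is what makes the scanner-based codes more reproducible than visual inspection.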

  6. Theory of Image Analysis and Recognition.

    Science.gov (United States)

    1983-01-24

    [OCR residue of a contributor list and technical-report bibliography; recoverable content: contributors Narendra Ahuja (image models), Ramalingam Chellappa (image models), Matti Pietikainen (texture analysis), David G. Morgenthaler (3D digital geometry), and Angela Y. Wu; report entries include "...Restoration Parameter Choice: A Quantitative Guide," TR-965, October 1980; Matti Pietikainen, "On the Use of Hierarchically Computed 'Mexican Hat'..."; and Matti Pietikainen and Azriel Rosenfeld, "Image Segmentation by Texture Using Pyramid Node Linking," TR-1008, February 1981.]

  7. NIH Image to ImageJ: 25 years of image analysis.

    Science.gov (United States)

    Schneider, Caroline A; Rasband, Wayne S; Eliceiri, Kevin W

    2012-07-01

    For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects.

  8. Statistical Smoothing Methods and Image Analysis

    Science.gov (United States)

    1988-12-01

    Rosenfeld, A. and Kak, A.C. (1982). Digital Picture Processing. Academic Press, Orlando. Serra, J. (1982). Image Analysis and Mathematical... hypothesis testing. IEEE Trans. Med. Imaging, MI-6, 313-319. Wicksell, S.D. (1925). The corpuscle problem: a mathematical study of a biometric problem.

  9. Automation in high-content flow cytometry screening.

    Science.gov (United States)

    Naumann, U; Wand, M P

    2009-09-01

    High-content flow cytometric screening (FC-HCS) is a 21st Century technology that combines robotic fluid handling, flow cytometric instrumentation, and bioinformatics software, so that relatively large numbers of flow cytometric samples can be processed and analysed in a short period of time. We revisit a recent application of FC-HCS to the problem of cellular signature definition for acute graft-versus-host-disease. Our focus is on automation of the data processing steps using recent advances in statistical methodology. We demonstrate that effective results, on par with those obtained via manual processing, can be achieved using our automatic techniques. Such automation of FC-HCS has the potential to drastically improve diagnosis and biomarker identification.

  10. Development of a 3D Tissue Culture-Based High-Content Screening Platform That Uses Phenotypic Profiling to Discriminate Selective Inhibitors of Receptor Tyrosine Kinases.

    Science.gov (United States)

    Booij, Tijmen H; Klop, Maarten J D; Yan, Kuan; Szántai-Kis, Csaba; Szokol, Balint; Orfi, Laszlo; van de Water, Bob; Keri, Gyorgy; Price, Leo S

    2016-10-01

    3D tissue cultures provide a more physiologically relevant context for the screening of compounds, compared with 2D cell cultures. Cells cultured in 3D hydrogels also show complex phenotypes, increasing the scope for phenotypic profiling. Here we describe a high-content screening platform that uses invasive human prostate cancer cells cultured in 3D in standard 384-well assay plates to study the activity of potential therapeutic small molecules and antibody biologics. Image analysis tools were developed to process 3D image data to measure over 800 phenotypic parameters. Multiparametric analysis was used to evaluate the effect of compounds on tissue morphology. We applied this screening platform to measure the activity and selectivity of inhibitors of the c-Met and epidermal growth factor (EGF) receptor (EGFR) tyrosine kinases in 3D cultured prostate carcinoma cells. c-Met and EGFR activity was quantified based on the phenotypic profiles induced by their respective ligands, hepatocyte growth factor and EGF. The screening method was applied to a novel collection of 80 putative inhibitors of c-Met and EGFR. Compounds were identified that induced phenotypic profiles indicative of selective inhibition of c-Met, EGFR, or bispecific inhibition of both targets. In conclusion, we describe a fully scalable high-content screening platform that uses phenotypic profiling to discriminate selective and nonselective (off-target) inhibitors in a physiologically relevant 3D cell culture setting.
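
    The core comparison in such multiparametric profiling — scoring a compound by how its phenotypic profile relates to a ligand-induced reference profile — can be sketched as follows. The feature values, the z-scoring against control wells, and the cosine score are illustrative assumptions, not the published pipeline:

```python
import numpy as np

def phenotypic_profile(features, controls):
    """Z-score one well's feature vector against control (e.g. DMSO) wells."""
    mu, sigma = controls.mean(axis=0), controls.std(axis=0) + 1e-9
    return (features - mu) / sigma

def profile_similarity(a, b):
    """Cosine similarity between two phenotypic profiles."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
controls = rng.normal(0, 1, size=(32, 8))      # 32 control wells, 8 features
# Hypothetical ligand-induced (e.g. HGF) profile and a compound-treated profile.
hgf_profile = phenotypic_profile(np.array([5.0, 4, 0, 0, 3, 0, 0, 0]), controls)
compound = phenotypic_profile(np.array([-5.0, -4, 0, 0, -3, 0, 0, 0]), controls)
# A strongly negative similarity suggests the compound reverses the
# ligand-induced phenotype, i.e. inhibits the corresponding receptor.
print(profile_similarity(hgf_profile, compound))
```

    With two such reference profiles (one per ligand), a compound's pair of similarity scores separates selective c-Met inhibition, selective EGFR inhibition, and bispecific activity.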

  11. Active Learning Strategies for Phenotypic Profiling of High-Content Screens.

    Science.gov (United States)

    Smith, Kevin; Horvath, Peter

    2014-06-01

    High-content screening is a powerful method to discover new drugs and carry out basic biological research. Increasingly, high-content screens have come to rely on supervised machine learning (SML) to perform automatic phenotypic classification as an essential step of the analysis. However, this comes at a cost, namely, the labeled examples required to train the predictive model. Classification performance increases with the number of labeled examples, and because labeling examples demands time from an expert, the training process represents a significant time investment. Active learning strategies attempt to overcome this bottleneck by presenting the most relevant examples to the annotator, thereby achieving high accuracy while minimizing the cost of obtaining labeled data. In this article, we investigate the impact of active learning on single-cell-based phenotype recognition, using data from three large-scale RNA interference high-content screens representing diverse phenotypic profiling problems. We consider several combinations of active learning strategies and popular SML methods. Our results show that active learning significantly reduces the time cost and can be used to reveal the same phenotypic targets identified using SML. We also identify combinations of active learning strategies and SML methods which perform better than others on the phenotypic profiling problems we studied.
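
    A minimal sketch of the least-confidence query loop such strategies use, with a toy nearest-centroid classifier standing in for the SML methods studied in the article (the data and classifier are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic two-phenotype screen: one 2-D feature vector per cell.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

labeled = [0, 200]                      # one annotated example per phenotype
pool = [i for i in range(len(X)) if i not in labeled]

def centroids(idx):
    """Class centroids computed from the currently annotated cells."""
    return np.array([X[[i for i in idx if y[i] == c]].mean(axis=0) for c in (0, 1)])

for _ in range(20):                     # 20 annotation rounds
    c = centroids(labeled)
    d = np.linalg.norm(X[pool][:, None, :] - c[None, :, :], axis=2)
    # Least-confidence query: the cell whose two class distances are most
    # similar sits closest to the decision boundary.
    margin = np.abs(d[:, 0] - d[:, 1])
    query = pool[int(np.argmin(margin))]
    labeled.append(query)               # simulate asking the expert for y[query]
    pool.remove(query)

c = centroids(labeled)
pred = np.argmin(np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2), axis=1)
accuracy = float((pred == y).mean())
print(accuracy)
```

    The point of the strategy is visible in the loop: annotation effort is spent only on boundary cases, so high accuracy is reached with 22 labels instead of hundreds.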

  12. Simulation Analysis of Cylindrical Panoramic Image Mosaic

    Directory of Open Access Journals (Sweden)

    ZHU Ningning

    2017-04-01

    Full Text Available With the rise of virtual reality (VR) technology, panoramic images are being used more widely. They are usually obtained by multi-camera stitching, taking advantage of homography matrices and image transformation; however, this method destroys the collinearity condition, making 3D reconstruction and other work difficult. This paper proposes a new method for cylindrical panoramic image mosaicking, which sets the number of mosaic cameras, imaging focal length, imaging position and imaging attitude, simulates the mapping process of the multi-camera system, and constructs a cylindrical imaging equation from 3D points to the 2D image based on the photogrammetric collinearity equations. This cylindrical imaging equation can be used not only for panoramic stitching but also for precision analysis. Test results show: ① the method can be used for panoramic stitching under multi-camera and oblique imaging conditions; ② the accuracy of panoramic stitching is affected by three kinds of parameter error (focal length, displacement and rotation angle), of which focal-length error can be corrected by image resampling, displacement error is closely related to object distance, and rotation-angle error is affected mainly by the number of cameras.
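
    A much-simplified form of such a cylindrical imaging equation can be written down directly: with a vertical cylinder axis through the projection centre and no camera attitude terms, the column coordinate is the arc length along the cylinder and the row coordinate the scaled height. This is a sketch of the geometry only, not the paper's full collinearity-based model:

```python
import numpy as np

def cylindrical_project(P, center, f):
    """Project a 3-D point onto a cylindrical image surface.

    Simplified cylindrical imaging equation: the cylinder axis is vertical
    through `center`, and `f` is the cylinder radius (the stitched focal
    length). Column = arc length f*theta, row = scaled height.
    """
    X, Y, Z = P - center
    theta = np.arctan2(Y, X)            # azimuth of the ray
    r = np.hypot(X, Y)                  # horizontal object distance
    return f * theta, f * Z / r         # (column, row) on the unrolled cylinder

col, row = cylindrical_project(np.array([10.0, 10.0, 5.0]), np.zeros(3), f=100.0)
print(col, row)  # 45 degrees of azimuth -> col = 100 * pi/4
```

    The error behaviour reported in the abstract follows from this form: the row coordinate depends on the ratio Z/r, which is why displacement error scales with object distance.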

  13. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    Full Text Available This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
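
    The mapping the abstract describes — opcode sequences to coloured pixels on a matrix, then a pixel-level similarity between matrices — can be sketched as below. The hash-based choice of coordinates and colours is an illustrative stand-in for the paper's scheme:

```python
import hashlib
import numpy as np

def opcode_image(opcodes, size=16):
    """Map opcode pairs to RGB pixels on a size x size image matrix.

    A hash of each opcode pair selects the pixel coordinate, and further
    hash bytes its RGB colour (an illustrative scheme, not the published one).
    """
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for a, b in zip(opcodes, opcodes[1:]):
        h = hashlib.md5(f"{a},{b}".encode()).digest()
        x, y = h[0] % size, h[1] % size
        img[y, x] = h[2], h[3], h[4]
    return img

def similarity(img_a, img_b):
    """Fraction of non-empty pixels that carry identical colours."""
    filled = np.any(img_a | img_b, axis=2)
    same = np.all(img_a == img_b, axis=2) & filled
    return same.sum() / max(filled.sum(), 1)

sample = ["push", "mov", "call", "add", "mov", "call"]
variant = ["push", "mov", "call", "add", "mov", "jmp"]
print(similarity(opcode_image(sample), opcode_image(sample)))   # identical -> 1.0
print(similarity(opcode_image(sample), opcode_image(variant)))
```

    Because the pixel layout is deterministic, a family's representative image can simply be the per-pixel mode over its members' matrices, which is what makes the single-comparison-per-family classification cheap.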

  14. Scale-Specific Multifractal Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    Boris Braverman

    2013-01-01

    irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value.
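
    The integral-image speedup the authors describe can be sketched in a few lines: a summed-area table turns each box-occupancy test into four lookups, and fitting the log-log slope over a chosen window of box sizes yields the scale-dependent dimension. This is a minimal sketch of the idea, not the validated implementation:

```python
import numpy as np

def box_counts(mask, sizes):
    """Count occupied boxes per box size using an integral image.

    The summed-area table makes each box-occupancy test four lookups,
    so the scan is O(boxes) per scale instead of O(pixels) per box.
    """
    ii = np.pad(mask.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    counts = []
    for s in sizes:
        n = 0
        for y in range(0, mask.shape[0], s):
            for x in range(0, mask.shape[1], s):
                y1, x1 = min(y + s, mask.shape[0]), min(x + s, mask.shape[1])
                total = ii[y1, x1] - ii[y, x1] - ii[y1, x] + ii[y, x]
                n += total > 0
        counts.append(n)
    return np.array(counts)

# A filled square is 2-dimensional: N(s) ~ s^-2, so the fitted slope is 2.
mask = np.ones((64, 64), dtype=int)
sizes = np.array([1, 2, 4, 8, 16])
counts = box_counts(mask, sizes)
slope = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print(slope)  # -> ~2.0
```

    Restricting the `sizes` array to different scale windows and refitting the slope is exactly the scale-dependent treatment of the dimension described in the abstract.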

  15. Hybrid Expert Systems In Image Analysis

    Science.gov (United States)

    Dixon, Mark J.; Gregory, Paul J.

    1987-04-01

    Vision systems capable of inspecting industrial components and assemblies have a large potential market if they can be easily programmed and produced quickly. Currently, vision application software written in conventional high-level languages such as C or Pascal is produced by experts in program design, image analysis, and process control. Applications written this way are difficult to maintain and modify. Unless other similar inspection problems can be found, the final program is essentially one-off redundant code. A general-purpose vision system, targeted for the Visual Machines Ltd. C-VAS 3000 image processing workstation, is described which will make writing image analysis software accessible to the non-expert in both programming computers and image analysis. A significant reduction in the effort required to produce vision systems will be gained through a graphically driven interactive application generator. Finally, an expert system will be layered on top to guide the naive user through the process of generating an application.

  16. Quantitative analysis of qualitative images

    Science.gov (United States)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  17. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line, in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration-dependent effects of several compounds on nephrogenesis. In addition, the applicability of the imaging pipeline was further confirmed in a morpholino-based model of cilia-associated human genetic disorders involving different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects of zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  18. Design Criteria For Networked Image Analysis System

    Science.gov (United States)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notably among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing in understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  19. Multispectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2012-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. The pellets were divided into two groups: one with pellets coated using synthetic astaxanthin in fish oil and the other with pellets coated...... images were pixel spectral values as well as using summary statistics such as the mean or median value of each pellet. Classification using LDA on pellet mean or median values showed overall good results. Multispectral imaging is a promising technique for noninvasive on-line quality food and feed...... products with optimal use of pigment and minimum amount of waste....
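
    The classification step the abstract names — LDA on per-pellet mean spectral values — can be sketched with a two-class Fisher discriminant on simulated multispectral data (the band count, separations and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated per-pellet mean spectra (10 bands) for the two coating groups;
# the class means differ in three bands (values are illustrative).
mean_a = np.zeros(10)
mean_b = np.zeros(10)
mean_b[[2, 5, 7]] = 2.0
A = rng.normal(mean_a, 1.0, size=(100, 10))
B = rng.normal(mean_b, 1.0, size=(100, 10))

# Two-class Fisher LDA: w = Sw^{-1} (mu_b - mu_a), threshold at the midpoint
# of the projected class means.
Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
w = np.linalg.solve(Sw, B.mean(axis=0) - A.mean(axis=0))
threshold = 0.5 * (A.mean(axis=0) + B.mean(axis=0)) @ w

accuracy = ((A @ w < threshold).sum() + (B @ w >= threshold).sum()) / 200
print(accuracy)
```

    Summarising each pellet by its mean (or median) spectrum before classification, as the abstract describes, collapses thousands of pixel spectra into one robust observation per pellet.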

  20. Chromatic Image Analysis For Quantitative Thermal Mapping

    Science.gov (United States)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
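
    The temperature recovery this system performs — converting the per-pixel ratio of fluorescence intensities at two wavelengths into temperature via a calibration curve — can be sketched as below; the calibration values are hypothetical, not the instrument's:

```python
import numpy as np

# Hypothetical calibration: intensity ratio I(lambda1)/I(lambda2) of the
# phosphor coating measured at known temperatures (values are illustrative).
cal_temp = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # K
cal_ratio = np.array([0.20, 0.35, 0.55, 0.80, 1.10])

def temperature_map(img_l1, img_l2):
    """Per-pixel temperature from the two-wavelength intensity ratio."""
    ratio = img_l1 / np.maximum(img_l2, 1e-6)
    return np.interp(ratio, cal_ratio, cal_temp)

# A pixel whose intensity ratio is 0.55 maps to 400 K on this curve.
print(temperature_map(np.array([[0.55]]), np.array([[1.0]])))
```

    Working with the ratio rather than absolute brightness is what makes the measurement insensitive to illumination and coating-thickness variations across the model surface.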

  1. Cancer detection by quantitative fluorescence image analysis.

    Science.gov (United States)

    Parry, W L; Hemstreet, G P

    1988-02-01

    Quantitative fluorescence image analysis is a rapidly evolving biophysical cytochemical technology with the potential for multiple clinical and basic research applications. We report the application of this technique for bladder cancer detection and discuss its potential usefulness as an adjunct to methods used currently by urologists for the diagnosis and management of bladder cancer. Quantitative fluorescence image analysis is a cytological method that incorporates 2 diagnostic techniques, quantitation of nuclear deoxyribonucleic acid and morphometric analysis, in a single semiautomated system to facilitate the identification of rare events, that is, individual cancer cells. When compared to routine cytopathology for detection of bladder cancer in symptomatic patients, quantitative fluorescence image analysis demonstrated greater sensitivity (76 versus 33 per cent) for the detection of low grade transitional cell carcinoma. The specificity of quantitative fluorescence image analysis in a small control group was 94 per cent, and with the manual method for quantitation of absolute nuclear fluorescence intensity in the screening of high risk asymptomatic subjects the specificity was 96.7 per cent. The more familiar flow cytometry is another fluorescence technique for measurement of nuclear deoxyribonucleic acid. However, rather than identifying individual cancer cells, flow cytometry identifies cellular pattern distributions, that is, the ratio of normal to abnormal cells. Numerous studies by others have shown that flow cytometry is a sensitive method to monitor patients with diagnosed urological disease. Based upon results in separate quantitative fluorescence image analysis and flow cytometry studies, it appears that these 2 fluorescence techniques may be complementary tools for urological screening, diagnosis and management, and that they also may be useful separately or in combination to elucidate the oncogenic process, determine the biological potential of tumors

  2. Avaliação do consumo e análise da rotulagem nutricional de alimentos com alto teor de ácidos graxos trans [Consumption assessment and analysis of the nutritional labeling of foods with a high content of trans fatty acids]

    Directory of Open Access Journals (Sweden)

    Juliana Ribeiro Dias

    2009-03-01

    samples do not comply with the new legislation. Analysis of the questionnaires identified that 39.7% of adults and 41.4% of children consume daily at least one food with a high content of trans fatty acids. The ingestion of these products exceeds the daily recommendation. Adequate inspection and healthy-diet programs should be encouraged.

  3. Phenotype-based high-content chemical library screening identifies statins as inhibitors of in vivo lymphangiogenesis.

    Science.gov (United States)

    Schulz, Martin Michael Peter; Reisen, Felix; Zgraggen, Silvana; Fischer, Stephanie; Yuen, Don; Kang, Gyeong Jin; Chen, Lu; Schneider, Gisbert; Detmar, Michael

    2012-10-02

    Lymphangiogenesis plays an important role in promoting cancer metastasis to sentinel lymph nodes and beyond and also promotes organ transplant rejection. We used human lymphatic endothelial cells to establish a reliable three-dimensional lymphangiogenic sprouting assay with automated image acquisition and analysis for inhibitor screening. This high-content phenotype-based assay quantifies sprouts by automated fluorescence microscopy and newly developed analysis software. We identified signaling pathways involved in lymphangiogenic sprouting by screening the Library of Pharmacologically Active Compounds (LOPAC1280) collection of pharmacologically relevant compounds. Hit characterization revealed that mitogen-activated protein kinase kinase (MEK) 1/2 inhibitors substantially block lymphangiogenesis in vitro and in vivo. Importantly, the drug class of statins, for the first time, emerged as potent inhibitors of lymphangiogenic sprouting in vitro and of corneal and cutaneous lymphangiogenesis in vivo. This effect was mediated by inhibition of the 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase and subsequently the isoprenylation of Rac1. Supplementation with the enzymatic products of HMG-CoA reductase functionally rescued lymphangiogenic sprouting and the recruitment of Rac1 to the plasma membrane.

  4. An oral multispecies biofilm model for high content screening applications

    Science.gov (United States)

    Kommerein, Nadine; Stumpp, Sascha N.; Müsken, Mathias; Ehlert, Nina; Winkel, Andreas; Häussler, Susanne; Behrens, Peter; Buettner, Falk F. R.; Stiesch, Meike

    2017-01-01

    Peri-implantitis caused by multispecies biofilms is a major complication in dental implant treatment. The bacterial infection surrounding dental implants can lead to bone loss and, in turn, to implant failure. A promising strategy to prevent these common complications is the development of implant surfaces that inhibit biofilm development. A reproducible and easy-to-use biofilm model as a test system for large scale screening of new implant surfaces with putative antibacterial potency is therefore of major importance. In the present study, we developed a highly reproducible in vitro four-species biofilm model consisting of the highly relevant oral bacterial species Streptococcus oralis, Actinomyces naeslundii, Veillonella dispar and Porphyromonas gingivalis. The application of live/dead staining, quantitative real time PCR (qRT-PCR), scanning electron microscopy (SEM) and urea-NaCl fluorescence in situ hybridization (urea-NaCl-FISH) revealed that the four-species biofilm community is robust in terms of biovolume, live/dead distribution and individual species distribution over time. The biofilm community is dominated by S. oralis, followed by V. dispar, A. naeslundii and P. gingivalis. The percentage distribution in this model closely reflects the situation in early native plaques and is therefore well suited as an in vitro model test system. Furthermore, despite its nearly native composition, the multispecies model does not depend on nutrient additives, such as native human saliva or serum, and is an inexpensive, easy to handle and highly reproducible alternative to the available model systems. The 96-well plate format enables high content screening for optimized implant surfaces impeding biofilm formation or the testing of multiple antimicrobial treatment strategies to fight multispecies biofilm infections, both of which are demonstrated in the manuscript.

  5. Image analysis of insulation mineral fibres.

    Science.gov (United States)

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view.
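
    The statistical correction the authors allude to — estimating the true length distribution from only the fibres visible in the field of view — can be illustrated with the classical length-biased sampling argument: if the chance of a fibre being sampled is proportional to its length, weighting each observation by 1/length (i.e. taking the harmonic mean) recovers the true mean. The sampling model below is an assumption for illustration, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
true_lengths = rng.exponential(scale=50.0, size=100000)   # true fibre lengths, um

# Assume longer fibres are more likely to intersect the field of view:
# sampling probability proportional to length (length-biased sampling).
p = true_lengths / true_lengths.max()
observed = true_lengths[rng.random(len(true_lengths)) < p]

naive = observed.mean()
# Classical correction: weight each observed fibre by 1/length, so the
# harmonic mean of the observed sample estimates the true mean length.
corrected = len(observed) / np.sum(1.0 / observed)
print(naive, corrected)  # naive is biased high (~100); corrected is ~50
```

    Applying such a weighting separately within each diameter class gives the per-class length distributions mentioned in the abstract.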

  6. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  7. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent works applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
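
    The transfer-function picture can be made concrete with a one-dimensional sketch: a device is represented purely by a multiplicative filter H(k) in the spatial-frequency domain, and different choices of H realize different functions (a pure linear phase, for instance, displaces the image). This is an illustrative numerical analogue, not the paper's optical implementation:

```python
import numpy as np

def apply_transfer_function(field, H):
    """Propagate a 1-D field through a device described by its
    spatial-frequency transfer function H."""
    return np.fft.ifft(np.fft.fft(field) * H)

x = np.arange(256)
field = np.exp(-((x - 128) / 8.0) ** 2)           # a Gaussian "object"
k = np.fft.fftfreq(256)
# A linear phase ramp in k displaces the image by 32 samples; other H's
# (e.g. H = 1 over a window, or a conjugate phase) hide or restore features.
H_shift = np.exp(-2j * np.pi * k * 32)
shifted = apply_transfer_function(field, H_shift).real
print(np.argmax(field), np.argmax(shifted))  # peak moves from 128 to 160
```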

  8. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human...... inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several...... extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality presented in two research directions. Initially...

  9. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-03-09

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  10. Principal Components Analysis In Medical Imaging

    Science.gov (United States)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Refs. 3, 4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6) and in dual-energy x-ray imaging (Refs. 6, 7). This paper will progress as follows: a brief introduction to the mathematics of PCA will be followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging will be given.
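
    The data-reduction step PCA performs in such imaging studies can be sketched via the singular value decomposition: centre a stack of images and take the leading singular vectors. The simulated dual-energy mixture below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated dual-energy study: 100 "images" (flattened to 64 pixels) that
# are noisy mixtures of two underlying basis materials.
basis = rng.normal(size=(2, 64))
weights = rng.normal(size=(100, 2))
images = weights @ basis + 0.01 * rng.normal(size=(100, 64))

# PCA via SVD: centre the data and inspect the singular spectrum. The
# right singular vectors Vt are the principal component images.
centred = images - images.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print(explained[:3])  # the first two components carry almost all variance
```

    In a real dual-energy application the same computation identifies how many independent material signals the image stack actually contains, which is the basis of the data reduction.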

  11. Human pluripotent stem cells on artificial microenvironments: a high content perspective.

    Directory of Open Access Journals (Sweden)

    Priyalakshmi Viswanathan

    2014-07-01

    Full Text Available Self-renewing stem cell populations are increasingly considered as resources for cell therapy and tools for drug discovery. Human pluripotent stem (hPS) cells in particular offer a virtually unlimited reservoir of homogeneous cells and can be differentiated towards diverse lineages. Many diseases show impairment in self-renewal or differentiation, abnormal lineage choice or other aberrant cell behavior in response to chemical or physical cues. To investigate these responses, there is a growing interest in the development of specific assays using hPS cells, artificial microenvironments and high content analysis. Several hurdles need to be overcome that can be grouped as: (i) availability of robust, homogeneous and consistent cell populations as a starting point; (ii) appropriate understanding and use of chemical and physical microenvironments; (iii) development of assays that dissect the complexity of cell populations in tissues while mirroring specific aspects of their behavior. Here we review recent progress in the culture of hPS cells and we detail the importance of the environment surrounding the cells, with a focus on synthetic materials and suitable high content analysis approaches. The technologies described, if properly combined, have the potential to create a paradigm shift in the way diseases are modelled and drug discovery is performed.

  12. High content screening as high quality assay for biological evaluation of photosensitizers in vitro.

    Directory of Open Access Journals (Sweden)

    Gisela M F Vaz

    Full Text Available A novel single-step assay approach to screen a library of photodynamic therapy (PDT) compounds was developed. Utilizing high content analysis (HCA) technologies, several robust cellular parameters were identified that can be used to determine the phototoxic effects of porphyrin compounds developed as potential anticancer agents directed against esophageal carcinoma. To demonstrate the proof of principle of this approach, a small detailed study of five porphyrin-based compounds was performed utilizing two relevant esophageal cancer cell lines (OE21 and SKGT-4). The measurable outputs from these early studies were then evaluated by performing a pilot screen using a set of 22 compounds. These data were evaluated and validated by performing comparative studies using a traditional colorimetric assay (MTT). The studies demonstrated that the HCS assay offers significant advantages over the currently used methods, as it reports directly on the intracellular presence of the compounds through analysis of their integrated intensity and area within the cells. A high correlation was found between the high content screening (HCS) and MTT data. However, the HCS approach provides additional information that allows a better understanding of the behavior of these compounds when interacting at the cellular level. This is the first step towards an automated high-throughput screening of photosensitizer drug candidates and the beginnings of an integrated and comprehensive quantitative structure-activity relationship (QSAR) study for photosensitizer libraries.

  13. Characterization of SPAD Array for Multifocal High-Content Screening Applications

    Directory of Open Access Journals (Sweden)

    Anthony Tsikouras

    2016-10-01

    Full Text Available Current instruments used to detect specific protein-protein interactions in live cells for applications in high-content screening (HCS) are limited by the time required to measure the lifetime. Here, a 32 × 1 single-photon avalanche diode (SPAD) array was explored as a detector for fluorescence lifetime imaging (FLIM) in HCS. Device parameters and characterization results were interpreted in the context of the application to determine whether the SPAD array could satisfy the requirements of HCS-FLIM. Fluorescence lifetime measurements were performed using a known fluorescence standard, and the recovered fluorescence lifetime matched literature-reported values. The design of a theoretical 32 × 32 SPAD array was also considered as a detector for a multi-point confocal scanning microscope.

  14. Measuring toothbrush interproximal penetration using image analysis

    Science.gov (United States)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.

  15. Piecewise flat embeddings for hyperspectral image analysis

    Science.gov (United States)

    Hayes, Tyler L.; Meinhold, Renee T.; Hamilton, John F.; Cahill, Nathan D.

    2017-05-01

    Graph-based dimensionality reduction techniques such as Laplacian Eigenmaps (LE), Local Linear Embedding (LLE), Isometric Feature Mapping (ISOMAP), and Kernel Principal Components Analysis (KPCA) have been used in a variety of hyperspectral image analysis applications for generating smooth data embeddings. Recently, Piecewise Flat Embeddings (PFE) were introduced in the computer vision community as a technique for generating piecewise constant embeddings that make data clustering / image segmentation a straightforward process. In this paper, we show how PFE arises by modifying LE, yielding a constrained ℓ1-minimization problem that can be solved iteratively. Using publicly available data, we carry out experiments to illustrate the implications of applying PFE to pixel-based hyperspectral image clustering and classification.
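
    For readers unfamiliar with the baseline, a minimal Laplacian Eigenmaps embedding can be sketched on toy data as below; PFE replaces the quadratic objective with an ℓ1 one and needs an iterative solver, which is omitted here:

```python
import numpy as np

# Toy data: two well-separated clusters of "pixels" in a 2-D feature space.
rng = np.random.default_rng(1)
A = rng.normal(loc=0.0, scale=0.3, size=(30, 2))
B = rng.normal(loc=4.0, scale=0.3, size=(30, 2))
X = np.vstack([A, B])

# Gaussian affinity matrix and symmetric normalized graph Laplacian.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 1.5 ** 2))
np.fill_diagonal(W, 0.0)
D = W.sum(1)
L_sym = np.eye(len(X)) - (W / np.sqrt(D)[:, None]) / np.sqrt(D)[None, :]

# The smallest eigenvector is trivial (near-constant); the second one
# gives a 1-D embedding whose sign separates the clusters.
vals, vecs = np.linalg.eigh(L_sym)
embedding = vecs[:, 1] / np.sqrt(D)   # map back from the symmetrized problem
print("cluster means in embedding:",
      embedding[:30].mean(), embedding[30:].mean())
```

    A smooth embedding like this varies continuously across the data; PFE's ℓ1 objective instead drives the embedding toward piecewise constant values, so cluster assignment becomes trivial.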

  16. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    Energy Technology Data Exchange (ETDEWEB)

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
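
    The core idea of treating abundance fractions as independent sources can be illustrated with a generic ICA implementation (here scikit-learn's FastICA on synthetic mixtures with a deliberately non-orthogonal mixing matrix; this is not the authors' own learning algorithm):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic linear mixing: two independent "abundance" sources observed
# through an unknown, non-orthogonal mixing matrix.
rng = np.random.default_rng(0)
n = 2000
S = np.column_stack([np.sign(np.sin(np.linspace(0, 40, n))),  # source 1
                     rng.uniform(-1, 1, n)])                  # source 2
A = np.array([[1.0, 0.9],    # deliberately non-orthogonal columns
              [0.4, 1.0]])
X = S @ A.T                  # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# Each estimated component should correlate strongly with one true
# source (up to sign and permutation).
C = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
print("correlation matrix:\n", C.round(2))
```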

  17. A parallel microfluidic flow cytometer for high-content screening.

    Science.gov (United States)

    McKenna, Brian K; Evans, James G; Cheung, Man Ching; Ehrlich, Daniel J

    2011-05-01

    A parallel microfluidic cytometer (PMC) uses a high-speed scanning photomultiplier-based detector to combine low-pixel-count, one-dimensional imaging with flow cytometry. The 384 parallel flow channels of the PMC decouple count rate from signal-to-noise ratio. Using six-pixel one-dimensional images, we investigated protein localization in a yeast model for human protein misfolding diseases and demonstrated the feasibility of a nuclear-translocation assay in Chinese hamster ovary (CHO) cells expressing an NFκB-EGFP reporter.

  18. An image processing analysis of skin textures

    CERN Document Server

    Sparavigna, A

    2008-01-01

    Colour and coarseness of skin are visually different. When image processing is involved in skin analysis, it is important to quantitatively evaluate such differences using texture features. In this paper, we discuss texture analysis and measurements based on a statistical approach to pattern recognition. Grain size and anisotropy are evaluated with proper diagrams. The possibility of determining the presence of pattern defects is also discussed.
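
    A minimal sketch of one such statistical texture measurement, grey-level co-occurrence statistics (illustrative only; the paper's specific diagrams and measures may differ):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix (horizontal neighbors) and two
    classic texture measures: contrast (coarseness) and energy (uniformity)."""
    q = (img * levels).astype(int).clip(0, levels - 1)  # quantize grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum(), (p ** 2).sum()     # contrast, energy

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))  # smooth gradient patch
rough = rng.uniform(0, 1, (32, 32))               # noisy "coarse" patch
c_s, e_s = glcm_features(smooth)
c_r, e_r = glcm_features(rough)
print("smooth: contrast=%.3f energy=%.3f" % (c_s, e_s))
print("rough:  contrast=%.3f energy=%.3f" % (c_r, e_r))
```

    The rough patch scores much higher contrast and lower energy than the smooth one, which is how such features separate coarse from fine skin texture.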

  19. IMAGES AND SOCIAL REPRESENTATION: SEMIOTIC ANALYSIS CONTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Izabela Gonçalves Terra

    2016-09-01

    Full Text Available The formation of common-sense knowledge is the object of study of Social Representation Theory, which highlights the role of communication in the production of comprehension by subjects. Visual images favor the socialization of meanings and are active elements in the formation of social representations. Given the expressive role of images in the formation of representational contents, this paper aims to present a semiotic analysis method for research on social representations. The semiotic analysis of images was selected as the theoretical and methodological basis because it offers the means required for an effective research method to identify the social representations of socially shared iconic signs. The analysis method was explored by means of analytical procedures employed to apprehend social representations of the feminine in posters for Brazilian Ministry of Health campaigns, which allowed access to the network of meanings associated with the analyzed visual image. The use of semiotic analysis to study social representations should be emphasized, as it presents a fertile perspective for further studies, expanding the possibilities of exploiting visual content.

  20. Scanning transmission electron microscopy imaging and analysis

    CERN Document Server

    Pennycook, Stephen J

    2011-01-01

    Provides the first comprehensive treatment of the physics and applications of this mainstream technique for imaging and analysis at the atomic level. Presents applications of STEM in condensed matter physics, materials science, catalysis, and nanoscience. Suitable for graduate students learning microscopy, researchers wishing to utilize STEM, as well as for specialists in other areas of microscopy. Edited and written by leading researchers and practitioners.

  1. Visualization of Parameter Space for Image Analysis

    Science.gov (United States)

    Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.

    2013-01-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361
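
    The sampling (initialization) step amounts to enumerating combinations of parameter values; a schematic sketch with hypothetical parameter names (illustrative only, not CellProfiler's actual setting names):

```python
from itertools import product

# Hypothetical image-analysis parameters to sample; the names are
# illustrative stand-ins for real pipeline settings.
param_grid = {
    "threshold_correction": [0.8, 1.0, 1.2],
    "typical_diameter":     [10, 20, 30],
    "smoothing_sigma":      [1, 2],
}

# Enumerate every combination, as the first (sampling) step would,
# before the pipeline is run once per setting.
names = list(param_grid)
samples = [dict(zip(names, values))
           for values in product(*param_grid.values())]

print(len(samples), "parameter settings to run, e.g.", samples[0])
```

    The visual-analysis step then links each of these settings to its image-based output, which is what the Paramorama layout organizes for the user.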

  2. tranSMART: An Open Source Knowledge Management and High Content Data Analytics Platform.

    Science.gov (United States)

    Scheufele, Elisabeth; Aronzon, Dina; Coopersmith, Robert; McDuffie, Michael T; Kapoor, Manish; Uhrich, Christopher A; Avitabile, Jean E; Liu, Jinlei; Housman, Dan; Palchuk, Matvey B

    2014-01-01

    The tranSMART knowledge management and high-content analysis platform is a flexible software framework featuring novel research capabilities. It enables analysis of integrated data for the purposes of hypothesis generation, hypothesis validation, and cohort discovery in translational research. tranSMART bridges the prolific world of basic science and clinical practice data at the point of care by merging multiple types of data from disparate sources into a common environment. The application supports data harmonization and integration with analytical pipelines. The application code was released into the open source community in January 2012, with 32 instances in operation. tranSMART's extensible data model and corresponding data integration processes, rapid data analysis features, and open source nature make it an indispensable tool in translational or clinical research.

  3. Web Based Distributed Coastal Image Analysis System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops a Web-based distributed image analysis system that processes Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  4. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  5. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  6. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. This effort is reflected by the inclusion of introductory chapters that address basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  7. Fast image analysis in polarization SHG microscopy.

    Science.gov (United States)

    Amat-Roldan, Ivan; Psilodimitrakopoulos, Sotiris; Loza-Alvarez, Pablo; Artigas, David

    2010-08-02

    Pixel-resolution polarization-sensitive second harmonic generation (PSHG) imaging has recently been shown to be a promising imaging modality, largely enhancing the capabilities of conventional intensity-based SHG microscopy. PSHG is able to obtain structural information from the elementary SHG-active structures, which play an important role in many biological processes. Although the technique is of major interest, acquiring such information requires long offline processing, even with current computers. In this paper, we present an approach based on Fourier analysis of the anisotropy signature that allows processing the PSHG images in less than a second on standard single-core computers. This represents a temporal improvement of several orders of magnitude compared to conventional fitting algorithms. This opens up the possibility of fast PSHG information with the subsequent benefit of potential use in medical applications.
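
    The speed-up comes from replacing a per-pixel nonlinear fit with a discrete Fourier transform of the polarization response; a minimal single-pixel sketch on a synthetic signal (coefficient names and values are illustrative):

```python
import numpy as np

# Synthetic per-pixel PSHG polarization response: SHG intensity as a
# function of incoming polarization angle contains cos(2a) and cos(4a)
# modulation terms on top of a constant offset.
N = 36
alpha = np.arange(N) * np.pi / N              # polarization angles over [0, pi)
signal = 1.0 + 0.4 * np.cos(2 * alpha) + 0.2 * np.cos(4 * alpha)

# One FFT recovers all three coefficients at once: the cos(2a) and
# cos(4a) amplitudes land exactly in bins 1 and 2 of the spectrum.
F = np.fft.rfft(signal)
a0 = F[0].real / N
a2 = 2 * np.abs(F[1]) / N
a4 = 2 * np.abs(F[2]) / N
print(f"a0={a0:.3f}, a2={a2:.3f}, a4={a4:.3f}")
```

    Because the modulation frequencies fall exactly on FFT bins, the coefficients are recovered without any iterative fitting, which is why the per-image processing time collapses.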

  8. Image Processing and Analysis for DTMRI

    Directory of Open Access Journals (Sweden)

    Kondapalli Srinivasa Vara Prasad

    2012-01-01

    Full Text Available This paper describes image processing techniques for Diffusion Tensor Magnetic Resonance Imaging (DTMRI). In Diffusion Tensor MRI, a tensor describing local water diffusion is acquired for each voxel. The geometric nature of the diffusion tensors can quantitatively characterize the local structure in tissues such as bone, muscle, and white matter of the brain. The close relationship between local image structure and apparent diffusion makes this image modality very interesting for medical image analysis. We present a decomposition of the diffusion tensor based on its symmetry properties, resulting in useful measures describing the geometry of the diffusion ellipsoid. A simple anisotropy measure follows naturally from this analysis. We describe how the geometry, or shape, of the tensor can be visualized using a coloring scheme based on the derived shape measures. We show how filtering of the tensor data of a human brain can provide a description of macrostructural diffusion, which can be used for measures of fiber-tract organization. We also describe how tracking of white matter tracts can be implemented using the introduced methods. These methods offer unique tools for the in vivo demonstration of neural connectivity in healthy and diseased brain tissue.
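
    A widely used anisotropy measure of this kind is fractional anisotropy, computed from the eigenvalues of the diffusion tensor; a minimal sketch (the paper's own anisotropy measure may be defined differently):

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional anisotropy (FA) from the eigenvalues of a 3x3 diffusion
    tensor: 0 for a sphere (isotropic diffusion), approaching 1 for a
    needle-like (prolate) diffusion ellipsoid."""
    lam = np.linalg.eigvalsh(tensor)
    md = lam.mean()                              # mean diffusivity
    num = np.sqrt(1.5 * ((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return num / den

isotropic = np.diag([1.0, 1.0, 1.0])   # sphere: FA ~ 0
fiber = np.diag([1.0, 0.05, 0.05])     # prolate ellipsoid: FA ~ 1
fa_iso = fractional_anisotropy(isotropic)
fa_fiber = fractional_anisotropy(fiber)
print(f"isotropic FA={fa_iso:.3f}, fiber-like FA={fa_fiber:.3f}")
```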

  9. Pain related inflammation analysis using infrared images

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms were pre-processed, and areas of interest were extracted for further processing. The investigation of the spread of inflammation is performed along with statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion is that, in most cases, RA-oriented inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
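
    Objective ii) can be sketched as K-means clustering of thermogram pixel temperatures into "inflamed" and "background" groups (synthetic temperatures, illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic knee thermogram temperatures (deg C): a cooler background
# region and a warmer, inflamed region.
rng = np.random.default_rng(0)
background = rng.normal(31.0, 0.4, 500)
inflamed = rng.normal(35.0, 0.4, 200)
temps = np.concatenate([background, inflamed]).reshape(-1, 1)

# K-means with k=2 separates the hot (inflamed) pixels from the rest;
# the hotter cluster center identifies the inflamed region.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(temps)
hot = int(np.argmax(km.cluster_centers_))
spread = (km.labels_ == hot).mean()
print(f"fraction of pixels labeled inflamed: {spread:.3f}")
```

    Comparing this "spread" fraction between the left and right knees is one way to quantify the bilateral (RA) versus unilateral (OA) pattern described above.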

  10. Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition

    National Research Council Canada - National Science Library

    Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria

    2014-01-01

    .... Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay...

  11. Quantitative image analysis of celiac disease.

    Science.gov (United States)

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-03-07

    We outline the use of quantitative techniques that are currently used for analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are yet broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients.

  12. Quantitative image analysis of celiac disease

    Science.gov (United States)

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the use of quantitative techniques that are currently used for analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are yet broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  13. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  14. Automated high-content assay for compounds selectively toxic to Trypanosoma cruzi in a myoblastic cell line.

    Directory of Open Access Journals (Sweden)

    Julio Alonso-Padilla

    2015-01-01

    Full Text Available Chagas disease, caused by the protozoan parasite Trypanosoma cruzi, represents a very important public health problem in Latin America, where it is endemic. Although mostly asymptomatic at its initial stage, after the disease becomes chronic about a third of infected patients progress to a potentially fatal outcome due to severe damage of heart and gut tissues. There is an urgent need for new drugs against Chagas disease since there are only two drugs available, benznidazole and nifurtimox, and both show toxic side effects and variable efficacy against the chronic stage of the disease. Genetically engineered parasite strains are used for high-throughput screening (HTS) of large chemical collections in the search for new anti-parasitic compounds. These assays, although successful, are limited to reporter transgenic parasites and do not cover the wide T. cruzi genetic background. With the aim of contributing to the early drug discovery process against Chagas disease, we have developed an automated image-based 384-well plate HTS assay for T. cruzi amastigote replication in a rat myoblast host cell line. An image analysis script was designed to report three outputs: total number of host cells, ratio of T. cruzi amastigotes per cell, and percentage of infected cells, which respectively provide one host-cell toxicity readout and two T. cruzi toxicity readouts. The assay was statistically robust (Z′ values >0.6) and was validated against a series of known anti-trypanosomatid drugs. We have established a highly reproducible, high content HTS assay for screening of chemical compounds against T. cruzi infection of myoblasts that is amenable for use with any T. cruzi strain capable of in vitro infection. Our visual assay informs on both anti-parasitic and host cell toxicity readouts in a single experiment, allowing the direct identification of compounds selectively targeted to the parasite.
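
    Once per-host-cell parasite counts are available from segmentation, the three readouts reduce to simple arithmetic; a schematic sketch on synthetic counts (not the actual image analysis script):

```python
import numpy as np

# Hypothetical per-host-cell amastigote counts for one well, as produced
# by cell segmentation (0 = uninfected host cell); numbers are invented.
rng = np.random.default_rng(0)
counts = np.concatenate([np.zeros(60, dtype=int),        # uninfected cells
                         rng.poisson(5, 40) + 1])        # infected cells

total_cells = counts.size                     # host-cell toxicity readout
infected = counts > 0
pct_infected = 100.0 * infected.mean()        # T. cruzi readout 1
# Readout 2: amastigotes per cell (here averaged over all host cells;
# the assay's exact denominator is an assumption of this sketch).
amastigotes_per_cell = counts.sum() / total_cells
print(total_cells, pct_infected, round(amastigotes_per_cell, 2))
```

    A compound that lowers the two parasite readouts while leaving the host-cell count unchanged is the selective hit the assay is designed to find.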

  15. Machine learning for medical images analysis.

    Science.gov (United States)

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  16. Biomedical Image Analysis by Program "Vision Assistant" and "Labview"

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2005-01-01

    Full Text Available This paper introduces an application of image analysis to biomedical images. The general task is focused on the analysis and diagnosis of biomedical images obtained from the program ImageJ. Methods that can be used for images in biomedical applications are described. The main idea is based on particle analysis and pattern-matching techniques. For this task, a sophisticated method was chosen using the program Vision Assistant, which is a part of LabVIEW.

  17. Image-based siRNA screen to identify kinases regulating Weibel-Palade body size control using electroporation.

    Science.gov (United States)

    Ketteler, Robin; Freeman, Jamie; Ferraro, Francesco; Bata, Nicole; Cutler, Dan F; Kriston-Vizi, Janos

    2017-03-01

    High-content screening of kinase inhibitors is important in order to identify biogenesis and function mechanisms of subcellular organelles. Here, we present a human kinome siRNA high-content screen on primary human umbilical vein endothelial cells that were transfected by electroporation. The data descriptor contains a confocal fluorescence microscopic image dataset. We also describe an open-source, automated image analysis workflow that can be reused to perform high-content analysis of other organelles. This dataset is suitable for analysis of morphological parameters that are linked to human umbilical vein endothelial cell (HUVEC) biology.

  18. Image analysis of blood platelets adhesion.

    Science.gov (United States)

    Krízová, P; Rysavá, J; Vanícková, M; Cieslar, P; Dyr, J E

    2003-01-01

    Adhesion of blood platelets is one of the major events in haemostatic and thrombotic processes. We studied adhesion of blood platelets on fibrinogen and fibrin dimer sorbed on solid support material (glass, polystyrene). Adhesion was carried out under static and dynamic conditions and measured as the percentage of the surface covered with platelets. Within a range of platelet counts in normal and in thrombocytopenic blood, we observed a very significant decrease in platelet adhesion on fibrin dimer with bound active thrombin with decreasing platelet count. Our results show the imperative of using platelet-poor blood preparations as control samples in experiments with thrombocytopenic blood. Experiments carried out on adhesive surfaces sorbed on polystyrene showed lower relative inaccuracy than on glass. The markedly different behaviour of platelets adhered on the same adhesive surface, differing only in support material (glass or polystyrene), suggests that adhesion, and mainly spreading, of platelets depends on the physical quality of the surface. While on polystyrene there were no significant differences between fibrin dimer and fibrinogen, adhesion measured on glass support material markedly differed between fibrin dimer and fibrinogen. We compared two methods of thresholding in image analysis of adhered platelets. Results obtained by image analysis of spread platelets showed higher relative inaccuracy than results obtained by image analysis of platelet centres and aggregates.
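
    Measuring adhesion as a percentage of the surface covered with platelets comes down to thresholding and computing an area fraction; a minimal sketch using a classic Otsu threshold on a synthetic image (the paper compares two thresholding methods, not necessarily this one):

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu global threshold on an 8-bit image histogram:
    pick the grey level maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class probabilities
    mu = np.cumsum(p * np.arange(256))     # cumulative class means
    sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
    return int(np.argmax(sigma_b))

# Synthetic field: dark background with bright "platelets" covering 20%.
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (100, 100))
img[:20, :] = rng.normal(200, 5, (20, 100))   # 20 of 100 rows covered
t = otsu_threshold(img.clip(0, 255))
coverage = 100.0 * (img > t).mean()           # % surface covered
print(f"threshold={t}, coverage={coverage:.1f}%")
```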

  19. An open-source solution for advanced imaging flow cytometry data analysis using machine learning.

    Science.gov (United States)

    Hennig, Holger; Rees, Paul; Blasi, Thomas; Kamentsky, Lee; Hung, Jane; Dao, David; Carpenter, Anne E; Filby, Andrew

    2017-01-01

    Imaging flow cytometry (IFC) enables the high-throughput collection of morphological and spatial information from hundreds of thousands of single cells. This high-content, information-rich image data can in theory resolve important biological differences among complex, often heterogeneous biological samples. However, data analysis is often performed in a highly manual and subjective manner using very limited image analysis techniques in combination with conventional flow cytometry gating strategies. This approach is not scalable to the hundreds of available image-based features per cell and thus makes use of only a fraction of the spatial and morphometric information. As a result, the quality, reproducibility and rigour of results are limited by the skill, experience and ingenuity of the data analyst. Here, we describe a pipeline using open-source software that leverages the rich information in digital imagery using machine learning algorithms. Raw image files (.rif) from an imaging flow cytometer are compensated and corrected (the proprietary .cif file format) and imported into the open-source software CellProfiler, where an image processing pipeline identifies cells and subcellular compartments, allowing hundreds of morphological features to be measured. This high-dimensional data can then be analysed using cutting-edge machine learning and clustering approaches using "user-friendly" platforms such as CellProfiler Analyst. Researchers can train an automated cell classifier to recognize different cell types, cell cycle phases, drug treatment/control conditions, etc., using supervised machine learning. This workflow should enable the scientific community to leverage the full analytical power of IFC-derived data sets. It will help to reveal otherwise unappreciated populations of cells based on features that may be hidden to the human eye, including subtle measured differences in label-free detection channels such as bright-field and dark-field imagery.
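
    The supervised-learning step can be sketched with any per-cell feature table and an off-the-shelf classifier (synthetic features standing in for pipeline measurements; CellProfiler Analyst makes its own model choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a per-cell morphological feature table exported from an
# image-processing pipeline: two "phenotypes" differing in a few features.
rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, (200, 20))
treated = rng.normal(0.0, 1.0, (200, 20))
treated[:, :3] += 2.0                  # treatment shifts three features
X = np.vstack([control, treated])
y = np.array([0] * 200 + [1] * 200)

# Train on labeled example cells, then evaluate on held-out cells.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

    Because the classifier weighs all twenty features jointly, it can separate conditions even when no single feature (or manual gate) would.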

  20. Image analysis of Renaissance copperplate prints

    Science.gov (United States)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
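
    The Print Index reduces to a dark-area fraction; a minimal sketch on synthetic engraved-line patches showing how line thinning between editions lowers the index (the threshold and line geometry are illustrative):

```python
import numpy as np

def print_index(img, dark_threshold=128):
    """Print Index: percentage of the area composed of (dark) engraved
    lines, from a binarized 8-bit image."""
    return 100.0 * (img < dark_threshold).mean()

def engraving(line_width):
    """Synthetic print patch: vertical black lines every 10 pixels on a
    white ground, with the given line width."""
    img = np.full((100, 100), 255, dtype=np.uint8)
    for x in range(0, 100, 10):
        img[:, x:x + line_width] = 0
    return img

early = engraving(line_width=3)   # fresh plate: thicker lines
late = engraving(line_width=1)    # polished plate: thinned lines
pi_early = print_index(early)
pi_late = print_index(late)
print(pi_early, pi_late)          # later editions score a lower index
```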

  1. Automatic dirt trail analysis in dermoscopy images.

    Science.gov (United States)

    Cheng, Beibei; Joe Stanley, R; Stoecker, William V; Osterwise, Christopher T P; Stricklin, Sherea M; Hinton, Kristen A; Moss, Randy H; Oliviero, Margaret; Rabinovitz, Harold S

    2013-02-01

    Basal cell carcinoma (BCC) is the most common cancer in the US. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved a 0.902 area under the receiver operating characteristic curve using a leave-one-out approach. Results obtained from this study show that automatic detection of dirt trails in dermoscopic images of BCC is feasible. This is important because of the large number of these skin cancers seen every year and the challenge of detecting them earlier with instrumentation. © 2011 John Wiley & Sons A/S.

  2. Quantitative color analysis for capillaroscopy image segmentation.

    Science.gov (United States)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Amorosi, Beatrice; D'Alessio, Tommaso; Palma, Claudio

    2012-06-01

    This communication introduces a novel approach for quantitatively evaluating the role of color space decomposition in digital nailfold capillaroscopy analysis. It is clinically recognized that alterations of the capillary pattern at the periungual skin region are directly related to dermatologic and rheumatic diseases. The proposed algorithm for the segmentation of digital capillaroscopy images is optimized with respect to the choice of the color space and the contrast variation. Since the color space is a critical factor for segmenting low-contrast images, an exhaustive comparison between different color channels is conducted and a novel color channel combination is presented. Results from images of 15 healthy subjects are compared with annotated data, i.e. selected images approved by clinicians. From this comparison, a set of figures of merit is extracted which highlights the algorithm's capability to correctly segment capillaries, their shape and their number. Experimental tests show that the optimized segmentation procedure, based on a novel color channel combination, achieves average accuracy above 0.8 and extracts capillaries whose shape and granularity are acceptable. The obtained results are particularly encouraging for future developments on the classification of capillary patterns with respect to dermatologic and rheumatic diseases.
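
The idea of combining color channels to raise capillary contrast can be sketched as follows. The channel weights and the synthetic image below are illustrative assumptions, not the optimized combination reported in the study:

```python
import numpy as np

# Synthetic RGB nailfold image: reddish capillary loop on pale skin.
img = np.full((50, 50, 3), [220, 180, 170], dtype=float)  # skin background
img[20:30, 10:40] = [150, 60, 70]                         # capillary loop

# Generic linear channel combination (weights are illustrative): vessels have
# low green and moderate red, so g - 0.5*r pushes them strongly negative.
r, g = img[..., 0], img[..., 1]
combo = g - 0.5 * r

mask = combo < combo.mean() - combo.std()  # crude automatic threshold
print("segmented capillary pixels:", int(mask.sum()))
```

The study's point is precisely that the choice of such a combination dominates segmentation quality on low-contrast capillaroscopy images, so the weights would be tuned against clinician-annotated ground truth.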

  3. Analysis of 193 Mammographic phantom images

    Energy Technology Data Exchange (ETDEWEB)

    Son, Eun Ju; Kim, Eun Kyung; Ko, Kyung Hee; Kim, Young Ah; Oh, Ki Keun [College of Medicine, Yonsei Univ., Seoul (Korea, Republic of); Chung, Sun Yang [College of Medicine, Pochon CHA Univ., Pochon (Korea, Republic of); Kim, Hyuk Joo; Cha, Seung Hwan [Korea Food and Drug Administration, Seoul (Korea, Republic of)

    2003-11-01

    To evaluate the actual state of quality control in Korea through analysis of mammographic phantom images obtained from multiple centers, and to determine the proper exposure conditions required to obtain satisfactory phantom images. Between April and June 2002, 193 phantom images were referred to the Korea Food and Drug Administration for evaluation. Two radiologists recorded the number of fibers, specks and masses they contained, and the 'pass' criteria were as follows: fibers, four or more; specks, three or more; masses, three or more (a total of ten or more features). Images whose optical density was over 1.2 were classified as satisfactory. In addition, changes in the success ratio, and differences between the two groups (i.e. 'pass' and 'fail') with regard to exposure conditions and optical density, were evaluated. Among the 193 images, 116 (60.1%) passed and 77 (39.9%) failed. Among those which passed, 73/100 (73%) involved the use of a grid, 80/117 (68.3%) were obtained within the optimal kVp range, 50/111 (45.0%) involved the use of optimal mAs, and 79/112 (70.5%) were obtained within the optimal range of optical density. Among those which failed, the corresponding figures were 17/52 (32.6%), 33/66 (50.0%), 31/69 (44.9%), and 35/65 (53.8%). There were statistically significant differences between the pass and fail groups with regard to kVp, optical density, and the use of a grid, but not with regard to mAs. If only phantom images with an optical density of over 1.2 [as per the rule of the Mammography Quality Standards Act (MQSA)] were included, the success rate would fall from 60.1% to 43.0%. The pass rate for mammographic phantom images was 60.1%. If such images are to be satisfactory, they should be obtained within the optimal range of optical density, using optimal kVp and a grid.

  4. Reticle defect sizing of optical proximity correction defects using SEM imaging and image analysis techniques

    Science.gov (United States)

    Zurbrick, Larry S.; Wang, Lantian; Konicek, Paul; Laird, Ellen R.

    2000-07-01

    Sizing of programmed defects on optical proximity correction (OPC) features is addressed using high resolution scanning electron microscope (SEM) images and image analysis techniques. A comparison and analysis of different sizing methods is made. This paper addresses the issues of OPC defect definition and discusses the experimental measurement results obtained by SEM in combination with image analysis techniques.

  5. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same.  This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing.  The presentation level is for the mathematical non-specialist.  Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a leve...

  6. Nursing image: an evolutionary concept analysis.

    Science.gov (United States)

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession.

  7. Simple Low Level Features for Image Analysis

    Science.gov (United States)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such we historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1) to the latest high definition camcorders, the number of recorded pieces of reality increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into them, both from academia and industry. In this chapter, a review of some of the most important image analysis tools is presented.

  8. Analysis on enhanced depth of field for integral imaging microscope.

    Science.gov (United States)

    Lim, Young-Tae; Park, Jae-Hyeung; Kwon, Ki-Chul; Kim, Nam

    2012-10-08

    Depth of field of the integral imaging microscope is studied. In the integral imaging microscope, 3-D information is encoded in the form of elemental images. The distance between the intermediate plane and an object point determines the number of elemental images and the depth of field of the integral imaging microscope. From the analysis, it is found that the depth of field of the depth-plane image reconstructed by computational integral imaging reconstruction is longer than that of the optical microscope. Based on the analyzed relationship, an experiment using integral imaging microscopy and conventional microscopy was also performed to confirm the enhanced depth of field of integral imaging microscopy.

  9. Machine Learning Interface for Medical Image Analysis.

    Science.gov (United States)

    Zhang, Yi C; Kagen, Alexander C

    2016-10-11

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95 % CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95 % CI 0.947-1.00) and mean specificity was 0.822 ± 0.207 (95 % CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.

  10. Research on automatic human chromosome image analysis

    Science.gov (United States)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this thesis, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. Chromosome bands are enhanced by an algorithm based on multiscale B-spline wavelets; band features are extracted from the average gray profile, gradient profile and shape profile, and described by WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.

  11. Rapid spectral analysis for spectral imaging.

    Science.gov (United States)

    Jacques, Steven L; Samatham, Ravikant; Choudhury, Niloy

    2010-07-15

    Spectral imaging requires rapid analysis of spectra associated with each pixel. A rapid algorithm has been developed that uses iterative matrix inversions to solve for the absorption spectra of a tissue using a lookup table for photon pathlength based on numerical simulations. The algorithm uses tissue water content as an internal standard to specify the strength of optical scattering. An experimental example is presented on the spectroscopy of portwine stain lesions. When implemented in MATLAB, the method is ~100-fold faster than using fminsearch().
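
The core of such an analysis is a linear inversion of the Beer-Lambert relation; the Monte-Carlo-derived pathlength lookup table that the paper iterates around is omitted here. A sketch with hypothetical extinction spectra:

```python
import numpy as np

wavelengths = np.linspace(500, 600, 50)  # nm (illustrative band)

# Hypothetical extinction spectra for two absorbers; a real analysis would use
# tabulated literature values (e.g. oxy- and deoxyhemoglobin).
eps_a = 1.0 + np.exp(-((wavelengths - 540) / 10.0) ** 2)
eps_b = 1.0 + np.exp(-((wavelengths - 575) / 10.0) ** 2)
E = np.column_stack([eps_a, eps_b])      # extinction matrix (spectra as columns)

true_conc = np.array([0.7, 0.3])
mu_a = E @ true_conc                     # "measured" absorption spectrum

# One linear solve recovers the concentrations; the paper's rapid algorithm
# iterates such matrix inversions while updating pathlength from a lookup table.
conc, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
print("recovered concentrations:", conc.round(3))
```

Replacing a general-purpose optimizer (such as MATLAB's `fminsearch`) with repeated closed-form least-squares solves is what yields the reported speedup.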

  12. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    Science.gov (United States)

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-10-26

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks.

  13. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  14. Digital image analysis of palaeoenvironmental records and applications

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Environmental change signals in geological or biological records are commonly reflected on their reflecting or transmitting images. These environmental signals can be extracted through digital image analysis. The analysis principle involves section line selection, color value reading and calculating environmental proxy index along the section lines, layer identification, auto-chronology and investigation of structure evolution of growth bands. On detailed illustrations of the image technique, this note provides image analyzing procedures of coral, tree-ring and stalagmite records. The environmental implications of the proxy index from image analysis are accordingly given through application demonstration of the image technique.

  15. Wavelet Analysis of Space Solar Telescope Images

    Institute of Scientific and Technical Information of China (English)

    Xi-An Zhu; Sheng-Zhen Jin; Jing-Yu Wang; Shu-Nian Ning

    2003-01-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing) but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.
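
Embedded zerotree coding itself is involved, but the reason wavelets compress such imagery well can be shown with a one-level Haar transform: for a smooth image nearly all of the energy collapses into the low-pass band, so most detail coefficients can be coarsely quantized or dropped. A simplified sketch (not the EZW coder):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation (LL) and detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

# Smooth synthetic image standing in for a solar observation.
x = np.linspace(0, 1, 64)
img = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))

ll, lh, hl, hh = haar2d(img)
total = (ll**2).sum() + (lh**2).sum() + (hl**2).sum() + (hh**2).sum()
frac = (ll**2).sum() / total
print(f"fraction of transform energy in the LL band: {frac:.3f}")
```

EZW exploits exactly this concentration: insignificant detail coefficients form "zerotrees" across scales and are encoded with very few bits.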

  16. The Scientific Image in Behavior Analysis.

    Science.gov (United States)

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  17. Percent area coverage through image analysis

    Science.gov (United States)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by using the lognormal surface particle size distribution function that highly depends on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallouts from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compare them to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
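
The contrast between the two PAC estimates can be sketched on synthetic data. The bin-based calculation below only mimics the spirit of MIL-STD-1246C (the standard's actual size bins and coefficients differ), and square particles stand in for real fallout:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic witness-wafer field of view: square particles on a clean surface.
field = np.zeros((1000, 1000), dtype=bool)     # 1 px = 1 um (illustrative)
diameters = rng.integers(5, 50, size=60)       # particle sizes in um
for d in diameters:
    r0 = rng.integers(0, 1000 - d)
    c0 = rng.integers(0, 1000 - d)
    field[r0:r0 + d, c0:c0 + d] = True

# Image-analysis PAC: directly count covered pixels.
pac_image = 100.0 * field.mean()

# Bin-based estimate: particle counts per size bin times an assumed per-particle
# area (here the square of the bin midpoint; the standard's coefficients differ).
edges = np.arange(5, 60, 10)
counts, edges = np.histogram(diameters, bins=edges)
mids = (edges[:-1] + edges[1:]) / 2.0
pac_binned = 100.0 * np.sum(counts * mids ** 2) / field.size

print(f"image-based PAC: {pac_image:.3f}%, bin-based estimate: {pac_binned:.3f}%")
```

Irregular particle shapes and overlap are exactly where the two estimates diverge, which is the comparison the paper makes against the MIL-STD-1246C method.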

  18. Comparative Analysis of Various Image Fusion Techniques For Biomedical Images: A Review

    Directory of Open Access Journals (Sweden)

    Nayera Nahvi,

    2014-05-01

    Full Text Available Image fusion is a process of combining the relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. This paper discusses the implementation of the DWT technique on different images to produce a fused image with greater information content. As DWT is a more recent image fusion technique than simple image fusion and pyramid-based image fusion, we implement DWT as the image fusion technique in this paper. Other methods such as Principal Component Analysis (PCA) based fusion, Intensity Hue Saturation (IHS) transform based fusion and high-pass filtering methods are also discussed. A new algorithm is proposed using the Discrete Wavelet Transform and different fusion techniques, including pixel averaging, min-max and max-min methods, for medical image fusion.
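
The simple pixel-level rules compared in such reviews can be sketched directly; DWT-based fusion applies the same selection rules to wavelet coefficients rather than raw pixels. Synthetic images stand in for registered medical scans:

```python
import numpy as np

# Two hypothetical co-registered images of the same scene: each "modality"
# sees a different structure clearly.
a = np.zeros((64, 64))
a[10:30, 10:30] = 1.0        # structure visible only in image A
b = np.zeros((64, 64))
b[35:55, 35:55] = 1.0        # structure visible only in image B

fused_avg = (a + b) / 2.0    # pixel-averaging rule
fused_max = np.maximum(a, b) # maximum-selection rule

# Max selection preserves full contrast of both structures; averaging halves it.
print("avg rule peak:", fused_avg.max(), "| max rule peak:", fused_max.max())
```

This contrast loss under averaging is the usual motivation for coefficient-selection rules in the transform domain.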

  19. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    2012-01-01

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received considerable

  1. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screen study. Often an HT/HC screen produces extensive amounts of image data that cannot be manually analyzed. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  2. Blind Analysis of CT Image Noise Using Residual Denoised Images

    CERN Document Server

    Roychowdhury, Sohini; Alessio, Adam

    2016-01-01

    CT protocol design and quality control would benefit from automated tools to estimate the quality of generated CT images. These tools could be used to identify erroneous CT acquisitions or refine protocols to achieve certain signal-to-noise characteristics. This paper investigates blind estimation methods to determine global signal strength and noise levels in chest CT images. Methods: We propose novel performance metrics corresponding to the accuracy of noise and signal estimation. We implement and evaluate the noise estimation performance of six spatial- and frequency-based methods, derived from conventional image filtering algorithms. Algorithms were tested on patient data sets from whole-body repeat CT acquisitions performed with a higher and lower dose technique over the same scan region. Results: The proposed performance metrics can evaluate the relative tradeoff of filter parameters and noise estimation performance. The proposed automated methods tend to underestimate CT image noise at low-flux levels...
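
One representative spatial-domain estimator of the kind evaluated here smooths the image and reads the noise level off the residual. The box-filter choice and the synthetic "anatomy" below are illustrative assumptions, not one of the paper's six specific methods:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "CT slice": smooth anatomy plus additive Gaussian noise.
x = np.linspace(0, 1, 256)
anatomy = 100.0 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
sigma_true = 5.0
img = anatomy + rng.normal(0.0, sigma_true, anatomy.shape)

# Smooth with a 3x3 box filter, then measure the residual's spread.
k = 3
pad = np.pad(img, k // 2, mode="edge")
smooth = np.zeros_like(img)
for dr in range(k):
    for dc in range(k):
        smooth += pad[dr:dr + 256, dc:dc + 256]
smooth /= k * k

residual = img - smooth
# For a kxk box filter, var(residual) = sigma^2 * (1 - 1/k^2); correct for it.
sigma_est = residual.std() / np.sqrt(1.0 - 1.0 / (k * k))
print(f"true sigma: {sigma_true}, estimated: {sigma_est:.2f}")
```

Estimators of this family degrade when real structure leaks into the residual, which is one source of the underestimation the paper reports at low flux.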

  3. Automated regional behavioral analysis for human brain images

    National Research Council Canada - National Science Library

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images...

  4. Direct identification of fungi using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    1999-01-01

    Filamentous fungi have often been characterized, classified or identified with a major emphasis on macromorphological characters, i.e. the size, texture and color of fungal colonies grown on one or more identification media. This approach has been rejected by several taxonomists because of the subjectivity in the visual evaluation and quantification (if any) of such characters and the apparent large variability of the features. We present an image analysis approach for objective identification and classification of fungi. The approach is exemplified by several isolates of nine different species of the genus Penicillium, known to be very difficult to identify correctly. The fungi were incubated on YES and CYA for one week at 25 C (3 point inoculation) in 9 cm Petri dishes. The cultures are placed under a camera where a digital image of the front of the colonies is acquired under optimal illumination...

  5. MORPHOLOGICAL GRANULOMETRIC ANALYSIS OF SEDIMENT IMAGES

    Directory of Open Access Journals (Sweden)

    Yoganand Balagurunathan

    2011-05-01

    Full Text Available Sediments are routinely analyzed in terms of the sizing characteristics of the grains of which they are composed. Via sieving methods, the grains are separated and a weight-based size distribution constructed. Various moment parameters are computed from the size distribution and these serve as sediment characteristics. This paper examines the feasibility of a fully electronic granularity analysis using digital image processing. The study uses a random model of three-dimensional grains in conjunction with the morphological method of granulometric size distributions. The random model is constructed to simulate sand, silt, and clay particle distributions. Owing to the impossibility of perfectly sifting small grains so that they do not touch, the model is used in both disjoint and non-disjoint modes, and watershed segmentation is applied in the non-disjoint model. The image-based granulometric size distributions are transformed so that they take into account the necessity to view sediment fractions at different magnifications and in different frames. Gray-scale granulometric moments are then computed using both ordinary and reconstructive granulometries. The resulting moments are then compared to moments found from real grains in seven different sediments using standard weight-based size distributions.
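
The weight-based moment parameters mentioned above can be sketched for a synthetic grain-size sample; the log-normal diameters and the moment definitions below are illustrative (graphic-moment formulas used in sedimentology differ in detail):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical grain diameters (mm) for a sandy sediment sample.
diam_mm = rng.lognormal(mean=np.log(0.25), sigma=0.5, size=5000)

# Sedimentologists work in phi units: phi = -log2(diameter in mm).
phi = -np.log2(diam_mm)

# Moment parameters of the size distribution; image analysis can supply the
# same statistics per grain instead of per sieve fraction.
mean_phi = phi.mean()
sorting = phi.std()                                  # dispersion ("sorting")
skew = np.mean(((phi - mean_phi) / sorting) ** 3)    # asymmetry

print(f"mean: {mean_phi:.2f} phi, sorting: {sorting:.2f}, skewness: {skew:.2f}")
```

The paper's contribution is obtaining such size distributions morphologically from images (granulometries plus watershed segmentation for touching grains) rather than by sieving.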

  6. Accuracy of Image Analysis in Quantitative Study of Cement Paste

    Directory of Open Access Journals (Sweden)

    Feng Shu-Xia

    2016-01-01

    Full Text Available Quantitative study of cement paste, especially blended cement paste, has been a hot and difficult issue over the years, and the technique of backscattered electron image analysis has shown unique advantages in this field. This paper compared the test results of cement hydration degree, Ca(OH)2 content and pore size distribution in pure pastes by image analysis and other methods. Then the accuracy of quantitative study by image analysis was analyzed. The results showed that image analysis displayed higher accuracy in quantifying cement hydration degree and Ca(OH)2 content than the non-evaporable water test and thermal analysis, respectively.

  7. Image analysis and microscopy: a useful combination

    Directory of Open Access Journals (Sweden)

    Pinotti L.

    2009-01-01

    Full Text Available The TSE Roadmap published in 2005 (DG for Health and Consumer Protection, 2005) suggests that short and medium term (2005-2009) amendments to control BSE policy should include “a relaxation of certain measures of the current total feed ban when certain conditions are met”. The same document noted that “the starting point when revising the current feed ban provisions should be risk-based but at the same time taking into account the control tools in place to evaluate and ensure the proper implementation of this feed ban”. The clear implication is that adequate analytical methods to detect constituents of animal origin in feedstuffs are required. The official analytical method for the detection of constituents of animal origin in feedstuffs is the microscopic examination technique as described in Commission Directive 2003/126/EC of 23 December 2003 [OJ L 339, 24.12.2003, 78]. Although the microscopic method is usually able to distinguish fish from land animal material, it is often unable to distinguish between different terrestrial animals. Fulfilment of the requirements of Regulation 1774/2002/EC, laying down health rules concerning animal by-products not intended for human consumption, clearly implies that it must be possible to identify the origin of animal materials at higher taxonomic levels than in the past. Thus improvements in all methods of detecting constituents of animal origin are required, including the microscopic method. This article will examine the problem of meat and bone meal in animal feeds, and the use of microscopic methods in association with computer image analysis to identify the source species of these feedstuff contaminants. Image processing, integrated with morphometric measurements, can provide accurate and reliable results and can be a very useful aid to the analyst in the characterization, analysis and control of feedstuffs.

  8. Image processing and analysis with graphs theory and practice

    CERN Document Server

    Lézoray, Olivier

    2012-01-01

    Covering the theoretical aspects of image processing and analysis through the use of graphs in the representation and analysis of objects, Image Processing and Analysis with Graphs: Theory and Practice also demonstrates how these concepts are indispensible for the design of cutting-edge solutions for real-world applications. Explores new applications in computational photography, image and video processing, computer graphics, recognition, medical and biomedical imaging With the explosive growth in image production, in everything from digital photographs to medical scans, there has been a drast

  9. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for the skin of a human foot and face. The full source code of the developed application is also provided as an attachment. The main window of the program during dynamic analysis of the foot thermal image.

  10. High resolution ultraviolet imaging spectrometer for latent image analysis.

    Science.gov (United States)

    Lyu, Hang; Liao, Ningfang; Li, Hongsong; Wu, Wenmin

    2016-03-21

    In this work, we present a close-range ultraviolet imaging spectrometer with high spatial resolution and reasonably high spectral resolution. As transmissive optical components cause chromatic aberration in the ultraviolet (UV) spectral range, an all-reflective imaging scheme is introduced to improve the image quality. The proposed instrument consists of an oscillating mirror, a Cassegrain objective, a Michelson structure, an Offner relay, and a UV-enhanced CCD. The finished spectrometer has a spatial resolution of 29.30 μm on the target plane; the spectral range covers both the near and middle UV bands, and approximately 100 wavelength samples can be obtained over the range of 240-370 nm. The control computer coordinates all the components of the instrument and enables capturing a series of images, which can be reconstructed into an interferogram datacube. The datacube can be converted into a spectrum datacube, which contains spectral information of each pixel with many wavelength samples. A spectral calibration is carried out by using a high-pressure mercury discharge lamp. A test run demonstrated that this interferometric configuration can obtain a high-resolution spectrum datacube. A pattern recognition algorithm is introduced to analyze the datacube and distinguish latent traces from the base materials. This design is particularly good at identifying latent traces in the application field of forensic imaging.

  11. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion, based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. Based on perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information can be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective in detecting pulmonary embolism even without contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. The fluoroscopic examination takes only about 2 seconds for a perfusion study, with a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  12. Analysis of Galileo Style Geostationary Satellite Imaging: Image Reconstruction

    Science.gov (United States)

    2012-09-01

    obtained using only baselines longer than 8 m does not sample the short spatial frequencies, and the image reconstruction is not able to recover the...the long spatial frequencies sampled in a shorter baseline overlap the short spatial frequencies sampled in a longer baseline. This technique will

  13. Link Graph Analysis for Adult Images Classification

    CERN Document Server

    Kharitonov, Evgeny; Muchnik, Ilya; Romanenko, Fedor; Belyaev, Dmitry; Kotlyarov, Dmitry

    2010-01-01

    In order to protect an image search engine's users from undesirable results, a classifier for adult images should be built. Information about links from websites to images is used to create such a classifier. These links are represented as a bipartite website-image graph. Each vertex is assigned adultness and decentness scores. Scores for image vertices are initialized to zero, while scores for website vertices are initialized by a text-based website classifier. An iterative algorithm that propagates scores within the website-image graph is described. The resulting scores are used to classify images by choosing an appropriate threshold. Experiments on Internet-scale data have shown that this algorithm increases classification recall by 17% compared with a simple baseline that classifies an image as adult if it is linked from at least one adult site (at the same precision level).
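    The iterative score propagation described above can be sketched on a toy bipartite graph. The paper does not give its exact update rule here, so this version uses an assumed scheme: images take the mean score of the sites linking to them, and sites blend their text-classifier prior with their images' mean score via a damping factor.

```python
def propagate(edges, site_priors, iterations=10, damping=0.5):
    """Propagate adultness scores over a bipartite website-image graph.

    edges: list of (site, image) links.
    site_priors: initial score per site from a text-based classifier.
    Image scores start at zero, as in the paper; the blending rule and
    damping factor below are illustrative assumptions."""
    site_links, image_links = {}, {}
    for s, i in edges:
        site_links.setdefault(s, set()).add(i)
        image_links.setdefault(i, set()).add(s)
    site_score = dict(site_priors)
    for _ in range(iterations):
        # Images take the mean score of the sites linking to them.
        img_score = {i: sum(site_score[s] for s in ss) / len(ss)
                     for i, ss in image_links.items()}
        # Sites blend their prior with their images' mean score.
        site_score = {s: damping * site_priors[s]
                      + (1 - damping) * sum(img_score[i] for i in ii) / len(ii)
                      for s, ii in site_links.items()}
    return img_score

edges = [("adult.example", "img1"),
         ("adult.example", "img2"),
         ("news.example", "img2")]
scores = propagate(edges, {"adult.example": 1.0, "news.example": 0.0})
# img1 is linked only from the adult site; img2 also from a decent one,
# so img1 ends up with the higher adultness score.
print(scores["img1"] > scores["img2"])  # True
```

A threshold on the final image scores then yields the classifier, as in the abstract.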

  14. SAR Image Texture Analysis of Oil Spill

    Science.gov (United States)

    Ma, Long; Li, Ying; Liu, Yu

    Oil spills seriously affect the marine ecosystem and cause political and scientific concern because of their impact on fragile marine and coastal ecosystems. To respond to an oil-spill emergency, it is necessary to monitor spills using remote sensing. Spaceborne SAR is considered a promising method for oil-spill monitoring and has attracted the attention of many researchers; however, research on SAR image texture analysis of oil spills is rarely reported. On 7 December 2007, a crane-carrying barge hit the Hong Kong-registered tanker "Hebei Spirit", which released an estimated 10,500 metric tons of crude oil into the sea. Texture features of this oil spill were computed from the Grey Level Co-occurrence Matrix (GLCM), using SAR as the data source. The affected area was extracted successfully after evaluating the capability of different texture features to monitor the spill. The results reveal that texture is an important feature for oil-spill monitoring. Key words: oil spill, texture analysis, SAR
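    GLCM texture features like those used above can be sketched in a few lines. This is a generic implementation of the co-occurrence matrix and the Haralick contrast feature for one pixel displacement, not the authors' exact feature set:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for one displacement (dx, dy),
    normalised to a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

smooth = np.zeros((8, 8), dtype=int)       # uniform, slick-like patch
rough = np.arange(64).reshape(8, 8) % 8    # rapidly varying patch
print(contrast(glcm(smooth)) < contrast(glcm(rough)))  # True
```

A dark, smooth oil slick yields low GLCM contrast compared with the rougher sea-clutter background, which is what makes such features usable for extracting the affected area.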

  15. Analysis of physical processes via imaging vectors

    Science.gov (United States)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are random in one way or another. The most fully formulated theoretical foundation covers Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In Markov processes the model of the future does not change with the expansion and/or strong growth of information relative to preceding times. Basically, modeling physical fields involves processes changing in time, i.e. non-stationary processes. In this case, applying the Laplace transformation introduces unjustified complications into the description, while a transition to other representations yields an explicit simplification. The method of imaging vectors provides constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model using a polynomial basis allows rapid changes of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation; conversely, operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.

  16. Dynamic chest image analysis: model-based pulmonary perfusion analysis with pyramid images

    Science.gov (United States)

    Liang, Jianming; Haapanen, Arto; Jaervi, Timo; Kiuru, Aaro J.; Kormano, Martti; Svedstrom, Erkki; Virkki, Raimo

    1998-07-01

    The aim of the study 'Dynamic Chest Image Analysis' is to develop computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected at different phases of the respiratory/cardiac cycles in a short period of time. We have proposed a framework for ventilation study with an explicit ventilation model based on pyramid images. In this paper, we extend the framework to pulmonary perfusion study. A perfusion model and the truncated pyramid are introduced. The perfusion model aims at extracting accurate, geographic perfusion parameters, and the truncated pyramid helps in understanding perfusion at multiple resolutions and speeding up the convergence process in optimization. Three cases are included to illustrate the experimental results.

  17. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    Science.gov (United States)

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

    During ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that red, green, blue, and intensity parameters decreased due to the development of a global darker colour, while Heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    Science.gov (United States)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features in Parkinson's disease (PD) are the degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and dementia with Lewy bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, image fusion was used to combine SPECT and MR images via an intervening CT image taken by SPECT/CT. Mutual information (MI) between the CT and MR images was used for the registration. Six SPECT/CT and four MR scans of phantom materials were taken with varying directions. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
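    The mutual-information criterion used above for CT-MR registration can be sketched from a joint intensity histogram. This is the generic definition only, without the interpolation and optimiser loop a real registration pipeline adds; the bin count and test images are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) of two images, estimated from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of image a
    py = p.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p > 0                          # avoid log(0) on empty bins
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image shares far more information with itself than with noise,
# which is what drives MI-based registration toward alignment.
print(mutual_information(img, img) > mutual_information(img, noise))  # True
```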

  19. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols......, learning from weak labels, and interpretation and evaluation of results....

  20. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seem to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  1. Medical Image Analysis by Cognitive Information Systems - a Review.

    Science.gov (United States)

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and semantic processes are described as they are applied to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information (here, medical images), applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of the data; meaning is contained in information, for example in medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies. These images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks. This is very important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from the analyzed data sets; these features allow a new kind of analysis to be created.

  2. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    Science.gov (United States)

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study of criteria for analyzing and indexing digitized images, and of the different types of user queries made in newspaper image archives in Denmark. Concludes that it is necessary for the indexing to represent both a factual and an expressional…

  3. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    Science.gov (United States)

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    be devoted to morphological microsemiology (microscopic morphology semantics). Besides ensuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for correlating digital pathology with noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and is a characteristic intrinsic to digital pathology. The design of an operational integrative microscopy framework needs to focus on a scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology, such as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report-generation support) represents an important means of integrating big data into biomedicine. This insight reflects our vision through an instantiation of the essential bricks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics. © 2016 S. Karger AG, Basel.

  4. IN VITRO SCREENING OF DEVELOPMENTAL NEUROTOXICANTS IN RAT PRIMARY CORTICAL NEURONS USING HIGH CONTENT IMAGE

    Science.gov (United States)

    There is a need for more efficient and cost-effective methods for identifying, characterizing and prioritizing chemicals which may result in developmental neurotoxicity. One approach is to utilize in vitro test systems which recapitulate the critical processes of nervous system d...

  5. Toward high-content screening of mitochondrial morphology and membrane potential in living cells.

    Science.gov (United States)

    Iannetti, Eligio F; Willems, Peter H G M; Pellegrini, Mina; Beyrath, Julien; Smeitink, Jan A M; Blanchet, Lionel; Koopman, Werner J H

    2015-06-01

    Mitochondria are double-membrane organelles involved in various key cellular processes. Governed by dedicated protein machinery, mitochondria move and continuously fuse and divide. These "mitochondrial dynamics" are bi-directionally linked to mitochondrial and cellular functional state in space and time. Due to the action of the electron transport chain (ETC), the mitochondrial inner membrane displays an inside-negative membrane potential (Δψ). The latter is considered a functional readout of mitochondrial "health" and is required to sustain normal mitochondrial ATP production and mitochondrial fusion. During the last decade, live-cell microscopy strategies were developed for simultaneous quantification of Δψ and mitochondrial morphology. This revealed that ETC dysfunction, changes in Δψ and aberrations in mitochondrial structure often occur in parallel, suggesting that they are linked and represent potential targets for therapeutic intervention. Here we discuss how combining high-content and high-throughput strategies can be used for analysis of genetic and/or drug-induced effects at the level of individual organelles, cells and cell populations. This article is part of a Directed Issue entitled: Energy Metabolism Disorders and Therapies.

  6. A high-content platform to characterise human induced pluripotent stem cell lines

    Science.gov (United States)

    Leha, Andreas; Moens, Nathalie; Meleckyte, Ruta; Culley, Oliver J.; Gervasio, Mia K.; Kerz, Maximilian; Reimer, Andreas; Cain, Stuart A.; Streeter, Ian; Folarin, Amos; Stegle, Oliver; Kielty, Cay M.; Durbin, Richard; Watt, Fiona M.; Danovi, Davide

    2016-01-01

    Induced pluripotent stem cells (iPSCs) provide invaluable opportunities for future cell therapies as well as for studying human development, modelling diseases and discovering therapeutics. In order to realise the potential of iPSCs, it is crucial to comprehensively characterise cells generated from large cohorts of healthy and diseased individuals. The human iPSC initiative (HipSci) is assessing a large panel of cell lines to define cell phenotypes, dissect inter- and intra-line and donor variability and identify its key determinant components. Here we report the establishment of a high-content platform for phenotypic analysis of human iPSC lines. In the described assay, cells are dissociated and seeded as single cells onto 96-well plates coated with fibronectin at three different concentrations. This method allows assessment of cell number, proliferation, morphology and intercellular adhesion. Altogether, our strategy delivers robust quantification of phenotypic diversity within complex cell populations facilitating future identification of the genetic, biological and technical determinants of variance. Approaches such as the one described can be used to benchmark iPSCs from multiple donors and create novel platforms that can readily be tailored for disease modelling and drug discovery. PMID:26608109

  7. Automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, Marcel; Spreeuwers, Luuk; Quist, Marcel

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the

  8. Subsurface offset behaviour in velocity analysis with extended reflectivity images

    NARCIS (Netherlands)

    Mulder, W.A.

    2013-01-01

    Migration velocity analysis with the constant-density acoustic wave equation can be accomplished by the focusing of extended migration images, obtained by introducing a subsurface shift in the imaging condition. A reflector in a wrong velocity model will show up as a curve in the extended image. In

  9. Intrasubject registration for change analysis in medical imaging

    NARCIS (Netherlands)

    Staring, M.

    2008-01-01

    Image matching is important for the comparison of medical images. Comparison is of clinical relevance for the analysis of differences due to changes in the health of a patient. For example, when a disease is imaged at two time points, then one wants to know if it is stable, has regressed, or

  10. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  11. Image analysis for dental bone quality assessment using CBCT imaging

    Science.gov (United States)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is an X-ray imaging modality applied in dentistry. It can visualize the oral region in 3D at high resolution. CBCT jaw images carry information useful for the assessment of bone quality, which is often needed for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. The NH characteristics of normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed: the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n), the difference between teeth and bone NH peak values (Δp), and the ratio between teeth and bone NH ranges (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
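    The four test parameters can be written down directly once the teeth and bone regions have been extracted. A sketch assuming 8-bit intensities and a 64-bin normalized histogram; the bin count and the exact range-ratio definition are assumptions, since the abstract does not fix them:

```python
import numpy as np

def nh_parameters(teeth, bone, bins=64):
    """The four test parameters from the abstract, computed from pixel
    intensities of the teeth and inter-dental bone regions."""
    s = teeth.mean() - bone.mean()            # average-intensity difference
    n = bone.mean() / teeth.mean()            # average-intensity ratio
    ht, _ = np.histogram(teeth, bins=bins, range=(0, 255))
    hb, _ = np.histogram(bone, bins=bins, range=(0, 255))
    dp = ht.max() / ht.sum() - hb.max() / hb.sum()   # NH peak difference
    r = (teeth.max() - teeth.min()) / max(bone.max() - bone.min(), 1)
    return s, n, dp, r

# Toy regions: teeth pixels are brighter than bone pixels.
teeth = np.array([180.0, 200.0, 220.0, 200.0])
bone = np.array([90.0, 100.0, 110.0, 100.0])
s, n, dp, r = nh_parameters(teeth, bone)
print(s > 0 and 0 < n < 1)  # True
```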

  12. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Milin Zhang

    2010-01-01

    Full Text Available Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve the overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of separate image-capturing and image-compression processing units are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, in which image compression is performed during the image-capture phase prior to storage, referred to as compressive acquisition. High-performance sensor systems reported in recent years are also introduced. Performance analysis and comparison of the reported designs using different design paradigms are presented at the end.

  13. Technique of Hadamard transform microscope fluorescence image analysis

    Institute of Scientific and Technical Information of China (English)

    梅二文; 顾文芳; 曾晓斌; 陈观铨; 曾云鹗

    1995-01-01

    Hadamard transform spatial multiplexed imaging is combined with a fluorescence microscope, and an instrument for Hadamard transform microscope fluorescence image analysis is developed. Images acquired by this instrument simultaneously provide a wealth of useful information, including the three-dimensional Hadamard transform microscope cell fluorescence image, the fluorescence intensity and fluorescence distribution of a cell, the background signal intensity, and the signal/noise ratio.
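    The multiplexing principle behind Hadamard transform imaging, namely measuring weighted sums of all spatial elements and then inverting the transform, can be shown on a 1-D toy signal. This illustrates the principle only, not the instrument's optics:

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (a power of two), Sylvester construction."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Each encoded reading is a +/-1-weighted sum of all spatial elements;
# the scene is recovered with the inverse transform (H^T H = n I).
n = 8
scene = np.arange(n, dtype=float)      # toy 1-D "fluorescence" profile
H = sylvester_hadamard(n)
readings = H @ scene                   # multiplexed measurements
recovered = H.T @ readings / n         # demultiplexed scene
print(np.allclose(recovered, scene))   # True
```

Multiplexing many elements into each reading is what gives Hadamard imaging its signal-to-noise advantage over scanning one element at a time.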

  14. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an Introduction and 11 independent chapters, which are devoted to various new approaches of intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others are presenting the practical aspects and the...

  15. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  16. IMAGE ANALYSIS FOR MODELLING SHEAR BEHAVIOUR

    Directory of Open Access Journals (Sweden)

    Philippe Lopez

    2011-05-01

    Full Text Available Through laboratory research performed over the past ten years, many of the critical links between fracture characteristics and hydromechanical and mechanical behaviour have been established for individual fractures. One of the remaining challenges at the laboratory scale is to directly link fracture morphology to shear behaviour under changes in stress and shear direction. A series of laboratory experiments was performed on cement mortar replicas of a granite sample with a natural fracture perpendicular to the axis of the core. Results show that there is a strong relationship between the fracture's geometry and its mechanical behaviour under shear stress and the resulting damage. Image analysis, geostatistical, stereological and directional data techniques are applied in combination to the experimental data. The results highlight the role of the geometric characteristics of the fracture surfaces (surface roughness, size, shape, locations and orientations of the asperities to be damaged) in shear behaviour. A notable improvement in the understanding of shear is that shear behaviour is controlled by the apparent dip, in the shear direction, of the elementary facets forming the fracture.

  17. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    Science.gov (United States)

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  18. Hierarchical manifold learning for regional image analysis.

    Science.gov (United States)

    Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Joseph V; Rueckert, Daniel

    2014-02-01

    We present a novel method of hierarchical manifold learning which aims to automatically discover regional properties of image datasets. While traditional manifold learning methods have become widely used for dimensionality reduction in medical imaging, they suffer from only being able to consider whole images as single data points. We extend conventional techniques by additionally examining local variations, in order to produce spatially-varying manifold embeddings that characterize a given dataset. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate the utility of our method in two very different settings: 1) to learn the regional correlations in motion within a sequence of time-resolved MR images of the thoracic cavity; 2) to find discriminative regions of 3-D brain MR images associated with neurodegenerative disease.

  19. Some developments in multivariate image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    and classification. MIA considers all image pixels as objects and their color values (or spectra in the case of hyperspectral images) as variables. It thus gives data matrices with hundreds of thousands of samples in the case of laboratory-scale images, and even more for aerial photos, where the number of pixels could...... subspace have been considered with respect to MIA purposes. First of all, Robust PCA has been applied to several images with and without outliers. Having been proposed as a method to deal with high-dimensional data, it suits the needs of MIA very well. Also, several non-linear methods have been tried, including...
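    The unfolding step described above (pixels as objects, channel values as variables) followed by ordinary PCA can be sketched as follows. This is a generic MIA score-image computation, not the robust or non-linear variants the record goes on to mention:

```python
import numpy as np

def mia_pca_scores(img, n_components=2):
    """Unfold an (h, w, channels) image into a (h*w, channels) matrix
    (pixels as objects, channels as variables), run PCA via the
    covariance eigendecomposition, and fold the scores back into
    score images of shape (h, w, n_components)."""
    x = img.reshape(-1, img.shape[-1]).astype(float)
    x -= x.mean(axis=0)                  # mean-centre each variable
    cov = x.T @ x / (x.shape[0] - 1)
    _, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    loadings = vecs[:, ::-1][:, :n_components]
    return (x @ loadings).reshape(img.shape[:-1] + (n_components,))

rng = np.random.default_rng(1)
rgb = rng.random((16, 16, 3))            # a small RGB image
score_imgs = mia_pca_scores(rgb)
print(score_imgs.shape)  # (16, 16, 2)
```

Each score image can then be inspected or thresholded like an ordinary grey-level image, which is the core of the MIA workflow.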

  20. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.; Queiroz Feitosa, R.; van der Meer, F.D.; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extr

  1. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  3. Multi-spectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2011-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. In this study multi-spectral image analysis of pellets was performed using LDA, QDA, SNV and PCA on pixel level and mean value of pixels...
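    Of the preprocessing methods named, SNV (Standard Normal Variate) is simple enough to show in full: each pixel spectrum is centred and scaled by its own standard deviation, removing multiplicative scatter effects before classification. A generic sketch, where the (n_pixels, n_bands) layout is an assumption:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: centre and scale each pixel spectrum by
    its own mean and standard deviation, removing multiplicative
    scatter effects. spectra: (n_pixels, n_bands)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

x = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])  # same shape, different scale
out = snv(x)
# After SNV the two rows coincide: scaling differences are removed.
print(np.allclose(out[0], out[1]))  # True
```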

  4. Digital image processing and analysis for activated sludge wastewater treatment.

    Science.gov (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to yield a final measurement. Digital image processing and analysis offer a better alternative, not only for monitoring and characterizing the current state of activated sludge but also for predicting its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment, and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. The latter part introduces additional preprocessing procedures, such as z-stacking and image stitching, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. It is thus observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
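    A typical segmentation step for floc/filament images is global thresholding; Otsu's method is a common choice, shown here as a hedged stand-in for the chapter's surveyed techniques (the chapter itself does not prescribe this particular algorithm):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the grey level that maximises the
    between-class variance of the background/foreground split."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two well-separated intensity populations (background vs flocs).
img = np.concatenate([np.full(500, 40), np.full(200, 200)])
t = otsu_threshold(img)
print(40 < t <= 200)  # True: the threshold falls between the modes
```

Morphological parameters of the resulting binary flocs (area, perimeter, filament length) are then what get correlated with TSSol, SVI and COD.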

  5. Multimodal digital color imaging system for facial skin lesion analysis

    Science.gov (United States)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect of skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture information. In addition, UV-A-induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions simultaneously and comparably. In conclusion, the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  6. High-content screening of yeast mutant libraries by shotgun lipidomics

    DEFF Research Database (Denmark)

    Tarasov, Kirill; Stefanko, Adam; Casanovas, Albert;

    2014-01-01

    To identify proteins with a functional role in lipid metabolism and homeostasis we designed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. To this end, we combined culturing and lipid extraction in 96-well format, automated direct infusion nanoelectrospray... factor KAR4 precipitated distinct lipid metabolic phenotypes. These results demonstrate that the high-throughput shotgun lipidomics platform is a valid and complementary proxy for high-content screening of yeast mutant libraries.

  7. Analysis of Images from Experiments Investigating Fragmentation of Materials

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, C; Hurricane, O

    2007-09-10

    Image processing techniques have been used extensively to identify objects of interest in image data and extract representative characteristics for these objects. However, this can be a challenge due to the presence of noise in the images and the variation across images in a dataset. When the number of images to be analyzed is large, the algorithms used must also be relatively insensitive to the choice of parameters and lend themselves to partial or full automation. This not only avoids manual analysis, which can be time-consuming and error-prone, but also makes the analysis reproducible, thus enabling comparisons between images which have been processed in an identical manner. In this paper, we describe our approach to extracting features for objects of interest in experimental images. Focusing on the specific problem of fragmentation of materials, we show how we can extract statistics for the fragments and the gaps between them.
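A minimal sketch of the fragment-statistics idea, assuming a binary mask has already been produced by some segmentation step (the paper does not prescribe this exact procedure): label 4-connected components and collect their sizes.

```python
import numpy as np
from collections import deque

def label_fragments(mask):
    """4-connected component labeling via BFS; returns label image and sizes."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes = []
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # pixel already belongs to a fragment
        current += 1
        labels[start] = current
        queue, size = deque([start]), 1
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
                    size += 1
        sizes.append(size)
    return labels, sizes

# Two separated "fragments" in a toy binary image.
mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True      # 3x3 fragment
mask[6:9, 5:10] = True     # 3x5 fragment
labels, sizes = label_fragments(mask)
print(len(sizes), sorted(sizes))  # → 2 [9, 15]
```

From the sizes one can derive the fragment-count and size-distribution statistics the abstract refers to; gap statistics follow by labeling the inverted mask.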

  8. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, thus fulfilling the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  9. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    An estimate of the thickness of subcutaneous adipose tissue at differing positions around the body was required in a study examining body composition. To eliminate human error associated with the manual placement of markers for measurements and to facilitate the collection of data from a large...... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the midpoint of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...

  10. A linear mixture analysis-based compression for hyperspectral image analysis

    Energy Technology Data Exchange (ETDEWEB)

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
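The fully constrained estimator in the paper enforces both nonnegativity and a sum-to-one constraint on the abundances; the sketch below implements only the sum-to-one constrained least squares step, which has a closed form, on synthetic endmembers:

```python
import numpy as np

def scls_unmix(M, x):
    """Sum-to-one constrained least squares abundance estimate.

    Solves min ||M a - x||^2 subject to 1^T a = 1 via Lagrange multipliers.
    (The paper's fully constrained version additionally enforces a >= 0.)
    """
    G = np.linalg.inv(M.T @ M)
    a_ls = G @ M.T @ x                 # unconstrained least squares solution
    ones = np.ones(M.shape[1])
    correction = G @ ones * (1 - ones @ a_ls) / (ones @ G @ ones)
    return a_ls + correction

# Endmember spectra as columns (5 bands, 3 materials); values are synthetic.
rng = np.random.default_rng(2)
M = rng.uniform(0, 1, size=(5, 3))
a_true = np.array([0.5, 0.3, 0.2])     # abundances summing to one
x = M @ a_true                         # noise-free mixed pixel

a_hat = scls_unmix(M, x)
print(np.round(a_hat, 3), round(float(a_hat.sum()), 3))
```

Stacking `a_hat` over all pixels yields the abundance fractional images that the compression scheme then encodes instead of raw gray levels.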

  11. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    Science.gov (United States)

    2007-11-02

    Sponsoring organization: US Army Research, Development & Standardization Group (UK), PSC 802 Box 15, FPO AE 09499-1500.

  12. Whole-slide imaging and automated image analysis: considerations and opportunities in the practice of pathology.

    Science.gov (United States)

    Webster, J D; Dunstan, R W

    2014-01-01

    Digital pathology, the practice of pathology using digitized images of pathologic specimens, has been transformed in recent years by the development of whole-slide imaging systems, which allow for the evaluation and interpretation of digital images of entire histologic sections. Applications of whole-slide imaging include rapid transmission of pathologic data for consultations and collaborations, standardization and distribution of pathologic materials for education, tissue specimen archiving, and image analysis of histologic specimens. Histologic image analysis allows for the acquisition of objective measurements of histomorphologic, histochemical, and immunohistochemical properties of tissue sections, increasing both the quantity and quality of data obtained from histologic assessments. Currently, numerous histologic image analysis software solutions are commercially available. Choosing the appropriate solution is dependent on considerations of the investigative question, computer programming and image analysis expertise, and cost. However, all studies using histologic image analysis require careful consideration of preanalytical variables, such as tissue collection, fixation, and processing, and experimental design, including sample selection, controls, reference standards, and the variables being measured. The fields of digital pathology and histologic image analysis are continuing to evolve, and their potential impact on pathology is still growing. These methodologies will increasingly transform the practice of pathology, allowing it to mature toward a quantitative science. However, this maturation requires pathologists to be at the forefront of the process, ensuring the appropriate application of these methods and the validity of their results. Therefore, histologic image analysis and the field of pathology should co-evolve, creating a symbiotic relationship that results in high-quality, reproducible, objective data.

  13. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov random field (MRF) modeling, watershed segmentation and merging techniques was presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmented result is obtained based on the K-means clustering technique and the minimum distance. The region process is then modeled by an MRF to obtain an image that contains different intensity regions. The gradient values are calculated and the watershed technique is applied. The DIS calculation is performed for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about likely region boundaries for the next step (MRF), which gives an image containing all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that operate on the MRF-segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.
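The initial clustering step can be illustrated with a minimal K-means on gray levels (the DIS, MRF and watershed stages are not reproduced here); the two-region test image is synthetic:

```python
import numpy as np

def kmeans_gray(gray, k=2, iters=20, seed=0):
    """Minimal K-means on gray levels for an initial segmentation."""
    rng = np.random.default_rng(seed)
    values = gray.ravel().astype(float)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster center (minimum distance).
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels.reshape(gray.shape), np.sort(centers)

# Two-region synthetic image: dark left half, bright right half, plus noise.
img = np.zeros((20, 20))
img[:, 10:] = 200
img += np.random.default_rng(3).normal(0, 5, img.shape)

labels, centers = kmeans_gray(img, k=2)
print(np.round(centers))
```

In the paper's pipeline this coarse label image would then be refined by the MRF model and the watershed step to produce closed region boundaries.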

  14. Electromagnetic Time Reversal Imaging: Analysis and Experimentation

    Science.gov (United States)

    2010-04-26

    Biomedical Imaging: From Nano to Macro, ISBI'08, Paris, France, May 14-17, 2008 [6] Y. Jin, J. M. F. Moura, N. O'Donoughue, "Adaptive Time Reversal...Zhu, and Q. He, "Breast cancer detection by time reversal imaging," 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro...target (a galvanized steel sheet) is surrounded by a large amount of PVC rods. Our experiments showed that the collected EM data in frequency and

  15. Prototype for Meta-Algorithmic, Content-Aware Image Analysis

    Science.gov (United States)

    2015-03-01

    Prototype for Meta-Algorithmic, Content-Aware Image Analysis. University of Virginia, March 2015, Final Technical Report...Visual Object Recognition: A Review." IEEE Trans. on Pattern Analysis and Machine Intelligence, 2013. [32] Rakotomamonjy A., Bach F., Canu S., Y...Learning a Discriminative Dictionary for Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2651-2664

  16. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    Science.gov (United States)

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has been a highly active research area in medical image analysis over the past two decades. In this article, it is discussed how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improved understanding of the development of neurodegenerative disease is discussed, along with its potential for aiding in early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques to these reference data, are expected to play a key role. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Adaptive Local Image Registration: Analysis on Filter Size

    OpenAIRE

    Vishnukumar S; M.Wilscy

    2012-01-01

    Adaptive Local Image Registration is a local image registration method based on an adaptive filtering framework. A filter of appropriate size convolves with the reference image and gives the pixel values corresponding to the distorted image, and the filter is updated at each stage of the convolution. When the filter converges to the system model, it provides the registered image. The filter size plays an important role in this method. The analysis of the filter size is done using Peak Signal-to-Noise Ra...

  18. Multivariate image analysis for quality inspection in fish feed production

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg

    , or synthesised chemically. Common for both types is that they are relatively expensive in comparison to the other feed ingredients. This thesis investigates multi-variate data collection for visual inspection and optimisation of industrial production in the fish feed industry. Quality parameters focused on here...... are: pellet size, type and concentration level of astaxanthin in pellet coating, as well as astaxanthin type detected in salmonid fish. Methods used are three different devices for multi- and hyper-spectral imaging, together with shape analysis and multi-variate statistical analysis. The results...... of the work demonstrate a high potential of image analysis and spectral imaging for assessing the product quality of fish feed pellets, astaxanthin and fish meat. We show how image analysis can be used to inspect the pellet size, and how spectral imaging can be used to inspect the surface quality...

  19. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

    The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from scanning electron microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples...... foundation of the thesis falls in the areas of: 1) Mathematical Morphology; 2) Distance transforms and applications; and 3) Fractal geometry. Image analysis opens in general the possibility of a quantitative and statistically well-founded measurement of digital microscope images. Herein lies also the conditions...

  20. Theoretical analysis of radiographic images by nonstationary Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, K.; Uchida, S. (Gifu Univ. (Japan)); Yamada, I.

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.

  1. Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes

    Science.gov (United States)

    Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.
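The defining property of the nonstationary Poisson model, that the local noise variance equals the local mean rate, can be checked numerically. The disc-on-background fluence pattern below is an illustrative stand-in for a radiographic object, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Spatially varying X-ray fluence (a nonstationary rate): a bright disc on a
# dim background, mimicking an object imaged on a fluorescent screen.
yy, xx = np.mgrid[:128, :128]
rate = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2, 400.0, 100.0)

# Quantum-limited image: independent Poisson counts with the local rate.
counts = rng.poisson(rate)

# For a Poisson process the variance equals the mean, so the noise is
# signal-dependent -- unlike the additive background-noise model the paper
# compares against, where the variance is constant across the image.
inside = counts[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2]
outside = counts[(yy - 64) ** 2 + (xx - 64) ** 2 > 50 ** 2]
print(round(float(inside.mean())), round(float(inside.var())),
      round(float(outside.mean())), round(float(outside.var())))
```

The sample mean and variance agree region by region, which is exactly the nonstationarity the paper's Wiener-spectrum analysis accounts for.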

  2. Automated thermal mapping techniques using chromatic image analysis

    Science.gov (United States)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
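The two-color thermographic phosphor method reduces, at its core, to inverting a calibrated intensity-ratio-versus-temperature curve. The calibration values below are made up for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical calibration: intensity ratio of two phosphor emission bands
# versus surface temperature (monotonic, fabricated values).
cal_temp_K = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
cal_ratio = np.array([0.20, 0.45, 0.80, 1.30, 2.00])

def temperature_from_ratio(i_band1, i_band2):
    """Map a two-color intensity ratio to temperature via the calibration curve."""
    ratio = i_band1 / i_band2
    # Linear interpolation of the inverse calibration (ratio -> temperature).
    return np.interp(ratio, cal_ratio, cal_temp_K)

# Two wavelength-filtered images of the same 2x2 surface patch (synthetic).
band1 = np.array([[0.20, 0.45], [0.80, 2.00]])
band2 = np.ones((2, 2))
temps = temperature_from_ratio(band1, band2)
print(temps)
```

Taking the ratio of the two filtered images cancels common factors such as illumination and coating thickness, which is why the two-color scheme is preferred over a single-band intensity lookup.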

  3. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets.

  4. Introducing PLIA: Planetary Laboratory for Image Analysis

    Science.gov (United States)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  5. Social Media Image Analysis for Public Health

    OpenAIRE

    2015-01-01

    Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) ...

  6. Autonomous image data reduction by analysis and interpretation

    Science.gov (United States)

    Eberlein, Susan; Yates, Gigi; Ritter, Niles

    Image data is a critical component of the scientific information acquired by space missions. Compression of image data is required due to the limited bandwidth of the data transmission channel and limited memory space on the acquisition vehicle. This need becomes more pressing when dealing with multispectral data where each pixel may comprise 300 or more bytes. An autonomous, real-time, on-board image analysis system for an exploratory vehicle such as a Mars Rover is developed. The completed system will be capable of interpreting image data to produce reduced representations of the image, and of making decisions regarding the importance of data based on current scientific goals. Data from multiple sources, including stereo images, color images, and multispectral data, are fused into single image representations. Analysis techniques emphasize artificial neural networks. Clusters are described by their outlines and class values. These analysis and compression techniques are coupled with decision-making capacity for determining the importance of each image region. Areas determined to be noise or uninteresting can be discarded in favor of more important areas. Thus limited resources for data storage and transmission are allocated to the most significant images.

  7. Exploiting multi-context analysis in semantic image classification

    Institute of Scientific and Technical Information of China (English)

    TIAN Yong-hong; HUANG Tie-jun; GAO Wen

    2005-01-01

    As the popularity of digital images is rapidly increasing on the Internet, research on technologies for semantic image classification has become an important research topic. However, the well-known content-based image classification methods do not overcome the so-called semantic gap problem in which low-level visual features cannot represent the high-level semantic content of images. Image classification using visual and textual information often performs poorly since the extracted textual features are often too limited to accurately represent the images. In this paper, we propose a semantic image classification approach using multi-context analysis. For a given image, we model the relevant textual information as its multi-modal context, and regard the related images connected by hyperlinks as its link context. Two kinds of context analysis models, i.e., cross-modal correlation analysis and link-based correlation model, are used to capture the correlation among different modals of features and the topical dependency among images induced by the link structure. We propose a new collective classification model called relational support vector classifier (RSVC) based on the well-known Support Vector Machines (SVMs) and the link-based correlation model. Experiments showed that the proposed approach significantly improved classification accuracy over that of SVM classifiers using visual and/or textual features.

  8. Diagnostic imaging analysis of the impacted mesiodens

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jeong Jun; Choi, Bo Ram; Jeong, Hwan Seok; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2010-06-15

    The research was performed to predict the three-dimensional relationship between the impacted mesiodens and the maxillary central incisors and the proximity with the anatomic structures by comparing their panoramic images with the CT images. Among the patients visiting Seoul National University Dental Hospital from April 2003 to July 2007, those with mesiodens were selected (154 mesiodens of 120 patients). The numbers, shapes, orientation and positional relationship of mesiodens with maxillary central incisors were investigated in the panoramic images. The proximity with the anatomical structures and complications were investigated in the CT images as well. The sex ratio (M : F) was 2.28 : 1 and the mean number of mesiodens per patient was 1.28. Conical shape was seen in 84.4% and inverted orientation in 51.9%. There were more cases of anatomical structure encroachment, especially on the nasal floor and nasopalatine duct, when the mesiodens was not superimposed with the central incisor. There were, however, many cases of nasopalatine duct encroachment when the mesiodens was superimposed with the apical 1/3 of the central incisor (52.6%). Delayed eruption (55.6%), crown rotation (66.7%) and crown resorption (100%) were observed when the mesiodens was superimposed with the crown of the central incisor. It is possible to predict the three-dimensional relationship between the impacted mesiodens and the maxillary central incisors in the panoramic images, but further details should be confirmed by CT images when necessary.

  9. Efficiency analysis of color image filtering

    Directory of Open Access Journals (Sweden)

    Egiazarian Karen

    2011-01-01

    This article addresses under which conditions filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, where the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, is practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
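The output mean square error and the related PSNR used to judge filtering efficiency can be sketched as follows; the "filtered" image here is a stand-in with a smaller residual error, not an actual denoiser:

```python
import numpy as np

def mse(a, b):
    """Output mean square error between reference and test images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

rng = np.random.default_rng(5)
clean = np.full((64, 64), 128.0)
noisy = clean + rng.normal(0, 10, clean.shape)     # i.i.d. additive noise
filtered = clean + rng.normal(0, 4, clean.shape)   # stand-in for a denoiser's residual error

print(round(mse(clean, noisy)), round(mse(clean, filtered)))
print(round(psnr(clean, noisy), 1), round(psnr(clean, filtered), 1))
```

The article's point is that a lower output MSE (higher PSNR) does not automatically mean a *visible* improvement, which is why it additionally relies on visual quality metrics.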

  10. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...

  11. Comparing methods for analysis of biomedical hyperspectral image data

    Science.gov (United States)

    Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.

    2017-02-01

    Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.

  12. Pattern recognition software and techniques for biological image analysis.

    Directory of Open Access Journals (Sweden)

    Lior Shamir

    Full Text Available The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  13. Pattern recognition software and techniques for biological image analysis.

    Science.gov (United States)

    Shamir, Lior; Delaney, John D; Orlov, Nikita; Eckley, D Mark; Goldberg, Ilya G

    2010-11-24

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  14. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we tried to apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
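The LBP step mentioned above is simple enough to sketch: each interior pixel is encoded by comparing its 8 neighbours to the centre value, and the histogram of the resulting 8-bit codes serves as a texture feature vector. This is a minimal illustration on a toy intensity grid, not the paper's full LBP-plus-wavelet pipeline.

```python
# Minimal local binary pattern (LBP) sketch: code each interior pixel by
# thresholding its 8 neighbours against the centre, then histogram the codes.

def lbp_codes(img):
    h, w = len(img), len(img[0])
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    return codes

def histogram(codes, bins=256):
    hist = [0] * bins
    for c in codes:
        hist[c] += 1
    return hist

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(lbp_codes(img))  # prints [120]: only the four brighter neighbours set their bits
```

Histograms of these codes computed over image patches are what a classifier would then compare between normal and abnormal scar tissue.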

  15. [Determination of high content of tin in geochemical samples by solid emission spectrometry].

    Science.gov (United States)

    Yao, Jian-Zhen; Hao, Zhi-Hong; Tang, Rui-Ling; Li, Xiao-Jing; Li, Wen-Ge; Zhang, Qin

    2013-11-01

A method for the determination of high contents of tin in geochemical samples by solid emission spectrometry is presented. A dedicated spectrum standard series for high tin contents was developed. K2S2O7, NaF, Al2O3 and carbon powder were used as buffers, Ge was used as internal standard, and the sample/matrix/buffer ratio was 1 : 1 : 2. A weakly sensitive line (Sn 242.170 0 nm) was used as the analytical line. The techniques of vertical electrodes, AC arc overlapped spectrograph exposure, interception of the exposure, quantitative computer spectrum translation and background correction were used. The determination range is 100-22 350 μg/g, the detection limit is 16.64 μg/g, and the precision (RSD, n = 12) is 4.11%-6.46%. The accuracy of the method has been verified by determination of high tin contents in national geochemical standard samples, and the results are in agreement with certified values. The method can be used directly, without dilution, for measuring high tin contents in geochemical samples; it greatly raises the upper limit for the determination of tin by solid emission spectroscopy and has practical value.
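The quantification step behind an internal-standard method of this kind can be sketched as a calibration-curve fit: the log of the analyte/internal-standard intensity ratio (here Sn/Ge) is, to first order, linear in the log of concentration, so standards define a line that is inverted for unknowns. All numbers below are invented for illustration; the paper's actual calibration series is not reproduced.

```python
import math

# Hypothetical internal-standard calibration sketch: fit log(ratio) vs
# log(concentration) on standards, then invert the line for an unknown.

def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

conc_std  = [100, 1000, 10000]   # invented standard concentrations, ug/g
ratio_std = [0.05, 0.5, 5.0]     # invented Sn/Ge intensity ratios

xs = [math.log10(c) for c in conc_std]
ys = [math.log10(r) for r in ratio_std]
slope, intercept = fit_line(xs, ys)

def concentration(ratio):
    """Invert the calibration line for an unknown intensity ratio."""
    return 10 ** ((math.log10(ratio) - intercept) / slope)

print(round(concentration(1.58), 1))  # back out an unknown sample's concentration
```

Using an internal standard (the ratio, rather than the raw Sn intensity) is what makes the fit robust to shot-to-shot excitation variation.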

  16. Image analysis of moving seeds in an indented cylinder

    DEFF Research Database (Denmark)

    Buus, Ole; Jørgensen, Johannes Ravn

    2010-01-01

-Spline surfaces. Using image analysis, the seeds will be tracked using a Kalman filter, and the 2D trajectory, length, velocity, weight, and rotation will be sampled. We expect a high correspondence between seed length and certain spatially optimal seed trajectories. This work is done in collaboration with Westrup...... work we will seek to understand more about the internal dynamics of the indented cylinder. We will apply image analysis to observe the movement of seeds in the indented cylinder. This work lays the groundwork for future studies into the application of image analysis as a tool for autonomous...
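The Kalman-filter tracking mentioned above alternates a motion prediction with a measurement update each frame. The sketch below is a 1-D constant-velocity filter with invented noise levels, not the authors' 2-D tracker, but it shows the predict/update cycle that smooths noisy per-frame seed positions.

```python
# Minimal 1-D constant-velocity Kalman filter: state is (position, velocity),
# each frame we predict forward and then correct with a noisy position reading.

def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    x, v = measurements[0], 0.0          # state: position, velocity
    p00, p01, p11 = 1.0, 0.0, 1.0       # covariance [[p00, p01], [p01, p11]]
    estimates = []
    for z in measurements[1:]:
        # predict step: constant-velocity motion model plus process noise q
        x += v * dt
        p00 += dt * (2 * p01 + dt * p11) + q
        p01 += dt * p11
        p11 += q
        # update step: blend in measurement z with measurement noise r
        k0 = p00 / (p00 + r)
        k1 = p01 / (p00 + r)
        innov = z - x
        x += k0 * innov
        v += k1 * innov
        p11 -= k1 * p01
        p01 -= k1 * p00
        p00 -= k0 * p00
        estimates.append(x)
    return estimates

# a "seed" moving at ~2 px/frame with small measurement jitter (invented data)
zs = [0.0, 2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2]
est = kalman_track(zs)
print(round(est[-1], 2))  # close to the true position of ~14
```

The filter's velocity estimate converges toward the true 2 px/frame after a few frames, which is what makes sampled trajectory and velocity features stable despite per-frame jitter.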

  17. Applications of Digital Image Analysis in Experimental Mechanics

    DEFF Research Database (Denmark)

    Lyngbye, J. : Ph.D.

The present thesis "Application of Digital Image Analysis in Experimental Mechanics" has been prepared as a part of Janus Lyngbye's Ph.D. study during the period December 1988 to June 1992 at the Department of Building Technology and Structural Engineering, University of Aalborg, Denmark....... In this thesis attention will be focused on optimal use and analysis of the information in digital images. This is realized during investigation and application of parametric methods in digital image analysis. The parametric methods will be implemented in applications representative for the area of experimental...

  18. Uncooled LWIR imaging: applications and market analysis

    Science.gov (United States)

    Takasawa, Satomi

    2015-05-01

The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both reduction in pixel pitch and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market, where uncooled LWIR imaging sensors with sensitivity as close as possible to that of cooled sensors are required, while the other is a low-end market driven by miniaturization and price reduction. Especially in the latter case, approaches towards the consumer market have recently appeared, such as the application of uncooled LWIR imaging sensors to night vision for automobiles and smart phones. The appearance of this kind of commodity will surely change existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies, supplying components and materials such as lens and getter materials, to enter the consumer market.

  19. Analysis of filtering techniques and image quality in pixel duplicated images

    Science.gov (United States)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2009-08-01

When images undergo filtering operations, valuable information can be lost besides the intended noise or frequencies due to averaging of neighboring pixels. When the image is enlarged by duplicating pixels, such filtering effects can be reduced and more information retained, which could be critical when analyzing image content automatically. Analysis of retinal images could reveal many diseases at an early stage, as long as minor changes that depart from a normal retinal scan can be identified and enhanced. In this paper, typical filtering techniques are applied to an early stage diabetic retinopathy image which has undergone digital pixel duplication. The same techniques are applied to the original images for comparison. The effects of filtering are then demonstrated for both pixel duplicated and original images to show the information retention capability of pixel duplication. Image quality is computed based on published metrics. Our analysis shows that pixel duplication is effective in retaining information on smoothing operations such as mean filtering in the spatial domain, as well as lowpass and highpass filtering in the frequency domain, based on the filter window size. Blocking effects due to image compression and pixel duplication become apparent in frequency analysis.
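The information-retention effect described above has a simple intuition: a fixed-size mean filter corrupts roughly the same absolute number of samples around an edge, so after 2x duplication a smaller fraction of the image is altered. The sketch below demonstrates this in 1-D for brevity (the paper works with 2-D retinal images).

```python
# Why pixel duplication softens smoothing: a 3-sample mean filter changes
# the same absolute number of samples around a step edge, so after 2x
# duplication a smaller *fraction* of the signal is altered.

def mean_filter(sig, w=3):
    half = w // 2
    out = []
    for i in range(len(sig)):
        lo, hi = max(0, i - half), min(len(sig), i + half + 1)
        out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

def altered_fraction(sig):
    filt = mean_filter(sig)
    changed = sum(1 for a, b in zip(sig, filt) if a != b)
    return changed / len(sig)

edge = [0, 0, 0, 10, 10, 10]                      # step edge
duplicated = [v for v in edge for _ in range(2)]  # 2x pixel duplication

print(altered_fraction(edge), altered_fraction(duplicated))
```

On this toy edge the altered fraction halves after duplication, mirroring the paper's observation that duplicated images retain more information under spatial smoothing.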

  20. Basic research planning in mathematical pattern recognition and image analysis

    Science.gov (United States)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  1. An Analysis of the Magneto-Optic Imaging System

    Science.gov (United States)

    Nath, Shridhar

    1996-01-01

    The Magneto-Optic Imaging system is being used for the detection of defects in airframes and other aircraft structures. The system has been successfully applied to detecting surface cracks, but has difficulty in the detection of sub-surface defects such as corrosion. The intent of the grant was to understand the physics of the MOI better, in order to use it effectively for detecting corrosion and for classifying surface defects. Finite element analysis, image classification, and image processing are addressed.

  2. Computer Vision-Based Image Analysis of Bacteria.

    Science.gov (United States)

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

Microscopy is an essential tool for studying bacteria, but it is today mostly used in a qualitative or, at best, semi-quantitative manner, often involving time-consuming manual analysis. This makes it difficult to assess the importance of individual bacterial phenotypes, especially when there are only subtle differences in features such as shape, size, or signal intensity, which are typically very difficult for the human eye to discern. With computer vision-based image analysis - where computer algorithms interpret image data - it is possible to achieve objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation and analysis that can be relatively easily implemented for use in bacterial research.
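A typical first step in such automated pipelines is separating bacteria from background by automatic thresholding. The sketch below implements Otsu's method, a standard choice (the chapter's specific methods are not reproduced here), on an invented bimodal pixel list: pick the intensity cut that maximises between-class variance, then threshold to get a foreground mask.

```python
# Otsu's automatic threshold: exhaustively score every cut by the
# between-class variance of the two resulting intensity classes.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# invented data: dim background around 20, bright bacteria around 200
pixels = [18, 20, 22, 19, 21, 20, 198, 202, 200, 199]
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]
print(t, sum(mask))  # threshold falls between the two modes; 4 foreground pixels
```

Once a mask exists, per-object features (area, shape, intensity) can be measured reproducibly, which is the quantification step the abstract argues for.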

  3. A high-content small molecule screen identifies sensitivity of glioblastoma stem cells to inhibition of polo-like kinase 1.

    Directory of Open Access Journals (Sweden)

    Davide Danovi

Full Text Available Glioblastoma multiforme (GBM) is the most common primary brain cancer in adults and there are few effective treatments. GBMs contain cells with molecular and cellular characteristics of neural stem cells that drive tumour growth. Here we compare responses of human glioblastoma-derived neural stem (GNS) cells and genetically normal neural stem (NS) cells to a panel of 160 small molecule kinase inhibitors. We used live-cell imaging and high content image analysis tools and identified JNJ-10198409 (J101) as an agent that induces mitotic arrest at prometaphase in GNS cells but not NS cells. Antibody microarrays and kinase profiling suggested that J101 responses are triggered by suppression of the active phosphorylated form of polo-like kinase 1 (Plk1; phospho-T210), with resultant spindle defects and arrest at prometaphase. We found that potent and specific Plk1 inhibitors already in clinical development (BI 2536, BI 6727 and GSK 461364) phenocopied J101 and were selective against GNS cells. Using a porcine brain endothelial cell blood-brain barrier model we also observed that these compounds exhibited greater blood-brain barrier permeability in vitro than J101. Our analysis of mouse mutant NS cells (INK4a/ARF(-/-) or p53(-/-)), as well as the acute genetic deletion of p53 from a conditional p53 floxed NS cell line, suggests that the sensitivity of GNS cells to BI 2536 or J101 may be explained by the lack of a p53-mediated compensatory pathway. Together these data indicate that GBM stem cells are acutely susceptible to proliferative disruption by Plk1 inhibitors and that such agents may have immediate therapeutic value.

  4. ANALYSIS OF MULAN’S IMAGE

    Directory of Open Access Journals (Sweden)

    Chen Chongjie

    2010-05-01

Full Text Available Mulan is an ancient Chinese heroine. The story shows that she is like other women, having a personal life, but she also has a heart that loves her country and her parents. She always thinks about her family, society, and country. This article is based on library research, using the Mulan poem and a folklore entitled Mulan Pergi Berperang as media to analyze Mulan's image. It can be said that even though at that time there was no concept of a gender perspective, Mulan was able to take her father's place and go to war. She made many contributions to her country, society, and family. The analysis shows that Mulan has three images: as a heroine, as a daughter of her family, and as a self-image or leader for women. Having good character and the spirit to love country, people, and family makes her equal to men, and makes her respected by people in every generation.

  5. SLAR image interpretation keys for geographic analysis

    Science.gov (United States)

    Coiner, J. C.

    1972-01-01

    A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.
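A dichotomous interpretation key of the kind described above is, computationally, a binary decision tree: each node asks one yes/no question about image tone or texture and the leaves name classes. The questions and classes in this sketch are invented placeholders, not the paper's actual keys.

```python
# A dichotomous key as a nested tuple: (question, yes-branch, no-branch),
# with string leaves naming the interpreted land-cover class.

KEY = ("bright radar return?",
       ("coarse texture?", "urban", "bare field"),
       ("linear features?", "row crop", "pasture"))

def classify(node, answers):
    if isinstance(node, str):          # leaf: a land-cover class
        return node
    question, yes_branch, no_branch = node
    branch = yes_branch if answers[question] else no_branch
    return classify(branch, answers)

obs = {"bright radar return?": False, "linear features?": True}
print(classify(KEY, obs))  # prints "row crop": follows the no, then yes path
```

Encoding keys this way is also what makes the "key-based automated decision rules" mentioned in the abstract straightforward: the same structure serves both the human interpreter and the machine.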

  6. Automated analysis of protein subcellular location in time series images.

    Science.gov (United States)

    Hu, Yanhua; Osuna-Highley, Elvira; Hua, Juchang; Nowicki, Theodore Scott; Stolz, Robert; McKayle, Camille; Murphy, Robert F

    2010-07-01

Image analysis, machine learning and statistical modeling have become well established for the automatic recognition and comparison of the subcellular locations of proteins in microscope images. By using a comprehensive set of features describing static images, major subcellular patterns can be distinguished with near perfect accuracy. We now extend this work to time series images, which contain both spatial and temporal information. The goal is to use temporal features to improve recognition of protein patterns that are not fully distinguishable by their static features alone. We have adopted and designed five sets of features for capturing temporal behavior in 2D time series images, based on object tracking, temporal texture, normal flow, Fourier transforms and autoregression. Classification accuracy on an image collection for 12 fluorescently tagged proteins was increased when temporal features were used in addition to static features. Temporal texture, normal flow and Fourier transform features were most effective at increasing classification accuracy. We therefore extended these three feature sets to 3D time series images, but observed no significant improvement over results for 2D images. The methods for 2D and 3D temporal pattern analysis do not require segmentation of images into single cell regions, and are suitable for automated high-throughput microscopy applications. Images, source code and results will be available upon publication at http://murphylab.web.cmu.edu/software. Contact: murphy@cmu.edu.
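Of the five temporal feature families listed above, autoregression is the simplest to sketch: the lag-1 AR coefficient of a pixel's intensity trace summarises how persistent its signal is from frame to frame. The least-squares fit below, on invented traces, is an illustration of the idea rather than the paper's feature set.

```python
# Lag-1 autoregression coefficient as a temporal feature: fit
# x[t] ~ a * x[t-1] on mean-centred data by least squares.

def ar1_coefficient(series):
    m = sum(series) / len(series)
    xs = [v - m for v in series]
    num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
    den = sum(v * v for v in xs[:-1])
    return num / den

smooth  = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2]     # slowly varying signal: a near +1
flicker = [1, 5, 1, 5, 1, 5, 1, 5, 1, 5]     # frame-to-frame flicker: a near -1

print(round(ar1_coefficient(smooth), 2), round(ar1_coefficient(flicker), 2))
```

A protein pattern with slow redistribution and one with rapid blinking can have identical static histograms yet opposite AR coefficients, which is exactly the kind of ambiguity temporal features resolve.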

  7. A Survey on Deep Learning in Medical Image Analysis

    NARCIS (Netherlands)

    Litjens, G.J.; Kooi, T.; Ehteshami Bejnordi, B.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Laak, J.A.W.M. van der; Ginneken, B. van; Sanchez, C.I.

    2017-01-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared

  8. Higher Education Institution Image: A Correspondence Analysis Approach.

    Science.gov (United States)

    Ivy, Jonathan

    2001-01-01

    Investigated how marketing is used to convey higher education institution type image in the United Kingdom and South Africa. Using correspondence analysis, revealed the unique positionings created by old and new universities and technikons in these countries. Also identified which marketing tools they use in conveying their image. (EV)

  9. Spinal X-ray image analysis in scoliosis

    NARCIS (Netherlands)

    Sardjono, Tri Arief

    2007-01-01

    In this thesis new image analysis methods are discussed to determine the curvature of scoliotic patients characterised by the Cobb angle and to enhance the vertebral parts based on features from a frontal X-ray image. Chapter 1 provides some background information on scoliosis, how to diagnose it, t

  10. Subsurface offset behaviour in velocity analysis with extended reflectivity images

    NARCIS (Netherlands)

    Mulder, W.A.

    2012-01-01

    Migration velocity analysis with the wave equation can be accomplished by focusing of extended migration images, obtained by introducing a subsurface offset or shift. A reflector in the wrong velocity model will show up as a curve in the extended image. In the correct model, it should collapse to a

  11. Four challenges in medical image analysis from an industrial perspective.

    Science.gov (United States)

    Weese, Jürgen; Lorenz, Cristian

    2016-10-01

Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient work flow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the on-going need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications.

  12. Disability in Physical Education Textbooks: An Analysis of Image Content

    Science.gov (United States)

    Taboas-Pais, Maria Ines; Rey-Cao, Ana

    2012-01-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…

  13. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data

  14. 3-dimensional terahertz imaging and analysis of historical art pieces

    DEFF Research Database (Denmark)

    Dandolo, Corinna Ludovica Koch; Jepsen, Peter Uhd

Imaging with terahertz (THz) waves offers the capability of seeing through traditionally opaque materials while maintaining a spatial resolution comparable to that of the naked eye. The use of ultrashort THz pulses allows for depth-resolved imaging by time-of-flight analysis of reflected signals...

  15. Computerized comprehensive data analysis of Lung Imaging Database Consortium (LIDC)

    OpenAIRE

    Tan, Jun; Pu, Jiantao; Zheng, Bin; Wang, Xingwei; Leader, Joseph K.

    2010-01-01

Purpose: The Lung Image Database Consortium (LIDC) is the largest public CT image database of lung nodules. In this study, the authors present a comprehensive and up-to-date analysis of this dynamically growing database, carried out with the help of a computerized tool, aiming to assist researchers in optimally using this database for lung cancer related investigations.

  16. VIDA: an environment for multidimensional image display and analysis

    Science.gov (United States)

    Hoffman, Eric A.; Gnanaprakasam, Daniel; Gupta, Krishanu B.; Hoford, John D.; Kugelmass, Steven D.; Kulawiec, Richard S.

    1992-06-01

Since the first dynamic volumetric studies were done in the early 1980s on the dynamic spatial reconstructor (DSR), there has been a surge of interest in volumetric and dynamic imaging using a number of tomographic techniques. Knowledge gained in handling DSR image data has readily transferred to the current use of a number of other volumetric and dynamic imaging modalities including cine and spiral CT, MR, and PET. This in turn has led to our development of a new image display and quantitation package which we have named VIDA™ (volumetric image display and analysis). VIDA is written in C, runs under the UNIX™ operating system, and uses the XView toolkit to conform to the Open Look™ graphical user interface specification. A shared memory structure has been designed which allows for the manipulation of multiple volumes simultaneously. VIDA utilizes a windowing environment and allows execution of multiple processes simultaneously. Available programs include: oblique sectioning, volume rendering, region of interest analysis, interactive image segmentation/editing, algebraic image manipulation, conventional cardiac mechanics analysis, homogeneous strain analysis, tissue blood flow evaluation, etc. VIDA is built modularly, allowing new programs to be developed and integrated easily. An emphasis has been placed upon image quantitation for the purpose of physiological evaluation.

  17. The ImageJ ecosystem: An open platform for biomedical image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem.

  18. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, KL

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition, smooth

  19. Exploratory matrix factorization for PET image analysis.

    Science.gov (United States)

    Kodewitz, A; Keck, I R; Tomé, A M; Lang, E W

    2010-01-01

Features are extracted from PET images employing exploratory matrix factorization techniques such as nonnegative matrix factorization (NMF). Appropriate features are fed into classifiers such as a support vector machine or a random forest tree classifier. Automatic feature extraction and classification are achieved with a high classification rate; the approach is robust and reliable and can help in an early diagnosis of Alzheimer's disease.
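NMF factors a nonnegative data matrix V into nonnegative factors W and H, so each sample is an additive mix of learned parts; the columns of W (or rows of H) then serve as features. The sketch below uses the standard Lee-Seung multiplicative updates on a tiny invented matrix; it illustrates the decomposition the abstract names, not the paper's PET pipeline.

```python
import random

# Nonnegative matrix factorization, V ~ W.H, via multiplicative updates.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def nmf(v, rank, iters=200, eps=1e-9):
    rng = random.Random(0)
    n, m = len(v), len(v[0])
    w = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    h = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        wt = transpose(w)
        num, den = matmul(wt, v), matmul(matmul(wt, w), h)
        h = [[h[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(m)] for i in range(rank)]
        ht = transpose(h)
        num, den = matmul(v, ht), matmul(w, matmul(h, ht))
        w = [[w[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(rank)] for i in range(n)]
    return w, h

v = [[1, 2, 0], [2, 4, 0], [0, 0, 3]]   # rank-2 nonnegative matrix (invented)
w, h = nmf(v, rank=2)
approx = matmul(w, h)
err = sum((v[i][j] - approx[i][j]) ** 2 for i in range(3) for j in range(3))
print(round(err, 4))  # small reconstruction error; all factors stay nonnegative
```

Because the updates only ever multiply nonnegative quantities, nonnegativity is preserved automatically, which is what makes the factors interpretable as additive image parts.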

  20. Wound Image Analysis Using Contour Evolution

    Directory of Open Access Journals (Sweden)

    K. Sundeep Kumar

    2014-05-01

Full Text Available The aim of the algorithm described in this paper is to segment wound images from normal tissue and classify them according to wound type. The segmentation of wounds is based on color representation, followed by a grayscale segmentation algorithm built on a mathematical stack approach. Accurate classification of wounds and analysis of the wound healing process are critical tasks for patient care and for reducing health costs at hospitals. The tissue uniformity and flatness lead to a simplified approach but require multispectral imaging for enhanced wound delineation. The Contour Evolution method, which uses multispectral imaging, replaces more complex tools such as SVM supervised classification, as no training step is required. In Contour Evolution, classification can be done by clustering color information with a differential quantization algorithm, using the color centroids of small squares taken from the segmented part of the wound image in the (C1, C2) plane, where C1 and C2 are two chrominance components. Wound healing is identified by measuring the size of the wound through various means, such as contact and noncontact methods. The wound tissue proportion is also estimated by a qualitative visual assessment based on the red-yellow-black code. Moreover, involving the full spectral response of the tissue, and not only the RGB components, provides higher discrimination for separating healed epithelial tissue from granulation tissue.
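The chrominance-clustering idea above can be sketched with a plain 2-means loop standing in for the differential quantization algorithm (which is not reproduced here): the (C1, C2) centroids of small squares form point clouds that separate by tissue type. The two point groups below are invented, loosely mimicking reddish granulation versus yellowish tissue.

```python
# Plain 2-means clustering of (C1, C2) chrominance centroids: assign each
# point to the nearer cluster centre, recompute centres, repeat.

def kmeans2(points, iters=20):
    c0, c1 = points[0], points[-1]            # crude initialisation
    for _ in range(iters):
        g0, g1 = [], []
        for p in points:
            d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
            d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
            (g0 if d0 <= d1 else g1).append(p)
        c0 = (sum(p[0] for p in g0) / len(g0), sum(p[1] for p in g0) / len(g0))
        c1 = (sum(p[0] for p in g1) / len(g1), sum(p[1] for p in g1) / len(g1))
    return c0, c1

points = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15),   # invented "red" tissue squares
          (0.2, 0.7), (0.3, 0.8), (0.25, 0.75)]   # invented "yellow" tissue squares
c_red, c_yellow = kmeans2(points)
print(tuple(round(v, 2) for v in c_red), tuple(round(v, 2) for v in c_yellow))
```

Like the method in the abstract, this requires no training step: cluster structure in the chrominance plane alone drives the tissue labelling.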

  1. Enhanced Image Analysis Using Cached Mobile Robots

    Directory of Open Access Journals (Sweden)

    Kabeer Mohammed

    2012-11-01

Full Text Available In the field of artificial intelligence, image processing plays a vital role in decision making. Nowadays, mobile robots work as a network sharing a centralized database. All image inputs are compared against this database and a decision is made. In some cases the centralized database is on the other side of the globe, and mobile robots compare the input image through a satellite link; this sometimes results in delays in decision making, which may result in catastrophe. This research paper is about how to make image processing in mobile robots less time consuming and decision making faster. It compares the search techniques currently employed with the optimum search method that we propose. Nowadays, mobile robots are extensively used in environments that are dangerous to human beings. In these dangerous situations, quick decision making makes the difference between hit and miss, and can determine whether the day-to-day tasks performed by mobile robots succeed or fail.

  2. Electron Microscopy and Image Analysis for Selected Materials

    Science.gov (United States)

    Williams, George

    1999-01-01

This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observing, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostok, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high quality, high resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  3. Segmentation and learning in the quantitative analysis of microscopy images

    Science.gov (United States)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  4. Determination of fish gender using fractal analysis of ultrasound images

    DEFF Research Database (Denmark)

    McEvoy, Fintan J.; Tomkiewicz, Jonna; Støttrup, Josianne;

    2009-01-01

    The gender of cod Gadus morhua can be determined by considering the complexity in their gonadal ultrasonographic appearance. The fractal dimension (DB) can be used to describe this feature in images. B-mode gonadal ultrasound images in 32 cod, where gender was known, were collected. Fractal...... by subjective analysis alone. The mean (and standard deviation) of the fractal dimension DB for male fish was 1.554 (0.073) while for female fish it was 1.468 (0.061); the difference was statistically significant (P=0.001). The area under the ROC curve was 0.84 indicating the value of fractal analysis in gender...... result. Fractal analysis is useful for gender determination in cod. This or a similar form of analysis may have wide application in veterinary imaging as a tool for quantification of complexity in images...
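The box-counting estimate of the fractal dimension D_B that the study applies to gonadal ultrasound images can be sketched as follows. The box sizes and the synthetic test shape are illustrative assumptions, not the paper's data or exact procedure:

```python
import numpy as np

def box_count_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension D_B of a binary
    image: the slope of log(count) versus log(1/size), where count is
    the number of boxes containing at least one foreground pixel."""
    mask = np.asarray(mask, dtype=np.uint8)
    h, w = mask.shape
    sizes = np.array(box_sizes)
    counts = []
    for s in sizes:
        # Sum the image over an s x s grid of boxes, then count the
        # boxes that contain any foreground at all.
        grid = np.add.reduceat(
            np.add.reduceat(mask, np.arange(0, h, s), axis=0),
            np.arange(0, w, s), axis=1)
        counts.append(int((grid > 0).sum()))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Sanity check on a synthetic shape: a filled square should come out
# near dimension 2 (a thin curve would come out near 1).
square = np.zeros((64, 64), dtype=bool)
square[8:56, 8:56] = True
d = box_count_dimension(square)
```

On real ultrasound data, a segmentation or threshold step would be needed first to obtain the binary mask.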

  5. A performance analysis system for MEMS using automated imaging methods

    Energy Technology Data Exchange (ETDEWEB)

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  6. Analysis Operator Learning and Its Application to Image Reconstruction

    CERN Document Server

    Hawe, Simon; Diepold, Klaus

    2012-01-01

    Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well-established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends critically on the choice of a suitable operator. In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an $\ell_p$-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constrai...
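The analysis model described above can be illustrated with the simplest hand-crafted analysis operator, a finite-difference matrix: applied to a piecewise-constant signal, its output is sparse (nonzero only at the jumps), which is precisely the property a learned operator is trained to reproduce on its signal class. The signal values are arbitrary illustrations:

```python
import numpy as np

# Analysis model: z = Omega @ x should be sparse for signals in the class.
# Here Omega is the finite-difference operator (each row computes
# x[i+1] - x[i]), a classical hand-crafted analysis operator.
n = 75
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# A piecewise-constant signal with two jumps
x = np.concatenate([np.full(20, 1.0), np.full(30, 3.0), np.full(25, 2.0)])

z = Omega @ x  # nonzero only at the two jump positions (indices 19 and 49)
```

Learning replaces this fixed Omega with a matrix optimized (here, via manifold-constrained $\ell_p$ minimization) so that z is sparse on the training images.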

  7. ANALYSIS OF MULTIPATH PIXELS IN SAR IMAGES

    Directory of Open Access Journals (Sweden)

    J. W. Zhao

    2016-06-01

    Full Text Available As the received radar signal is the sum of signal contributions overlaid in a single pixel regardless of the travel path, the multipath effect must be taken seriously: multiple-bounce returns are added to direct-scatter echoes, which leads to ghost scatterers. Most existing solutions to the multipath problem attempt to recover the signal propagation path. To facilitate the signal propagation simulation, many aspects must be given in advance, such as sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), which determine the strength of the radar signal backscattered to the SAR sensor. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming. In this paper, SAR imaging simulation based on RaySAR is conducted first, aiming at a basic understanding of multipath effects and at further comparison. Besides the pre-imaging simulation, the post-imaging product, namely the radar images, is also taken into consideration. Both COSMO-SkyMed ascending and descending SAR images of the Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity map and signal distribution map of different bounce levels are simulated and validated against a 3D real model. Statistical indexes such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analysed in combination with the RaySAR output.

  8. Image Segmentation Analysis for NASA Earth Science Applications

    Science.gov (United States)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
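The region-growing half of this approach can be illustrated with a toy merge loop: start with one region per pixel and repeatedly merge the most similar 4-adjacent pair of regions. This is only a sketch of the region-growing idea; RHSEG's recursion, region classes and dissimilarity criteria are not reproduced, and the image values are made up:

```python
import numpy as np

def region_grow(img, n_regions):
    """Toy hierarchical region growing: start with one region per pixel
    and repeatedly merge the most similar 4-adjacent pair of regions
    (mean-intensity dissimilarity)."""
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)
    means = {int(l): float(v) for l, v in zip(labels.ravel(), img.ravel())}
    sizes = {int(l): 1 for l in labels.ravel()}
    while len(means) > n_regions:
        best = None
        for dy, dx in ((0, 1), (1, 0)):          # right and down neighbours
            for la, lb in zip(labels[:h - dy, :w - dx].ravel(),
                              labels[dy:, dx:].ravel()):
                if la != lb:
                    d = abs(means[la] - means[lb])
                    if best is None or d < best[0]:
                        best = (d, la, lb)
        _, la, lb = best
        labels[labels == lb] = la                # merge lb into la
        total = means[la] * sizes[la] + means[lb] * sizes[lb]
        sizes[la] += sizes[lb]
        means[la] = total / sizes[la]
        del means[lb], sizes[lb]
    return labels

img = np.array([[0.0, 0.1, 0.9],
                [0.1, 0.0, 1.0],
                [0.2, 0.8, 0.9]])
seg = region_grow(img, 2)   # two regions: the dark and bright pixel groups
```

RHSEG's divide-and-conquer recursion exists precisely because this naive pairwise search scales badly with image size.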

  9. Image analysis of self-organized multicellular patterns

    Directory of Open Access Journals (Sweden)

    Thies Christian

    2016-09-01

    Full Text Available Analysis of multicellular patterns is required to understand tissue organizational processes. By using a multi-scale object oriented image processing method, the spatial information of cells can be extracted automatically. Instead of manual segmentation or indirect measurements, such as general distribution of contrast or flow, the orientation and distribution of individual cells is extracted for quantitative analysis. Relevant objects are identified by feature queries and no low-level knowledge of image processing is required.

  10. Imaging Heat and Mass Transfer Processes Visualization and Analysis

    CERN Document Server

    Panigrahi, Pradipta Kumar

    2013-01-01

    Imaging Heat and Mass Transfer Processes: Visualization and Analysis applies Schlieren and shadowgraph techniques to complex heat and mass transfer processes. Several applications are considered where thermal and concentration fields play a central role. These include vortex shedding and suppression from stationary and oscillating bluff bodies such as cylinders, convection around crystals growing from solution, and buoyant jets. Many of these processes are unsteady and three dimensional. The interpretation and analysis of images recorded are discussed in the text.

  11. Independent component analysis applications on THz sensing and imaging

    Science.gov (United States)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve a blind source separation. A reference water vapor absorption spectrum was extracted via ICA, then ICA was utilized on a THz spectroscopic image in order to clean the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.
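A minimal sketch of blind source separation in the spirit of this experiment, using scikit-learn's FastICA: two synthetic "spectra" (a broad sample response and a sharp oscillatory stand-in for water-vapour absorption lines) are mixed at two observation channels and then unmixed. The signals and mixing matrix are invented for illustration, not the paper's data:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
sample = np.exp(-0.5 * (t - 4.0) ** 2)   # broad "sample" response
water = np.sin(7.0 * t) ** 2             # sharp periodic "water" lines
S = np.c_[sample, water]                 # true sources, one per column

A = np.array([[1.0, 0.6],                # mixing: each observation is a
              [0.4, 1.0]])               # different blend of the sources
X = S @ A.T                              # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)         # estimated sources, up to
                                         # permutation, sign and scale
```

In the imaging application, each pixel's spectrum plays the role of an observation channel, and the component matching the reference water spectrum is subtracted out.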

  12. Image registration based on matrix perturbation analysis using spectral graph

    Institute of Scientific and Technical Information of China (English)

    Chengcai Leng; Zheng Tian; Jing Li; Mingtao Ding

    2009-01-01

    We present a novel perspective on characterizing the spectral correspondence between nodes of a weighted graph, with application to image registration. It is based on matrix perturbation analysis on the spectral graph. The contribution may be divided into three parts. Firstly, the perturbation matrix is obtained by perturbing the matrix of the graph model. Secondly, an orthogonal matrix is obtained based on an optimal parameter, which can better capture correspondence features. Thirdly, the optimal matching matrix is obtained by adjusting the signs of the orthogonal matrix for image registration. Experiments on both synthetic images and real-world images demonstrate the effectiveness and accuracy of the proposed method.
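The flavour of spectral correspondence can be sketched with a Shapiro–Brady-style toy (not the authors' perturbation construction): build a Gaussian proximity matrix for each point set, take its eigenvectors, resolve each eigenvector's sign ambiguity (the sign-adjustment step mentioned above), and match points whose eigenvector "feature rows" are closest. The point set and sigma are illustrative:

```python
import numpy as np

def spectral_match(P, Q, sigma=1.0):
    """Match each point of P to a point of Q (equal counts) by comparing
    rows of the sign-corrected eigenvector matrices of their Gaussian
    proximity matrices."""
    def feature_rows(X):
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-D2 / (2.0 * sigma ** 2))
        vals, vecs = np.linalg.eigh(W)
        vecs = vecs[:, np.argsort(-vals)]          # strongest modes first
        # Fix each eigenvector's sign so its largest-magnitude entry
        # is positive -- removes the arbitrary sign per eigenvector.
        cols = np.arange(vecs.shape[1])
        signs = np.sign(vecs[np.abs(vecs).argmax(axis=0), cols])
        return vecs * signs
    FP, FQ = feature_rows(P), feature_rows(Q)
    cost = np.linalg.norm(FP[:, None, :] - FQ[None, :, :], axis=-1)
    return cost.argmin(axis=1)                     # index into Q per P row

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
perm = np.array([2, 0, 3, 1])
match = spectral_match(pts, pts[perm])  # should recover the permutation
```

The paper's contribution is a more robust, perturbation-based way to obtain the orthogonal alignment that this naive sign heuristic only approximates.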

  13. A collaborative biomedical image mining framework: application on the image analysis of microscopic kidney biopsies.

    Science.gov (United States)

    Goudas, T; Doukas, C; Chatziioannou, A; Maglogiannis, I

    2013-01-01

    The analysis and characterization of biomedical image data is a complex procedure involving several processing phases, such as data acquisition, preprocessing, segmentation, feature extraction and classification. The proper combination and parameterization of the utilized methods rely heavily on the given image data set and experiment type. They may thus necessitate advanced image processing and classification knowledge and skills on the part of the biomedical expert. In this work, an application exploiting web services and applying ontological modeling is presented to enable the intelligent creation of image mining workflows. The described tool can be directly integrated into RapidMiner, Taverna or similar workflow management platforms. A case study dealing with the creation of a sample workflow for the analysis of kidney biopsy microscopy images is presented to demonstrate the functionality of the proposed framework.

  14. Chemical imaging and solid state analysis at compact surfaces using UV imaging

    DEFF Research Database (Denmark)

    Wu, Jian X.; Rehder, Sönke; van den Berg, Frans

    2014-01-01

    Fast non-destructive multi-wavelength UV imaging together with multivariate image analysis was utilized to visualize distribution of chemical components and their solid state form at compact surfaces. Amorphous and crystalline solid forms of the antidiabetic compound glibenclamide...... and excipients in a non-invasive way, as well as mapping the glibenclamide solid state form. An exploratory data analysis supported the critical evaluation of the mapping results and the selection of model parameters for the chemical mapping. The present study demonstrated that the multi-wavelength UV imaging...

  15. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    Science.gov (United States)

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

    The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we present investigations into feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offers high classification accuracy. We demonstrate the efficacy of our methods to discriminate between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.
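A stripped-down version of the moments-plus-classifier pipeline, with an SVM trained on scale-normalized central moments of synthetic binary shapes standing in for the CSLT-derived features. The shapes, moment orders and parameters are all invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

def moment_features(mask, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    """Scale-normalized central moments of a binary shape image."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    cx, cy = xs.mean(), ys.mean()
    return [((xs - cx) ** p * (ys - cy) ** q).sum() / m00 ** (1 + (p + q) / 2)
            for p, q in orders]

def ellipse_mask(a, b, size=48):
    yy, xx = np.mgrid[:size, :size] - size // 2
    return (xx / a) ** 2 + (yy / b) ** 2 <= 1.0

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(40):
    r = rng.uniform(8.0, 14.0)
    X.append(moment_features(ellipse_mask(r, r)))        # class 0: circles
    y.append(0)
    X.append(moment_features(ellipse_mask(r, 0.5 * r)))  # class 1: ellipses
    y.append(1)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
acc = clf.score(X, y)  # training accuracy on this separable toy set
```

The paper's feature-subset selection step would then prune this moment vector down to the most discriminative entries.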

  16. Digital pathology and image analysis in tissue biomarker research.

    Science.gov (United States)

    Hamilton, Peter W; Bankhead, Peter; Wang, Yinhai; Hutchinson, Ryan; Kieran, Declan; McArt, Darragh G; James, Jacqueline; Salto-Tellez, Manuel

    2014-11-01

    Digital pathology and the adoption of image analysis have grown rapidly in the last few years. This is largely due to the implementation of whole slide scanning, advances in software and computer processing capacity and the increasing importance of tissue-based research for biomarker discovery and stratified medicine. This review sets out the key application areas for digital pathology and image analysis, with a particular focus on research and biomarker discovery. A variety of image analysis applications are reviewed including nuclear morphometry and tissue architecture analysis, but with emphasis on immunohistochemistry and fluorescence analysis of tissue biomarkers. Digital pathology and image analysis have important roles across the drug/companion diagnostic development pipeline including biobanking, molecular pathology, tissue microarray analysis, molecular profiling of tissue and these important developments are reviewed. Underpinning all of these important developments is the need for high quality tissue samples and the impact of pre-analytical variables on tissue research is discussed. This requirement is combined with practical advice on setting up and running a digital pathology laboratory. Finally, we discuss the need to integrate digital image analysis data with epidemiological, clinical and genomic data in order to fully understand the relationship between genotype and phenotype and to drive discovery and the delivery of personalized medicine.

  17. User image mismatch in anaesthesia alarms: a cognitive systems analysis.

    Science.gov (United States)

    Raymer, Karen E; Bergström, Johan

    2013-01-01

    In this study, principles of Cognitive Systems Engineering are used to better understand the human-machine interaction manifesting in the use of anaesthesia alarms. The hypothesis is that the design of the machine incorporates built-in assumptions of the user that are discrepant with the anaesthesiologist's self-assessment, creating 'user image mismatch'. Mismatch was interpreted by focusing on the 'user image' as described from the perspectives of both machine and user. The machine-embedded image was interpreted through document analysis. The user-described image was interpreted through user (anaesthesiologist) interviews. Finally, an analysis was conducted in which the machine-embedded and user-described images were contrasted to identify user image mismatch. It is concluded that analysing user image mismatch expands the focus of attention towards macro-elements in the interaction between man and machine. User image mismatch is interpreted to arise from complexity of algorithm design and incongruity between alarm design and tenets of anaesthesia practice. Cognitive system engineering principles are applied to enhance the understanding of the interaction between anaesthesiologist and alarm. The 'user image' is interpreted and contrasted from the perspectives of machine as well as the user. Apparent machine-user mismatch is explored pertaining to specific design features.

  18. Cardiac nonrigid motion analysis from image sequences

    Institute of Scientific and Technical Information of China (English)

    LIU Huafeng

    2006-01-01

    Noninvasive estimation of the soft tissue kinematics properties from medical image sequences has many important clinical and physiological implications, such as the diagnosis of heart diseases and the understanding of cardiac mechanics. In this paper, we present a biomechanics based strategy, framed as a priori constraints for the ill-posed motion recovery problem, to realize estimation of the cardiac motion and deformation parameters. By constructing the heart dynamics system equations from biomechanics principles, we use the finite element method to generate smooth estimates of heart kinematics throughout the cardiac cycle. We present the application of the strategy to the estimation of displacements and strains from an in vivo left ventricular magnetic resonance image sequence.

  19. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  20. Quantitative and qualitative analysis and interpretation of CT perfusion imaging.

    Science.gov (United States)

    Valdiviezo, Carolina; Ambrose, Marietta; Mehra, Vishal; Lardo, Albert C; Lima, Joao A C; George, Richard T

    2010-12-01

    Coronary artery disease (CAD) remains the leading cause of death in the United States. Rest and stress myocardial perfusion imaging has an important role in the non-invasive risk stratification of patients with CAD. However, diagnostic accuracies have been limited, which has led to the development of several myocardial perfusion imaging techniques. Among them, myocardial computed tomography perfusion imaging (CTP) is especially interesting as it has the unique capability of providing anatomic- as well as coronary stenosis-related functional data when combined with computed tomography angiography (CTA). The primary aim of this article is to review the qualitative, semi-quantitative, and quantitative analysis approaches to CTP imaging. In doing so, we will describe the image data required for each analysis and discuss the advantages and disadvantages of each approach.

  1. Analysis and Management System of Digital Ultrasonic Image

    Institute of Scientific and Technical Information of China (English)

    TAO Qiang; ZHANG Hai-yan; LI Xia; WANG Ke

    2008-01-01

    This paper presents an analysis and management system for digital ultrasonic images. The system manages medical ultrasonic images by collecting, saving and transferring them, enabling networked management across the ultrasound departments of a hospital. It uses network technology to transfer images between ultrasound machines so that patient data can be shared among them, and doctors can enter patient diagnostic reports, which are saved as text files together with the case history and managed digitally. The system is implemented in Visual C++ as a Windows application. It is proposed because, although PACS is prevalent in many hospitals, PACS is expensive; in view of this status, we put forward this analysis and management system for digital ultrasonic images, which is similar to PACS.

  2. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. However, insight into the 3D geometry of the microstructure of materials and measurement of its characteristics are increasingly prerequisites for choosing and designing advanced materials according to desired product properties. This first book on processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis

  3. Visualization and Analysis of 3D Microscopic Images

    Science.gov (United States)

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  4. Micro imaging analysis for osteoporosis assessment

    Energy Technology Data Exchange (ETDEWEB)

    Lima, I., E-mail: inaya@lin.ufrj.b [Nuclear Instrumentation Laboratory, COPPE, UFRJ (Brazil); Polytechnic Institute of Rio de Janeiro State/UERJ/Brazil (Brazil); Farias, M.L.F. [University Hospital, UFRJ, RJ (Brazil); Percegoni, N. [Biophysics Institute, CCS, UFRJ, RJ (Brazil); Rosenthal, D. [Physics Institute, UERJ, RJ (Brazil); Assis, J.T. de [Polytechnic Institute of Rio de Janeiro State/UERJ/Brazil (Brazil); Anjos, M.J. [Physics Institute, UERJ, RJ (Brazil); Lopes, R.T. [Nuclear Instrumentation Laboratory, COPPE, UFRJ (Brazil)

    2010-03-15

    Characterization of trabecular structures is one of the most important applications of imaging techniques in the biomedical area. The aim of this study was to investigate structural modifications in trabecular and cortical bone using non-destructive techniques such as X-ray microtomography, X-ray microfluorescence by synchrotron radiation and scanning electron microscopy. The results reveal the potential of these computational techniques for characterizing internal bone structures.

  5. Forensic Analysis of Digital Image Tampering

    Science.gov (United States)

    2004-12-01

    Figure 2.2 – Example of invisible watermark using Steganography Software F5. Figure 2.3 – Example of copy-move image forgery [12]... examples of this evolution. Audio has progressed from analog audio tapes and records to Compact Discs and MP3s. Video displays have advanced from the... on it for security or anti-tamper reasons. Figure 2.2 shows an example of this.

  6. Imaging System and Method for Biomedical Analysis

    Science.gov (United States)

    2013-03-11

    ...fluorescent nanoparticles. Generally, Noiseux et al. teach injecting multiple fluorescent nanoparticle dyes into the food sample, imaging the sample a... for example, AIDS, malaria, cholera, lymphoma, and typhoid. The present disclosure can be used to capture and count microscopic cells for application as... Base plate 214 is sealed against cover 204 by the adhesive 210. Base plate 214 can have a thickness 216 of, for example, about 100 µm. At least a...

  7. MULTISPECTRAL IMAGE ANALYSIS USING RANDOM FOREST

    OpenAIRE

    Barrett Lowe; Arun Kulkarni

    2015-01-01

    Classical methods for the classification of pixels in multispectral images include supervised classifiers such as the maximum-likelihood classifier, neural network classifiers, fuzzy neural networks, support vector machines, and decision trees. Recently, there has been an increase of interest in ensemble learning – a method that generates many classifiers and aggregates their results. Breiman proposed Random Forest in 2001 for classification and clustering. Random Forest grows many decision tre...
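A minimal sketch of Random Forest pixel classification on made-up four-band "multispectral" data; the class means, noise level and band count are invented for illustration, not the paper's data set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical mean reflectances in four bands (B, G, R, NIR) for three
# cover types; purely illustrative numbers.
means = np.array([[0.05, 0.04, 0.03, 0.02],   # water: dark, decreasing
                  [0.04, 0.08, 0.05, 0.40],   # vegetation: strong NIR
                  [0.20, 0.25, 0.30, 0.35]])  # soil: steadily rising

# 200 noisy pixel spectra per class
X = np.vstack([m + 0.02 * rng.standard_normal((200, 4)) for m in means])
y = np.repeat([0, 1, 2], 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

On real imagery, each pixel's band vector would be classified the same way, and held-out validation pixels (rather than training accuracy) would be used to assess the map.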

  8. Measurement and analysis of image sensors

    Science.gov (United States)

    Vitek, Stanislav

    2005-06-01

    For astronomical applications it is necessary to have high precision in sensing and processing image data. At present, large CCD sensors are used for various reasons. To replace CCD sensors with CMOS sensing devices, it is important to know the transfer characteristics of the CCD sensors in use. In special applications such as robotic telescopes (fully automatic, without human interaction), it seems advantageous to use specially designed smart sensors, which integrate more functions and offer more features than CCDs.

  9. Real-time video-image analysis

    Science.gov (United States)

    Eskenazi, R.; Rayfield, M. J.; Yakimovsky, Y.

    1979-01-01

    Digitizer and storage system allow rapid random access to video data by computer. RAPID (random-access picture digitizer) uses two commercially-available, charge-injection, solid-state TV cameras as sensors. It can continuously update its memory with each frame of video signal, or it can hold given frame in memory. In either mode, it generates composite video output signal representing digitized image in memory.

  10. An investigation of image compression on NIIRS rating degradation through automated image analysis

    Science.gov (United States)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract the line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. Steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
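One of the quantities behind this procedure, the relative edge response (RER), can be sketched from a one-dimensional edge profile. The Gaussian-blurred synthetic edge below stands in for the extracted line edge profile; the construction and normalization details are simplifications, not the paper's exact procedure:

```python
import numpy as np
from math import erf, sqrt

def relative_edge_response(profile):
    """RER from a 1-D edge profile: the rise of the normalized edge
    response between -0.5 and +0.5 pixels around the edge midpoint."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    mid = int(np.argmin(np.abs(p - 0.5)))        # edge location
    x = np.arange(len(p)) - mid
    return np.interp(0.5, x, p) - np.interp(-0.5, x, p)

def blurred_edge(sigma, n=100):
    """Ideal step edge blurred by a Gaussian PSF (its ESF is an erf)."""
    return np.array([0.5 * (1.0 + erf((i - n / 2) / (sigma * sqrt(2.0))))
                     for i in range(n)])

rer_sharp = relative_edge_response(blurred_edge(1.0))
rer_soft = relative_edge_response(blurred_edge(3.0))
# A wider blur (worse optics, or heavier compression) lowers the RER,
# which in turn lowers the GIQE-predicted NIIRS rating.
```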

  11. Method for measuring anterior chamber volume by image analysis

    Science.gov (United States)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist to make a rational pathological diagnosis for patients with eye diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been transformed from medical images using the anterior-chamber optical coherence tomographer (AC-OCT) and corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. This shows that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.
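The volume-from-slices integration step can be sketched as below (in Python rather than the authors' VC++): sum segmented cross-section areas times slice spacing. The spherical test object, pixel size and slice count are assumptions used only to sanity-check the integration; the chamber segmentation itself is assumed done upstream:

```python
import numpy as np

def volume_from_slices(masks, pixel_area_mm2, slice_spacing_mm):
    """Volume estimate from a stack of segmented cross-section masks:
    (pixel count * pixel area) summed over slices, times spacing."""
    areas = np.array([m.sum() * pixel_area_mm2 for m in masks])
    return float(areas.sum() * slice_spacing_mm)

# Sanity check against a sphere of radius 5 mm, whose true volume is
# (4/3) * pi * 5**3, roughly 523.6 mm^3.
r, n_slices, px = 5.0, 200, 0.05          # radius, slice count, mm/pixel
dz = 2.0 * r / n_slices
zs = np.linspace(-r + dz / 2, r - dz / 2, n_slices)
yy, xx = (np.mgrid[:240, :240] - 120) * px
masks = [xx ** 2 + yy ** 2 <= max(r ** 2 - z ** 2, 0.0) for z in zs]
vol = volume_from_slices(masks, px ** 2, dz)
```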

  12. Improving lip wrinkles: lipstick-related image analysis.

    Science.gov (United States)

    Ryu, Jong-Seong; Park, Sun-Gyoo; Kwak, Taek-Jong; Chang, Min-Youl; Park, Moon-Eok; Choi, Khee-Hwan; Sung, Kyung-Hye; Shin, Hyun-Jong; Lee, Cheon-Koo; Kang, Yun-Seok; Yoon, Moung-Seok; Rang, Moon-Jeong; Kim, Seong-Jin

    2005-08-01

    The appearance of lip wrinkles is problematic if it is adversely influenced by lipstick make-up, causing incomplete color tone, spread phenomenon and pigment remnants. It is therefore necessary to develop an objective assessment method for lip wrinkle status, by which the potential of wrinkle-improving products for the lips can be screened. The present study aimed to find useful parameters in the image analysis of lip wrinkles as affected by lipstick application. Digital photographs of the lips before and after lipstick application were assessed in 20 female volunteers. Color tone was measured by hue, saturation and intensity parameters, and time-related pigment spread was calculated as the area beyond the vermilion border using image-analysis software (Image-Pro). The efficacy of a wrinkle-improving lipstick containing asiaticoside was evaluated in 50 women using subjective and objective methods, including image analysis, in a double-blind placebo-controlled fashion. The color tone and spread phenomenon after lipstick make-up were markedly affected by lip wrinkles. The standard deviation of the saturation value in the image-analysis software proved to be a good parameter for lip wrinkles. After use of the lipstick containing asiaticoside for 8 weeks, the change in visual grading scores and replica analysis indicated a wrinkle-improving effect. As the depth and number of wrinkles were reduced, the lipstick make-up appearance assessed by image analysis also improved significantly. The lip wrinkle pattern together with lipstick make-up can be evaluated by the image-analysis system in addition to traditional assessment methods. Thus, this evaluation system is expected to be useful for testing the efficacy of wrinkle-reducing lipsticks, which has not been described in previous dermatologic clinical studies.

  13. An approach for quantitative image quality analysis for CT

    Science.gov (United States)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with the feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies for transportation security. To that end, we have designed, developed and constructed phantoms that allow systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that, unlike standard principal component analysis (PCA), produces components with sparse loadings; in conjunction with the Hotelling T² statistic, it is used to compare, qualify, and detect faults in the tested systems.
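The PCA/Hotelling-T² fault-detection step can be illustrated with plain PCA scores. The data, component count, and empirical control limit below are invented for the sketch, and the sparse-loading (SPCA) modification is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows = phantom scans, columns = image-quality metrics (values synthetic)
baseline = rng.normal(0.0, 1.0, size=(200, 10))

def hotelling_t2(reference, samples, n_components=3):
    """Hotelling T^2 of each sample in the PCA subspace of the reference set."""
    mu = reference.mean(axis=0)
    xc = reference - mu
    _, s, vt = np.linalg.svd(xc, full_matrices=False)    # PCA via SVD
    comps = vt[:n_components]                            # leading loadings
    var = s[:n_components] ** 2 / (len(reference) - 1)   # score variances
    scores = (samples - mu) @ comps.T
    return np.sum(scores**2 / var, axis=1)

t2 = hotelling_t2(baseline, baseline)
limit = np.percentile(t2, 99)     # empirical control limit
flags = t2 > limit                # scans flagged as out-of-family
```

Scans whose T² exceeds the control limit are candidates for closer inspection; in the paper's setting the metrics would come from the phantom measurements rather than random numbers.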

  14. Calculation Method to Determine the Group Composition of Vacuum Distillate with High Content of Saturated Hydrocarbons

    Directory of Open Access Journals (Sweden)

    Nazarova Galina

    2016-01-01

Full Text Available A calculation method for determining the group composition of the heavy fraction of vacuum distillate with a high content of saturated hydrocarbons, obtained by vacuum distillation of the residue of West Siberian oil with subsequent hydrotreating, is presented in this research. The method is based on calculating the physico-chemical characteristics and the group composition of the vacuum distillate from its fractional composition and density, taking into account the high content of saturated hydrocarbons in the fraction. The calculation method determines the content of paraffinic, naphthenic and aromatic hydrocarbons and of resins in the vacuum distillate with high accuracy, and can be used in refineries for rapid determination of the group composition of vacuum distillates.

  15. Electric-arc synthesis of soot with high content of higher fullerenes in parallel arc

    Science.gov (United States)

    Dutlov, A. E.; Nekrasov, V. M.; Sergeev, A. G.; Bubnov, V. P.; Kareev, I. E.

    2016-12-01

Soot with a relatively high content of higher fullerenes (C76, C78, C80, C82, C84, C86, etc.) is synthesized in a parallel arc upon evaporation of pure carbon electrodes. The content of higher fullerenes in the soot extract amounts to 13.8 wt % when two electrodes are simultaneously burnt in the electric-arc reactor. Such a content is comparable with that obtained upon evaporation of composite graphite electrodes containing a potassium carbonate admixture.

  16. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.
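A minimal version of such automated particle measurement, thresholding the height map, labelling connected regions, and extracting per-particle statistics, might look like this. The height map and particle sizes are synthetic stand-ins, not rotavirus data.

```python
import numpy as np
from scipy import ndimage

# Toy AFM height map: two raised "particles" on a flat substrate (units arbitrary)
yy, xx = np.mgrid[0:64, 0:64]
height = np.zeros((64, 64))
height[np.hypot(yy - 20, xx - 20) < 6] = 40.0   # larger particle
height[np.hypot(yy - 45, xx - 40) < 4] = 35.0   # smaller particle

# Threshold, label connected regions, then measure each particle
mask = height > 10.0
labels, n_particles = ndimage.label(mask)
idx = range(1, n_particles + 1)
areas = ndimage.sum(mask, labels, index=idx)            # footprint in pixels
peak_heights = ndimage.maximum(height, labels, index=idx)
```

From the per-particle areas and peak heights, average dimensions and their distributions follow directly, which is the kind of statistical summary the abstract describes.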

  17. Towards a quantitative OCT image analysis.

    Directory of Open Access Journals (Sweden)

    Marina Garcia Garrido

Full Text Available Optical coherence tomography (OCT) is an invaluable diagnostic tool for the detection and follow-up of retinal pathology in patients and experimental disease models. However, as morphological structures and layering in health as well as their alterations in disease are complex, segmentation procedures have not yet reached a satisfactory level of performance. Therefore, raw images and qualitative data are commonly used in clinical and scientific reports. Here, we assess the value of OCT reflectivity profiles as a basis for a quantitative characterization of the retinal status in a cross-species comparative study. Spectral-Domain Optical Coherence Tomography (OCT), confocal Scanning-Laser Ophthalmoscopy (SLO), and Fluorescein Angiography (FA) were performed in mice (Mus musculus), gerbils (Gerbillus perpallidus), and cynomolgus monkeys (Macaca fascicularis) using the Heidelberg Engineering Spectralis system, and additional SLOs and FAs were obtained with the HRA I (same manufacturer). Reflectivity profiles were extracted from 8-bit greyscale OCT images using the ImageJ software package (http://rsb.info.nih.gov/ij/). Reflectivity profiles obtained from OCT scans of all three animal species correlated well with ex vivo histomorphometric data. Each of the retinal layers showed a typical pattern that varied in relative size and degree of reflectivity across species. In general, plexiform layers showed a higher level of reflectivity than nuclear layers. A comparison of reflectivity profiles from specialized retinal regions (e.g. visual streak in gerbils, fovea in non-human primates) with respective regions of human retina revealed multiple similarities. In a model of Retinitis Pigmentosa (RP), the value of reflectivity profiles for the follow-up of therapeutic interventions was demonstrated. OCT reflectivity profiles provide a detailed, quantitative description of retinal layers and structures including specialized retinal regions. Our results highlight the
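The reflectivity-profile extraction itself is simple to reproduce: average the grey values laterally across A-scans at each depth. The synthetic B-scan below stands in for an exported 8-bit OCT image; the band positions and intensities are invented.

```python
import numpy as np

# Synthetic 8-bit B-scan: rows = depth, columns = A-scans; two bright bands
# mimic highly reflective (plexiform-like) layers on a darker background.
depth, width = 128, 256
bscan = np.full((depth, width), 30, dtype=np.uint8)
bscan[40:48, :] = 200
bscan[90:96, :] = 160

def reflectivity_profile(image):
    """Mean grey value at each depth, averaged laterally across A-scans."""
    return image.astype(float).mean(axis=1)

profile = reflectivity_profile(bscan)
# Depths standing out as reflective layers (simple mean + 1 SD threshold)
bright = set(np.where(profile > profile.mean() + profile.std())[0].tolist())
```

Peak positions and relative amplitudes in such a profile are what the authors compare against histomorphometric layer measurements.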

  18. Abnormally High Content of Free Glucosamine Residues Identified in a Preparation of Commercially Available Porcine Intestinal Heparan Sulfate.

    Science.gov (United States)

    Mulloy, Barbara; Wu, Nian; Gyapon-Quast, Frederick; Lin, Lei; Zhang, Fuming; Pickering, Matthew C; Linhardt, Robert J; Feizi, Ten; Chai, Wengang

    2016-07-05

Heparan sulfate (HS) polysaccharides are ubiquitous in animal tissues as components of proteoglycans, and they participate in many important biological processes. HS carbohydrate chains are complex and can contain rare structural components such as N-unsubstituted glucosamine (GlcN). Commercially available HS preparations have been invaluable in many types of research activities. In the course of preparing microarrays to include probes derived from HS oligosaccharides, we found an unusually high content of GlcN residues in a recently purchased batch of porcine intestinal mucosal HS. Composition and sequence analysis by mass spectrometry of the oligosaccharides obtained after heparin lyase III digestion of the polysaccharide indicated two and three GlcN in the tetrasaccharide and hexasaccharide fractions, respectively. ¹H NMR of the intact polysaccharide showed that this unusual batch differed strikingly from other HS preparations obtained from bovine kidney and porcine intestine. The very high content of GlcN (30%) and low content of GlcNAc (4.2%) determined by disaccharide composition analysis indicated that N-deacetylation and/or N-desulfation may have taken place. HS is widely used by the scientific community to investigate HS structures and activities. Great care has to be taken in drawing conclusions from investigations of structural features of HS and specificities of HS interaction with proteins when commercial HS is used without further analysis. Pending the availability of a validated commercial HS reference preparation, our data may be useful to members of the scientific community who have used the present preparation in their studies.

  19. Abnormally High Content of Free Glucosamine Residues Identified in a Preparation of Commercially Available Porcine Intestinal Heparan Sulfate

    Science.gov (United States)

    2016-01-01

Heparan sulfate (HS) polysaccharides are ubiquitous in animal tissues as components of proteoglycans, and they participate in many important biological processes. HS carbohydrate chains are complex and can contain rare structural components such as N-unsubstituted glucosamine (GlcN). Commercially available HS preparations have been invaluable in many types of research activities. In the course of preparing microarrays to include probes derived from HS oligosaccharides, we found an unusually high content of GlcN residues in a recently purchased batch of porcine intestinal mucosal HS. Composition and sequence analysis by mass spectrometry of the oligosaccharides obtained after heparin lyase III digestion of the polysaccharide indicated two and three GlcN in the tetrasaccharide and hexasaccharide fractions, respectively. 1H NMR of the intact polysaccharide showed that this unusual batch differed strikingly from other HS preparations obtained from bovine kidney and porcine intestine. The very high content of GlcN (30%) and low content of GlcNAc (4.2%) determined by disaccharide composition analysis indicated that N-deacetylation and/or N-desulfation may have taken place. HS is widely used by the scientific community to investigate HS structures and activities. Great care has to be taken in drawing conclusions from investigations of structural features of HS and specificities of HS interaction with proteins when commercial HS is used without further analysis. Pending the availability of a validated commercial HS reference preparation, our data may be useful to members of the scientific community who have used the present preparation in their studies. PMID:27295282

  20. Assessment of cluster yield components by image analysis.

    Science.gov (United States)

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

Berry weight, berry number and cluster weight are key parameters of yield estimation for the wine and table-grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms, based on the Canny and the logarithmic image processing approaches, were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R² between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-analysis-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
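The berry-detection core, an edge map fed to a circular Hough transform whose accumulator peaks at berry centres, can be sketched for a single known radius. A real pipeline would take its edge map from a Canny detector and sweep a range of radii; here the edge map and circle are synthetic.

```python
import numpy as np

# Toy edge map containing one circle (radius 10, centre (32, 48));
# in the paper the edge map would come from Canny/LIP preprocessing.
h, w, r = 64, 96, 10
edges = np.zeros((h, w), dtype=bool)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edges[np.round(32 + r * np.sin(theta)).astype(int),
      np.round(48 + r * np.cos(theta)).astype(int)] = True

def hough_circles(edge_map, radius):
    """Vote for circle centres at a single radius; returns the accumulator."""
    acc = np.zeros(edge_map.shape)
    ey, ex = np.nonzero(edge_map)
    for t in np.linspace(0, 2 * np.pi, 90, endpoint=False):
        cy = np.round(ey - radius * np.sin(t)).astype(int)
        cx = np.round(ex - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

acc = hough_circles(edges, r)
centre = np.unravel_index(acc.argmax(), acc.shape)   # detected berry centre
```

Counting accumulator peaks above a vote threshold yields the berry count; summing estimated berry areas feeds the cluster-weight regression.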

  1. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, noise level is reduced and no artifacts or non-existing details are added. These properties are essential in retinal diagnosis establishment, so the proposed algorithm is recommended for use in real medical applications.
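The ZCA step, a whitening that decorrelates patch dimensions while staying close to the original pixel space, can be sketched as below. The patch data is random and the `eps` regulariser is an assumed detail, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Flattened 4x4 "patches" with artificial correlations between dimensions
patches = rng.normal(size=(1000, 16)) @ rng.normal(size=(16, 16))

def zca_whiten(x, eps=1e-8):
    """ZCA whitening: decorrelate dimensions, staying close to pixel space."""
    xc = x - x.mean(axis=0)
    cov = xc.T @ xc / len(xc)
    u, s, _ = np.linalg.svd(cov)
    w = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T   # symmetric whitening matrix
    return xc @ w

white = zca_whiten(patches)
cov_white = white.T @ white / len(white)   # should be close to the identity
```

Unlike plain PCA whitening, the symmetric `U diag(s^-1/2) U^T` form keeps whitened patches visually similar to the originals, which is why it is popular for preprocessing image-patch training data.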

  2. Enhanced bone structural analysis through pQCT image preprocessing.

    Science.gov (United States)

    Cervinka, T; Hyttinen, J; Sievanen, H

    2010-05-01

Several factors, including preprocessing of the image, can affect the reliability of pQCT-measured bone traits, such as cortical area and trabecular density. Using repeated scans of four different liquid phantoms and repeated in vivo scans of distal tibiae from 25 subjects, the performance of two novel preprocessing methods, based on the down-sampling of the grayscale intensity histogram and the statistical approximation of image data, was compared to 3×3 and 5×5 median filtering. According to phantom measurements, the signal-to-noise ratio in the raw pQCT images (XCT 3000) was low (approximately 20 dB), which posed a challenge for preprocessing. Concerning the cortical analysis, the reliability coefficient (R) was 67% for the raw image and increased to 94-97% after preprocessing, without apparent preference for any method. Concerning the trabecular density, the R-values were already high (approximately 99%) in the raw images, leaving virtually no room for improvement. However, some coarse structural patterns could be seen in the preprocessed images, in contrast to a disperse distribution of density levels in the raw image. In conclusion, preprocessing cannot suppress the high noise level to the extent that the analysis of mean trabecular density is essentially improved, whereas preprocessing can enhance cortical bone analysis and also facilitate coarse structural analyses of the trabecular region.
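The baseline preprocessing compared in the study, plain 3×3 and 5×5 median filtering, can be reproduced in a few lines. The phantom-like slice and noise level below are synthetic, not XCT 3000 data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
# Synthetic stand-in for a noisy pQCT slice: a dense "cortical ring" on a
# low-density background, plus heavy additive noise (values hypothetical)
yy, xx = np.mgrid[0:64, 0:64]
radius = np.hypot(yy - 32, xx - 32)
slice_raw = np.where((radius > 20) & (radius < 26), 1200.0, 100.0)
noisy = slice_raw + rng.normal(0, 150, slice_raw.shape)

# 3x3 and 5x5 median filtering, the reference preprocessing in the study
med3 = ndimage.median_filter(noisy, size=3)
med5 = ndimage.median_filter(noisy, size=5)

def rmse(img):
    """Root-mean-square error against the noiseless slice."""
    return float(np.sqrt(np.mean((img - slice_raw) ** 2)))
```

Both filters should lower the RMSE against the noiseless slice while largely preserving the step edges of the cortical ring, which is the property that makes median filtering a reasonable baseline here.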

  3. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  4. Spectral mixture analysis of EELS spectrum-images

    Energy Technology Data Exchange (ETDEWEB)

    Dobigeon, Nicolas [University of Toulouse, IRIT/INP-ENSEEIHT, 2 rue Camichel, 31071 Toulouse Cedex 7 (France); Brun, Nathalie, E-mail: nathalie.brun@u-psud.fr [University of Paris Sud, Laboratoire de Physique des Solides, CNRS, UMR 8502, 91405 Orsay Cedex (France)

    2012-09-15

Recent advances in detectors and computer science have enabled the acquisition and the processing of multidimensional datasets, in particular in the field of spectral imaging. Benefiting from these new developments, Earth scientists try to recover the reflectance spectra of macroscopic materials (e.g., water, grass, mineral types…) present in an observed scene and to estimate their respective proportions in each mixed pixel of the acquired image. This task is usually referred to as spectral mixture analysis or spectral unmixing (SU). SU aims at decomposing the measured pixel spectrum into a collection of constituent spectra, called endmembers, and a set of corresponding fractions (abundances) that indicate the proportion of each endmember present in the pixel. Similarly, when processing spectrum-images, microscopists usually try to map elemental, physical and chemical state information of a given material. This paper reports how a SU algorithm dedicated to remote sensing hyperspectral images can be successfully applied to analyze spectrum-image resulting from electron energy-loss spectroscopy (EELS). SU generally overcomes standard limitations inherent to other multivariate statistical analysis methods, such as principal component analysis (PCA) or independent component analysis (ICA), that have been previously used to analyze EELS maps. Indeed, ICA and PCA may perform poorly for linear spectral mixture analysis due to the strong dependence between the abundances of the different materials. One example is presented here to demonstrate the potential of this technique for EELS analysis. -- Highlights: ► EELS spectrum images are identical to hyperspectral images for Earth science. ► Spectral unmixing algorithms have proliferated in the remote sensing field. ► These powerful techniques can be successfully applied to EELS mapping. ► Potential

  5. Spectral mixture analysis of EELS spectrum-images.

    Science.gov (United States)

    Dobigeon, Nicolas; Brun, Nathalie

    2012-09-01

    Recent advances in detectors and computer science have enabled the acquisition and the processing of multidimensional datasets, in particular in the field of spectral imaging. Benefiting from these new developments, Earth scientists try to recover the reflectance spectra of macroscopic materials (e.g., water, grass, mineral types…) present in an observed scene and to estimate their respective proportions in each mixed pixel of the acquired image. This task is usually referred to as spectral mixture analysis or spectral unmixing (SU). SU aims at decomposing the measured pixel spectrum into a collection of constituent spectra, called endmembers, and a set of corresponding fractions (abundances) that indicate the proportion of each endmember present in the pixel. Similarly, when processing spectrum-images, microscopists usually try to map elemental, physical and chemical state information of a given material. This paper reports how a SU algorithm dedicated to remote sensing hyperspectral images can be successfully applied to analyze spectrum-image resulting from electron energy-loss spectroscopy (EELS). SU generally overcomes standard limitations inherent to other multivariate statistical analysis methods, such as principal component analysis (PCA) or independent component analysis (ICA), that have been previously used to analyze EELS maps. Indeed, ICA and PCA may perform poorly for linear spectral mixture analysis due to the strong dependence between the abundances of the different materials. One example is presented here to demonstrate the potential of this technique for EELS analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
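The linear mixing model the abstract describes, pixel spectrum = endmember matrix × abundances, can be sketched with a non-negativity-constrained least-squares solver. The Gaussian "endmember" spectra are invented stand-ins for real EELS components, and scipy's NNLS is a simpler solver than the dedicated SU algorithm used by the authors.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (columns): three "materials" over 50 channels
channels = np.arange(50)
E = np.stack([
    np.exp(-0.5 * ((channels - 10) / 4.0) ** 2),
    np.exp(-0.5 * ((channels - 25) / 6.0) ** 2),
    np.exp(-0.5 * ((channels - 40) / 3.0) ** 2),
], axis=1)

true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund        # noiseless mixed spectrum for one pixel

# Linear unmixing with a non-negativity constraint on the abundances
est, _ = nnls(E, pixel)
est /= est.sum()              # impose the sum-to-one abundance constraint
```

Repeating the solve for every pixel of a spectrum-image yields per-material abundance maps, which is exactly the output unmixing delivers for EELS data.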

  6. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  7. Quantitative analysis of in vivo confocal microscopy images: a review.

    Science.gov (United States)

    Patel, Dipika V; McGhee, Charles N

    2013-01-01

In vivo confocal microscopy (IVCM) is a non-invasive method of examining the living human cornea. The recent trend towards quantitative studies using IVCM has led to the development of a variety of methods for quantifying image parameters. When selecting IVCM images for quantitative analysis, it is important to be consistent regarding the location, depth, and quality of images. All images should be de-identified, randomized, and calibrated prior to analysis. Numerous image analysis software packages are available, each with its own advantages and disadvantages. Criteria for analyzing corneal epithelium, sub-basal nerves, keratocytes, endothelium, and immune/inflammatory cells have been developed, although there is inconsistency among research groups regarding parameter definition. The quantification of stromal nerve parameters, however, remains a challenge. Most studies report lower inter-observer repeatability compared with intra-observer repeatability, and observer experience is known to be an important factor. Standardization of IVCM image analysis through the use of a reading center would be crucial for any future large, multi-centre clinical trials using IVCM.

  8. Visual analysis of the computer simulation for both imaging and non-imaging optical systems

    Science.gov (United States)

    Barladian, B. K.; Potemin, I. S.; Zhdanov, D. D.; Voloboy, A. G.; Shapiro, L. S.; Valiev, I. V.; Birukov, E. D.

    2016-10-01

Typical results of optical simulation are images generated on virtual sensors of various kinds. As a rule, these images represent the two-dimensional distribution of light values in Cartesian coordinates (luminance, illuminance) or in polar coordinates (luminous intensity). Virtual sensors make it possible to calculate and design different kinds of illumination devices, to provide stray light analysis, to synthesize photorealistic images of three-dimensional scenes under the complex illumination generated by optical systems, etc. Based on rich experience in developing and practically using computer systems for virtual prototyping and photorealistic visualization, the authors formulated a number of basic requirements for the visualization and analysis of light simulation results represented as two-dimensional distributions of luminance, illuminance and luminous intensity values. The requirements include tone mapping operators, pseudo-color imaging, visualization of spherical panoramas, regression analysis, analysis of image sections and regions, analysis of pixel values, image data export, etc. All these requirements were successfully satisfied in the designed software component for visual analysis of light simulation results. The module "LumiVue" is an integral part of the "Lumicept" modeling system and of the corresponding plug-in for the CATIA computer-aided design product. A number of visual examples of the analysis of calculated two-dimensional distributions of luminous intensity, illuminance and luminance illustrate the article. The examples are results of the simulation and design of lighting optical systems, secondary optics for LEDs, stray light analysis, virtual prototyping and photorealistic rendering.
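Among the listed requirements, a tone-mapping operator is the easiest to illustrate. The minimal logarithmic operator below is an assumption for the sketch (the abstract does not specify which operators LumiVue implements); it compresses a high-dynamic-range luminance map into displayable [0, 1] values.

```python
import numpy as np

def tonemap_log(luminance):
    """Logarithmic tone mapping: compress HDR luminance into [0, 1]."""
    return np.log1p(luminance) / np.log1p(luminance.max())

# Simulated luminance values spanning several orders of magnitude (cd/m^2)
hdr = np.array([[0.0, 1.0], [100.0, 10000.0]])
ldr = tonemap_log(hdr)
```

The compressed values can then be fed to a pseudo-color palette or greyscale display, another of the visualization requirements the abstract lists.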

  9. Analysis of Fast- ICA Algorithm for Separation of Mixed Images

    Directory of Open Access Journals (Sweden)

    Tanmay Awasthy

    2013-10-01

Full Text Available Independent component analysis (ICA) is a recently developed method whose aim is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible. Such techniques are actively used in statistical image processing and in unsupervised neural learning applications. This paper presents the Fast Independent Component Analysis (FastICA) algorithm for the separation of mixed images. To solve blind signal separation problems, the ICA approach exploits the statistical independence of the source signals. The paper focuses on the theory and methods of ICA in contrast to classical transformations, along with the application of the method to blind source separation. To illustrate the algorithm, the unmixing process is visualized on a set of images, and simulations are presented to convey the results of the analysis.
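A minimal FastICA separation of two mixed signals can be run with scikit-learn; flattened images would be treated the same way. The sources and mixing matrix below are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

n = 2000
t = np.linspace(0, 8, n)
s1 = np.sign(np.sin(3 * t))      # square wave (non-Gaussian source)
s2 = (2 * t) % 2 - 1             # sawtooth (non-Gaussian source)
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix
X = S @ A.T                              # observed mixtures

S_est = FastICA(n_components=2, random_state=0).fit_transform(X)

# Components come back in arbitrary order and sign; match by |correlation|
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
```

Because ICA recovers sources only up to permutation and scaling, results are evaluated by matching each estimated component to the source it correlates with most strongly.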

  10. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis.

    Science.gov (United States)

    Sánchez, Clara I; Hornero, Roberto; López, María I; Aboy, Mateo; Poza, Jesús; Abásolo, Daniel

    2008-04-01

We present an automatic image processing algorithm to detect hard exudates. Automatic detection of hard exudates from retinal images is an important problem since hard exudates are associated with diabetic retinopathy and have been found to be one of the most prevalent earliest signs of retinopathy. The algorithm is based on Fisher's linear discriminant analysis and makes use of colour information to perform the classification of retinal exudates. We prospectively assessed the algorithm performance using a database containing 58 retinal images with variable colour, brightness, and quality. Our proposed algorithm obtained a sensitivity of 88% with a mean number of 4.83 ± 4.64 false positives per image using the lesion-based performance evaluation criterion, and achieved an image-based classification accuracy of 100% (sensitivity of 100% and specificity of 100%).
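The classification core, Fisher's linear discriminant on colour features, can be sketched as follows. The two pixel-feature clusters are simulated stand-ins for exudate and background pixels, not data from the 58-image database.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical colour features (e.g. RGB) for exudate vs. background pixels
n = 500
exudate = rng.normal([220, 200, 80], 15, size=(n, 3))     # bright yellowish
background = rng.normal([140, 60, 40], 15, size=(n, 3))   # reddish fundus
X = np.vstack([exudate, background])
y = np.r_[np.ones(n), np.zeros(n)]

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
acc = lda.score(X, y)   # training accuracy on the simulated clusters
```

LDA projects the colour features onto the direction that best separates the two classes, which is why colour information alone can drive the pixel-level exudate decision.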

  11. [Development of a high content protein beverage from Chilean mesquite, lupine and quinoa for the diet of pre-schoolers].

    Science.gov (United States)

    Cerezal Mezquita, P; Acosta Barrientos, E; Rojas Valdivia, G; Romero Palacios, N; Arcos Zavala, R

    2012-01-01

This research was aimed at developing a high-protein beverage from a mixture of liquid extracts of a pseudocereal, quinoa (Chenopodium quinoa Willd), and two legumes, mesquite (Prosopis chilensis (Mol.) Stunz) and lupine (Lupinus albus L.), native to the Andean highlands of the northern Chilean macro-zone, flavored with raspberry pulp, to help in the feeding of children between 2 and 5 years of age of lower socioeconomic status with nutritional deficiencies. The formulation was defined by linear programming, its composition was determined by proximate analysis, and physical, microbiological and sensory acceptance tests were performed. After 90 days of storage, the beverage had a protein content of 1.36%, with tryptophan as the limiting amino acid; for its part, the chromaticity coordinates of the CIEL*a*b* color space showed no statistically significant differences (p < 0.05), maintaining the "dark pink" tonality, and the viscosity and sensory evaluation remained acceptable for drinking.

  12. High content screening of a kinase-focused library reveals compounds broadly-active against dengue viruses.

    Directory of Open Access Journals (Sweden)

    Deu John M Cruz

Full Text Available Dengue virus is a mosquito-borne flavivirus that has a large impact on global health. It is considered one of the medically important arboviruses, and developing a preventive or therapeutic solution remains a top priority in the medical and scientific community. Drug discovery programs for potential dengue antivirals have increased dramatically over the last decade, largely due to the introduction of high-throughput assays. In this study, we have developed an image-based dengue high-throughput/high-content assay (HT/HCA) using an innovative computer vision approach to screen a kinase-focused library for anti-dengue compounds. Using this dengue HT/HCA, we identified a group of compounds sharing a 4-(1-aminoethyl)-N-methylthiazol-2-amine core structure that inhibit dengue viral infection in a human liver-derived cell line (Huh-7.5 cells). Compounds CND1201, CND1203 and CND1243 exhibited strong antiviral activities against all four dengue serotypes. Plaque reduction and time-of-addition assays suggest that these compounds interfere with the late stage of the viral infection cycle. These findings demonstrate that our image-based dengue HT/HCA is a reliable tool that can be used to screen various chemical libraries for potential dengue antiviral candidates.

  13. Acne image analysis: lesion localization and classification

    Science.gov (United States)

    Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.

    2016-03-01

Acne is a common skin condition present predominantly in the adolescent population, but it may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and resultant scars is more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an unvalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming considering the limited time available for a consultation. However, it is practical to record and analyze images, since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix are presented, and their effectiveness in separating the six major acne lesion classes is discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using the binary classification tree with fourteen principal components used as descriptors. Further studies are underway to further improve the algorithm performance and validate it on a larger database.
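The grey-level co-occurrence features can be illustrated with a minimal numpy GLCM (horizontal neighbours only) and the contrast feature. The two toy patches below are not acne images; they simply show a flat region scoring low and a high-frequency texture scoring high.

```python
import numpy as np

def glcm(img, levels):
    """Normalised grey-level co-occurrence matrix for horizontal neighbours."""
    m = np.zeros((levels, levels))
    np.add.at(m, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    return m / m.sum()

def contrast(p):
    """GLCM contrast feature: sum of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

# Toy 4-level patches: a flat region vs. an alternating high-frequency texture
smooth = np.zeros((8, 8), dtype=int)
rough = (np.indices((8, 8)).sum(axis=0) % 2) * 3   # checkerboard of 0 and 3

c_smooth = contrast(glcm(smooth, 4))
c_rough = contrast(glcm(rough, 4))
```

Features such as contrast, computed over multiple offsets and directions, are the kind of texture descriptors fed to the lesion classifiers discussed in the paper.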

  14. Glioblastoma multiforme: exploratory radiogenomic analysis by using quantitative image features.

    Science.gov (United States)

    Gevaert, Olivier; Mitchell, Lex A; Achrol, Achal S; Xu, Jiajing; Echegaray, Sebastian; Steinberg, Gary K; Cheshier, Samuel H; Napel, Sandy; Zaharchuk, Greg; Plevritis, Sylvia K

    2014-10-01

    To derive quantitative image features from magnetic resonance (MR) images that characterize the radiographic phenotype of glioblastoma multiforme (GBM) lesions and to create radiogenomic maps associating these features with various molecular data. Clinical, molecular, and MR imaging data for GBMs in 55 patients were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive after local ethics committee and institutional review board approval. Regions of interest (ROIs) corresponding to enhancing necrotic portions of tumor and peritumoral edema were drawn, and quantitative image features were derived from these ROIs. Robust quantitative image features were defined on the basis of an intraclass correlation coefficient of 0.6 for a digital algorithmic modification and a test-retest analysis. The robust features were visualized by using hierarchic clustering and were correlated with survival by using Cox proportional hazards modeling. Next, these robust image features were correlated with manual radiologist annotations from the Visually Accessible Rembrandt Images (VASARI) feature set and GBM molecular subgroups by using nonparametric statistical tests. A bioinformatic algorithm was used to create gene expression modules, defined as a set of coexpressed genes together with a multivariate model of cancer driver genes predictive of the module's expression pattern. Modules were correlated with robust image features by using the Spearman correlation test to create radiogenomic maps and to link robust image features with molecular pathways. Eighteen image features passed the robustness analysis and were further analyzed for the three types of ROIs, for a total of 54 image features. Three enhancement features were significantly correlated with survival, 77 significant correlations were found between robust quantitative features and the VASARI feature set, and seven image features were correlated with molecular subgroups (P < .05 for all). A radiogenomics map was

  15. Analysis of Two-Dimensional Electrophoresis Gel Images

    DEFF Research Database (Denmark)

    Pedersen, Lars

    2002-01-01

    This thesis describes and proposes solutions to some of the currently most important problems in pattern recognition and image analysis of two-dimensional gel electrophoresis (2DGE) images. 2DGE is the leading technique to separate individual proteins in biological samples with many biological...... the methods developed in the literature specifically for matching protein spot patterns, the focus is on a method based on neighbourhood relations. These methods are applied to a range of 2DGE protein spot data in a comparative study. The point pattern matching requires segmentation of the gel images...... and since the correct image segmentation can be difficult, a new alternative approach, exploiting prior knowledge from a reference gel about the protein locations to segment an incoming gel image, is proposed....

  16. Image analysis of ocular fundus for retinopathy characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (DRIVE database) and hundreds of images of non-macula-centric and nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2), using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysm detection.

  17. Deep Learning for Intelligent Substation Device Infrared Fault Image Analysis

    Directory of Open Access Journals (Sweden)

    Lin Ying

    2016-01-01

    Full Text Available As an important kind of data for device status evaluation, the growing volume of infrared image data in electrical systems poses a new challenge to the traditional manual processing mode. To overcome this problem, this paper proposes a feasible way to automatically process massive infrared fault images. We take advantage of the imaging characteristics of infrared fault images and detect fault regions, together with the device part they belong to, using our proposed algorithm, which first segments images into superpixels and then adopts state-of-the-art convolutional and recursive neural networks for intelligent object recognition. In the experiment, we compare several unsupervised pre-training methods, considering the importance of a pre-training procedure, and discuss the proper parameters for the proposed network. The experimental results show the good performance of our algorithm and its efficiency for infrared analysis.

  18. Standardization of Image Quality Analysis – ISO 19264

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Wüller, Dietmar

    2016-01-01

    There are a variety of image quality analysis tools available for the archiving world, which are based on different test charts and analysis algorithms. ISO has formed a working group in 2012 to harmonize these approaches and create a standard way of analyzing the image quality for archiving...... systems. This has resulted in three documents that have been or are going to be published soon. ISO 19262 defines the terms used in the area of image capture to unify the language. ISO 19263 describes the workflow issues and provides detailed information on how the measurements are done. Last...... but not least ISO 19264 describes the measurements in detail and provides aims and tolerance levels for the different aspects. This paper will present the new ISO 19264 technical specification to analyze image quality based on a single capture of a multi-pattern test chart, and discuss the reasoning behind its...

  19. Issues in Quantitative Analysis of Ultraviolet Imager (UV) Data: Airglow

    Science.gov (United States)

    Germany, G. A.; Richards, P. G.; Spann, J. F.; Brittnacher, M. J.; Parks, G. K.

    1999-01-01

    The GGS Ultraviolet Imager (UVI) has proven to be especially valuable in correlative substorm, auroral morphology, and extended statistical studies of the auroral regions. Such studies are based on knowledge of the location, spatial, and temporal behavior of auroral emissions. More quantitative studies, based on absolute radiometric intensities from UVI images, require a more intimate knowledge of the instrument behavior and data processing requirements and are inherently more difficult than studies based on relative knowledge of the oval location. In this study, UVI airglow observations are analyzed and compared with model predictions to illustrate issues that arise in quantitative analysis of UVI images. These issues include instrument calibration, long term changes in sensitivity, and imager flat field response as well as proper background correction. Airglow emissions are chosen for this study because of their relatively straightforward modeling requirements and because of their implications for thermospheric compositional studies. The analysis issues discussed here, however, are identical to those faced in quantitative auroral studies.

  2. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  3. Peripheral blood smear image analysis: A comprehensive review.

    Science.gov (United States)

    Mohammed, Emad A; Mohamed, Mostafa M A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps consisting of image segmentation, features extraction and selection and pattern classification. The image segmentation step addresses the problem of extraction of the object or region of interest from the complicated peripheral blood smear image. Support vector machine (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Features extraction and selection aims to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This will facilitate the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without giving an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification algorithms. Increased discrimination may be obtained by combining several classifiers together.

  4. Peripheral blood smear image analysis: A comprehensive review

    Directory of Open Access Journals (Sweden)

    Emad A Mohammed

    2014-01-01

    Full Text Available Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps consisting of image segmentation, features extraction and selection and pattern classification. The image segmentation step addresses the problem of extraction of the object or region of interest from the complicated peripheral blood smear image. Support vector machine (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Features extraction and selection aims to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This will facilitate the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without giving an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification algorithms. Increased discrimination may be obtained by combining several classifiers together.
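Of the classifiers surveyed in this review, K-nearest neighbor is the simplest to sketch. The two-dimensional toy feature vectors below are invented for illustration; a real blood smear pipeline would use the morphological features extracted in the earlier steps:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each sample
    nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest samples
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]            # majority label

# toy 2-D features: class 0 clustered near the origin, class 1 near (5, 5)
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.3],
              [5.0, 5.1], [4.8, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.1, 0.1])))  # → 0
print(knn_predict(X, y, np.array([5.0, 5.0])))  # → 1
```

The same interface generalizes to the supervised/unsupervised distinction the review draws: supervised methods like this one need `y_train`, whereas clustering methods would group `X` by similarity alone.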

  5. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising...... base these illustrations on two fMRI BOLD data sets — one from a simple finger tapping experiment and the other from an experiment on object recognition in the ventral temporal lobe....

  6. IJBlob: An ImageJ Library for Connected Component Analysis and Shape Analysis

    OpenAIRE

    2013-01-01

    The IJBlob library is a free ImageJ library for connected component analysis. Furthermore, it implements several contour based shape features to describe, filter or classify binary objects in images. Other features are extensible by the IJBlob extension framework. Because connected component labeling is a fundamental operation in many image processing pipelines (e.g. pattern recognition), the library could be useful for many ImageJ projects. The library is written in Java and the recent relea...
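Connected component analysis, the core operation this library provides, can be sketched as a breadth-first flood fill. This is a generic 4-connected illustration in Python, not IJBlob's Java implementation (which is contour-based):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling of a binary image via BFS flood fill.
    Returns a label image (0 = background) and the number of components."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # start a new component
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 1]])
labels, n = label_components(img)
print(n)  # → 2
```

Shape features (area, perimeter, circularity) can then be computed per label, which is the filtering/classification use case the library targets.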

  7. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01

    Database of Spent Cartridge Cases of Firearms". Forensic Science International . Page(s) 97-106. 2001. 21: Birchfield, S. "Derivation of Kanade-Lucas-Tomasi...Ortega-Garcia, J. "Bayesian Analysis of Fingerprint, Face and Signature Evidences with Automatic Biometric Systems". Forensic Science International . Vol

  8. Evaluation of stereoscopic 3D displays for image analysis tasks

    Science.gov (United States)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, which results in a higher performance. Changing image acquisition from analog to digital techniques entailed the change of stereoscopic visualisation techniques. Recently different kinds of digital stereoscopic display techniques with affordable prices have appeared on the market. At Fraunhofer IITB usability tests were carried out to find out (1) with which kind of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve a high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with a higher performance using stereoscopic display techniques. Next, observer experiments were carried out whereby image analysts had to solve defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the used display techniques) two of the examined stereoscopic display technologies were found to be very good and appropriate.

  9. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate the segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
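The Mann-Whitney test underlying BASICA's segmentation can be sketched as a rank-based decision: are the candidate spot pixels stochastically brighter than the surrounding background? The patch layout, z-score threshold, and normal approximation below are illustrative assumptions, not the tool's actual algorithm:

```python
import numpy as np

def mann_whitney_spot_test(patch, inner, z_thresh=3.0):
    """Decide whether the inner region of a patch is a real spot by comparing
    the Mann-Whitney U statistic to its null mean/variance (normal approx.)."""
    mask = np.zeros(patch.shape, dtype=bool)
    mask[inner] = True
    fg, bg = patch[mask].ravel(), patch[~mask].ravel()
    # U = number of (foreground, background) pairs with fg brighter (+0.5 for ties)
    u = sum((a > b) + 0.5 * (a == b) for a in fg for b in bg)
    n, m = len(fg), len(bg)
    mean = n * m / 2.0
    sd = np.sqrt(n * m * (n + m + 1) / 12.0)
    return (u - mean) / sd > z_thresh

rng = np.random.default_rng(0)
patch = rng.normal(10.0, 1.0, (9, 9))
patch[3:6, 3:6] += 8.0                                  # simulated hybridized spot
print(mann_whitney_spot_test(patch, np.s_[3:6, 3:6]))   # → True

blank = rng.normal(10.0, 1.0, (9, 9))                   # no spot present
print(mann_whitney_spot_test(blank, np.s_[3:6, 3:6]))
```

Being rank-based, the decision is robust to the heavy-tailed intensity distributions common in microarray scans, which is presumably why a Mann-Whitney-style test was chosen over a simple mean-intensity threshold.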

  10. Satellite Image Pansharpening Using a Hybrid Approach for Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Nguyen Thanh Hoan

    2012-10-01

    Full Text Available Intensity-Hue-Saturation (IHS), Brovey Transform (BT), and Smoothing-Filter-Based-Intensity Modulation (SFIM) algorithms were used to pansharpen GeoEye-1 imagery. The pansharpened images were then segmented in Berkeley Image Seg using a wide range of segmentation parameters, and the spatial and spectral accuracy of image segments was measured. We found that pansharpening algorithms that preserve more of the spatial information of the higher resolution panchromatic image band (i.e., IHS and BT) led to more spatially-accurate segmentations, while pansharpening algorithms that minimize the distortion of spectral information of the lower resolution multispectral image bands (i.e., SFIM) led to more spectrally-accurate image segments. Based on these findings, we developed a new IHS-SFIM combination approach, specifically for object-based image analysis (OBIA), which combined the better spatial information of IHS and the more accurate spectral information of SFIM to produce image segments with very high spatial and spectral accuracy.
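The IHS substitution idea can be sketched in its simplest additive form, with intensity taken as the band mean. Real IHS pansharpening of GeoEye-1 imagery involves upsampling and histogram matching that are omitted here; the toy arrays are invented for illustration:

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Fast additive IHS fusion: substitute the multispectral intensity
    component with the panchromatic band.
    ms: (H, W, 3) multispectral image already resampled to pan resolution
    pan: (H, W) panchromatic band."""
    intensity = ms.mean(axis=2)                 # simple intensity component
    return ms + (pan - intensity)[..., None]    # inject pan spatial detail into every band

ms = np.full((2, 2, 3), 0.4)
ms[..., 0] = 0.2                                # make the bands differ
pan = np.array([[0.5, 0.1],
                [0.3, 0.3]])
sharp = ihs_pansharpen(ms, pan)
print(np.allclose(sharp.mean(axis=2), pan))     # → True
```

The check at the end shows the key property: the fused image's per-pixel intensity equals the pan band exactly, which is why IHS-type methods transfer spatial detail so well while risking the spectral distortion that SFIM is designed to avoid.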

  11. Image edge detection based on multi-fractal spectrum analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Shao-yuan; WANG Yao-nan

    2006-01-01

    In this paper, an image edge detection method based on multi-fractal spectrum analysis is presented. The coarse grain Hölder exponent of the image pixels is first computed, then its multi-fractal spectrum is estimated by the kernel estimation method. Finally, the image edge detection is done by means of different multi-fractal spectrum values. Simulation results show that this method is efficient and has better locality compared with the traditional edge detection methods such as the Sobel method.

  12. Multiphoton autofluorescence spectral analysis for fungus imaging and identification

    Science.gov (United States)

    Lin, Sung-Jan; Tan, Hsin-Yuan; Kuo, Chien-Jui; Wu, Ruei-Jr; Wang, Shiou-Han; Chen, Wei-Liang; Jee, Shiou-Hwa; Dong, Chen-Yuan

    2009-07-01

    We performed multiphoton imaging on fungi of medical significance. Fungal hyphae and spores of Aspergillus flavus, Microsporum gypseum, Microsporum canis, Trichophyton rubrum, and Trichophyton tonsurans were found to be strongly autofluorescent but to generate a less prominent second harmonic signal. The cell wall and septum of fungal hyphae can be easily identified by autofluorescence imaging. We found that fungi of various species have distinct autofluorescence characteristics. Our result shows that the combination of multiphoton imaging and spectral analysis can be used to visualize and identify fungal species. This approach may be developed into an effective diagnostic tool for fungal identification.

  13. Water imaging in living plant by nondestructive neutron beam analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, M. Tomoko [Graduate School of Agricultural and Life Sciences, Univ. of Tokyo, Tokyo (Japan)

    1998-12-31

    Analysis of biological activity in intact cells or tissues is essential to understand many life processes. Techniques for these in vivo measurements have not been well developed. We present here a nondestructive method to image water in living plants using a neutron beam. This technique provides the highest resolution for water in tissue yet obtainable. With high specificity to water, this neutron beam technique images water movement in seeds or in roots embedded in soil, as well as in wood and meristems during development. The resolution of the image attainable now is about 15 µm. We also describe how this new technique will allow new investigations in the field of plant research. (author)

  14. Image Analysis of Fabric Pilling Based on Light Projection

    Institute of Scientific and Technical Information of China (English)

    陈霞; 黄秀宝

    2003-01-01

    The objective assessment of fabric pilling based on light projection and image analysis has been exploited recently. The device for capturing the cross-sectional images of the pilled fabrics with light projection is elaborated. The detection of the profile line and integration of the sequential cross-sectional pilled images are discussed. A threshold based on a Gaussian model is recommended for pill segmentation. The results show that the installed system is capable of eliminating the interference with pill information from the fabric color and pattern.

  15. Automatic quantitative analysis of cardiac MR perfusion images

    Science.gov (United States)

    Breeuwer, Marcel M.; Spreeuwers, Luuk J.; Quist, Marcel J.

    2001-07-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later on be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
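The time-intensity profile parameters described above can be sketched as follows. The specific parameters computed here (peak enhancement, time to peak, maximum upslope) are common choices in the perfusion literature, not necessarily the exact set the authors measure:

```python
import numpy as np

def perfusion_params(t, intensity):
    """Summary parameters of a first-pass time-intensity curve for one
    myocardial position: peak enhancement over baseline, time to peak,
    and maximum upslope (finite-difference estimate)."""
    baseline = intensity[0]
    enhancement = intensity - baseline
    peak = enhancement.max()                        # peak signal enhancement
    t_peak = t[np.argmax(enhancement)]              # time of peak enhancement
    upslope = np.max(np.diff(intensity) / np.diff(t))  # steepest wash-in rate
    return peak, t_peak, upslope

# toy curve: baseline, wash-in of contrast agent, then wash-out
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.0, 10.0, 14.0, 20.0, 18.0, 16.0])
peak, t_peak, upslope = perfusion_params(t, y)
print(peak, t_peak, upslope)  # → 10.0 3.0 6.0
```

Computed per pixel within the registered myocardial boundary, such parameters yield the color-overlay perfusion maps the abstract describes.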

  16. SIMA: Python software for analysis of dynamic fluorescence imaging data

    Directory of Open Access Journals (Sweden)

    Patrick Kaifosh

    2014-09-01

    Full Text Available Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  17. Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop

    Science.gov (United States)

    Vane, G. (Editor); Goetz, A. F. H. (Editor)

    1985-01-01

    The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.

  18. Breast Image Analysis for Risk Assessment, Detection, Diagnosis, and Treatment of Cancer

    NARCIS (Netherlands)

    Giger, M.L.; Karssemeijer, N.; Schnabel, J.A.

    2013-01-01

    The role of breast image analysis in radiologists' interpretation tasks in cancer risk assessment, detection, diagnosis, and treatment continues to expand. Breast image analysis methods include segmentation, feature extraction techniques, classifier design, biomechanical modeling, image registration

  19. Nanobiodevices for Biomolecule Analysis and Imaging

    Science.gov (United States)

    Yasui, Takao; Kaji, Noritada; Baba, Yoshinobu

    2013-06-01

    Nanobiodevices have been developed to analyze biomolecules and cells for biomedical applications. In this review, we discuss several nanobiodevices used for disease-diagnostic devices, molecular imaging devices, regenerative medicine, and drug-delivery systems and describe the numerous advantages of nanobiodevices, especially in biological, medical, and clinical applications. This review also outlines the fabrication technologies for nanostructures and nanomaterials, including top-down nanofabrication and bottom-up molecular self-assembly approaches. We describe nanopillar arrays and nanowall arrays for the ultrafast separation of DNA or protein molecules and nanoball materials for the fast separation of a wide range of DNA molecules, and we present examples of applications of functionalized carbon nanotubes to obtain information about subcellular localization on the basis of mobility differences between free fluorophores and fluorophore-labeled carbon nanotubes. Finally, we discuss applications of newly synthesized quantum dots to the screening of small interfering RNA, highly sensitive detection of disease-related proteins, and development of cancer therapeutics and diagnostics.

  20. Perfect imaging analysis of the spherical geodesic waveguide

    Science.gov (United States)

    González, Juan C.; Benítez, Pablo; Miñano, Juan C.; Grabovičkić, Dejan

    2012-12-01

    The Negative Refractive Lens (NRL) has shown that an optical system can produce images with details below the classic Abbe diffraction limit. This optical system transmits the electromagnetic fields, emitted by an object plane, towards an image plane, producing the same field distribution in both planes. In particular, a Dirac delta electric field in the object plane is focused without diffraction limit to a Dirac delta electric field in the image plane. Two devices with positive refraction, the Maxwell Fish Eye lens (MFE) and the Spherical Geodesic Waveguide (SGW), have been claimed to break the diffraction limit using positive refraction, although with a different meaning. In these cases, what has been considered is the power transmission from a point source to a point receptor, which falls drastically when the receptor is displaced from the focus by a distance much smaller than the wavelength. Although these systems can detect displacements up to λ/3000, they cannot be compared to the NRL, since the concept of image is different: the SGW deals only with a point source and drain, while in the case of the NRL there is an object and an image surface. Here, an analysis of the SGW with defined object and image surfaces (both conical surfaces), similar to the case of the NRL, is presented. The results show that a Dirac delta electric field on the object surface produces an image below the diffraction limit on the image surface.

  1. Fractal-based image texture analysis of trabecular bone architecture.

    Science.gov (United States)

    Jiang, C; Pitt, R E; Bertram, J E; Aneshansley, D J

    1999-07-01

    Fractal-based image analysis methods are investigated to extract textural features related to the anisotropic structure of trabecular bone from X-ray images of cubic bone specimens. Three methods are used to quantify image textural features: power spectrum, Minkowski dimension and mean intercept length. The global fractal dimension is used to describe the overall roughness of the image texture. The anisotropic features formed by the trabeculae are characterised by a fabric ellipse, whose orientation and eccentricity reflect the textural anisotropy of the image. Tests of these methods with synthetic images of known fractal dimension show that the Minkowski dimension provides a more accurate and consistent estimate of the global fractal dimension. Tests on bone X-ray images (eccentricity range 0.25-0.80) indicate that the Minkowski dimension is more sensitive to changes in textural orientation. The results suggest that the Minkowski dimension is a better measure for characterising trabecular bone anisotropy in X-ray images of thick specimens.
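    The Minkowski dimension favoured by this study is, in practice, estimated by box counting; the sketch below (Python/NumPy, not the authors' code; the box sizes and test image are illustrative choices) counts occupied boxes at several scales and fits the log-log slope:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the Minkowski (box-counting) dimension of a 2-D binary
    image: for each box size s, count boxes containing at least one
    foreground pixel; the dimension is the slope of log(count)
    versus log(1/s)."""
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        trimmed = binary[:h, :w]
        # Reduce each s x s box to a single occupancy flag.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is a solid planar region of dimension 2.
img = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(img)
```

    A binarized trabecular-bone X-ray would instead yield a non-integer value between 1 and 2.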

  2. Image analysis of moving seeds in an indented cylinder

    DEFF Research Database (Denmark)

    Buus, Ole; Jørgensen, Johannes Ravn

    2010-01-01

    ...-Spline surfaces. Using image analysis, the seeds will be tracked with a Kalman filter, and the 2D trajectory, length, velocity, weight, and rotation will be sampled. We expect a high correspondence between seed length and certain spatially optimal seed trajectories. This work is done in collaboration with Westrup ... threshold. The threshold depends on a number of different parameters: besides the seed length, the rotation, general size, shape, and surface texture of each seed are also known to influence the final sorting result. Such knowledge comes from previous experimentation with the indented cylinder. In our ... work we will seek to understand more about the internal dynamics of the indented cylinder. We will apply image analysis to observe the movement of seeds in the indented cylinder. This work lays the groundwork for future studies into the application of image analysis as a tool for autonomous ...
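    The proposed Kalman-filter tracking can be sketched with a minimal constant-velocity filter; the state layout, noise covariances and the synthetic seed track below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2-D seed tracking.
# State = [x, y, vx, vy]; we measure position [x, y] only.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # measurement model
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.5 * np.eye(2)                         # measurement noise (assumed)

def kalman_step(x, P, z):
    # Predict the next state and covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the measured 2-D position z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a seed moving at a constant (1.0, 0.5) px/frame.
x = np.zeros(4)
P = np.eye(4)
for t in range(1, 20):
    z = np.array([t * 1.0, t * 0.5])
    x, P = kalman_step(x, P, z)
```

    After a few frames, the velocity components of the state converge to the true seed velocity, which is the quantity the sorting analysis needs.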

  3. Studying developmental variation with Geometric Morphometric Image Analysis (GMIA).

    Science.gov (United States)

    Mayer, Christine; Metscher, Brian D; Müller, Gerd B; Mitteroecker, Philipp

    2014-01-01

    The ways in which embryo development can vary across individuals of a population determine how genetic variation translates into adult phenotypic variation. The study of developmental variation has been hampered by the lack of quantitative methods for the joint analysis of embryo shape and the spatial distribution of cellular activity within the developing embryo geometry. By drawing from the strengths of geometric morphometrics and pixel/voxel-based image analysis, we present a new approach for the biometric analysis of two-dimensional and three-dimensional embryonic images. Well-differentiated structures are described in terms of their shape, whereas structures with diffuse boundaries, such as emerging cell condensations or molecular gradients, are described as spatial patterns of intensities. We applied this approach to microscopic images of the tail fins of larval and juvenile rainbow trout. Inter-individual variation of shape and cell density was found to be highly spatially structured across the tail fin and temporally dynamic throughout the investigated period.

  4. FIBER ORIENTATION DISTRIBUTION OF PAPER SURFACE CALCULATED BY IMAGE ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    Toshiharu Enomae; Yoon-Hee Han; Akira Isogai

    2004-01-01

    Anisotropy is an important parameter of paper structure. An image analysis technique was improved for accurate measurement of fiber orientation in paper surfaces. Image analysis using the Fast Fourier Transform was demonstrated to be an effective means of determining the fiber orientation angle and its intensity. Binarization of micrograph images of the paper surface, together with precise calculation of the average Fourier coefficients as an angular distribution by interpolation, was found to improve the accuracy. This analysis method was applied to digital optical micrographs and scanning electron micrographs of paper. A laboratory handsheet showed a large deviation in the average fiber orientation angle, but several kinds of machine-made paper showed an orientation angle of about 90 degrees with very small deviations, as expected. Korean and Japanese papers made in the traditional ways showed their own characteristics, depending on the hand-making processes.
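    The FFT approach can be sketched as follows: the 2-D power spectrum of an oriented texture is elongated perpendicular to the fibers, so the power-weighted circular mean of the spectral angles (computed with period 180°) gives the fiber orientation after a 90° shift. The mask radii and the synthetic stripe image below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def dominant_orientation(img):
    """Estimate the dominant orientation angle (degrees, 0-180) of a
    grayscale texture from the angular distribution of its 2-D Fourier
    power spectrum (assumes even image dimensions)."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(xx, yy)
    mask = (r > 2) & (r < min(h, w) // 4)   # ignore DC and high-freq noise
    theta = np.degrees(np.arctan2(yy[mask], xx[mask])) % 180
    wts = power[mask]
    # Power-weighted circular mean for period-180 data (angles doubled).
    ang = np.degrees(np.angle(np.sum(wts * np.exp(2j * np.radians(theta))))) / 2 % 180
    # The spectrum is elongated perpendicular to the texture: shift 90 deg.
    return (ang + 90) % 180

# Synthetic texture: horizontal stripes, i.e. "fibers" running at 0 degrees.
y = np.arange(128)
img = np.tile(np.sin(2 * np.pi * y / 8)[:, None], (1, 128))
angle = dominant_orientation(img)
```

    For the horizontal-stripe image the estimated angle lands near 0° (equivalently 180°), matching the stripe direction.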

  5. MR image analysis: Longitudinal cardiac motion influences left ventricular measurements

    Energy Technology Data Exchange (ETDEWEB)

    Berkovic, Patrick [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: pberko17@hotmail.com; Hemmink, Maarten [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: maartenhemmink@gmail.com; Parizel, Paul M. [University Hospital Antwerp, Department of Radiology (Belgium)], E-mail: paul.parizel@uza.be; Vrints, Christiaan J. [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: chris.vrints@uza.be; Paelinck, Bernard P. [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: Bernard.paelinck@uza.be

    2010-02-15

    Background: Software for the analysis of left ventricular (LV) volumes and mass using border detection in short-axis images only is hampered by through-plane cardiac motion. We therefore aimed to evaluate software that takes longitudinal cardiac motion into account. Methods: Twenty-three consecutive patients underwent 1.5-Tesla cine magnetic resonance (MR) imaging of the entire heart in the long-axis and short-axis orientation with breath-hold steady-state free precession imaging. Offline analysis was performed using software that uses short-axis images only (Medis MASS) and software that includes two-chamber and four-chamber images to account for longitudinal LV expansion and shortening (CAAS-MRV). Intraobserver and interobserver reproducibility was assessed using Bland-Altman analysis. Results: Compared with MASS software, CAAS-MRV resulted in significantly smaller end-diastolic (156 ± 48 ml versus 167 ± 52 ml, p = 0.001) and end-systolic LV volumes (79 ± 48 ml versus 94 ± 52 ml, p < 0.001). In addition, CAAS-MRV resulted in a higher LV ejection fraction (52 ± 14% versus 46 ± 13%, p < 0.001) and calculated LV mass (154 ± 52 g versus 142 ± 52 g, p = 0.004). Intraobserver and interobserver limits of agreement were similar for both methods. Conclusion: MR analysis of LV volumes and mass accounting for long-axis LV motion is a highly reproducible method, resulting in smaller LV volumes, higher ejection fraction and higher calculated LV mass.

  6. ASTER Imaging and Analysis of Glacier Hazards

    Science.gov (United States)

    Kargel, Jeffrey; Furfaro, Roberto; Kaser, Georg; Leonard, Gregory; Fink, Wolfgang; Huggel, Christian; Kääb, Andreas; Raup, Bruce; Reynolds, John; Wolfe, David; Zapata, Marco

    Most scientific attention to glaciers, including ASTER and other satellite-derived applications in glacier science, pertains to their roles in the following seven functions: (1) as signposts of climate change (Kaser et al. 1990; Williams and Ferrigno 1999, 2002; Williams et al. 2008; Kargel et al. 2005; Oerlemans 2005), (2) as natural reservoirs of fresh water (Yamada and Motoyama 1988; Yang and Hu 1992; Shiyin et al. 2003; Juen et al. 2007), (3) as contributors to sea-level change (Arendt et al. 2002), (4) as sources of hydropower (Reynolds 1993); much work also relates to the basic science of glaciology, especially (5) the physical phenomenology of glacier flow processes and glacier change (DeAngelis and Skvarca 2003; Berthier et al. 2007; Rivera et al. 2007), (6) glacial geomorphology (Bishop et al. 1999, 2003), and (7) the technology required to acquire and analyze satellite images of glaciers (Bishop et al. 1999, 2000, 2003, 2004; Quincey et al. 2005, 2007; Raup et al. 2000, 2006a, b; Khalsa et al. 2004; Paul et al. 2004a, b). These seven functions define the important areas of glaciological science and technology, yet a more pressing issue in parts of the world is the direct danger to people and infrastructure posed by some glaciers (Trask 2005; Morales 1969; Lliboutry et al. 1977; Evans and Clague 1988; Xu and Feng 1989; Reynolds 1993, 1998, 1999; Yamada and Sharma 1993; Hastenrath and Ames 1995; Mool 1995; Ames 1998; Chikita et al. 1999; Williams and Ferrigno 1999; Richardson and Reynolds 2000a, b; Zapata 2002; Huggel et al. 2002, 2004; Xiangsong 1992; Kääb et al. 2003, 2005, 2005c; Salzmann et al. 2004; Noetzli et al. 2006).

  7. High-content behavioral analysis of Caenorhabditis elegans in precise spatiotemporal chemical environments.

    Science.gov (United States)

    Albrecht, Dirk R; Bargmann, Cornelia I

    2011-06-12

    To quantitatively understand chemosensory behaviors, it is desirable to present many animals with repeatable, well-defined chemical stimuli. To that end, we describe a microfluidic system to analyze Caenorhabditis elegans behavior in defined temporal and spatial stimulus patterns. A 2 cm × 2 cm structured arena allowed C. elegans to perform crawling locomotion in a controlled liquid environment. We characterized behavioral responses to attractive odors with three stimulus patterns: temporal pulses, spatial stripes and a linear concentration gradient, all delivered in the fluid phase to eliminate variability associated with air-fluid transitions. Different stimulus configurations preferentially revealed turning dynamics in a biased random walk, directed orientation into an odor stripe and speed regulation by odor. We identified both expected and unexpected responses in wild-type worms and sensory mutants by quantifying dozens of behavioral parameters. The devices are inexpensive, easy to fabricate, reusable and suitable for delivering any liquid-borne stimulus.

  8. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction therefore do not influence the selected indices derived from EIT image analysis, and indices validated on images from one reconstruction algorithm are also valid for the other reconstruction algorithms.
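    Of the indices listed, the global inhomogeneity (GI) index has a compact definition: the summed absolute deviation of each lung pixel's tidal impedance change from the median lung value, normalised by the total tidal change. A sketch (the tidal image and lung mask are assumed inputs):

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """Global inhomogeneity (GI) index: summed absolute deviation of
    lung-pixel tidal impedance changes from their median, normalised
    by the total tidal change. Lower = more homogeneous ventilation."""
    tv = tidal_image[lung_mask]
    med = np.median(tv)
    return np.abs(tv - med).sum() / tv.sum()

# Perfectly homogeneous ventilation gives GI = 0.
img = np.full((32, 32), 5.0)
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
gi_homog = global_inhomogeneity_index(img, mask)

# A split lung (half value 1, half value 9, median 5) gives GI = 0.8.
img2 = img.copy()
img2[8:16, 8:24] = 1.0
img2[16:24, 8:24] = 9.0
gi_mixed = global_inhomogeneity_index(img2, mask)
```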

  9. LIRA: Low-Count Image Reconstruction and Analysis

    Science.gov (United States)

    Stein, Nathan; van Dyk, David; Connors, Alanna; Siemiginowska, Aneta; Kashyap, Vinay

    2009-09-01

    LIRA is a new software package for the R statistical computing language. The package is designed for multi-scale non-parametric image analysis for use in high-energy astrophysics. The code implements an MCMC sampler that simultaneously fits the image and the necessary tuning/smoothing parameters in the model (an advance over `EMC2' of Esch et al. 2004). The model-based approach allows for quantification of the standard error of the fitted image and can be used to assess the statistical significance of features in the image or to evaluate the goodness-of-fit of a proposed model. The method does not rely on Gaussian approximations, instead modeling image counts as Poisson data, making it suitable for images with extremely low counts. LIRA can include a null (or background) model and fit the departure between the observed data and the null model via a wavelet-like multi-scale component. The technique is therefore suited to problems in which some aspect of an observation is well understood (e.g., a point source), but questions remain about observed departures. To quantitatively test for the presence of diffuse structure unaccounted for by a point-source null model, the observed image is first fit with the null model. Second, multiple simulated images, generated as Poisson realizations of the point-source model, are fit using the same null model. MCMC samples from the posterior distributions of the parameters of the fitted models can then be compared and used to calibrate the misfit between the observed data and the null model. Additionally, the output from LIRA includes the MCMC draws of the multi-scale component images, so that the departure of the (simulated or observed) data from the point-source null model can be examined visually. To demonstrate LIRA, an example of reconstructing Chandra images of high-redshift quasars with jets is presented.

  10. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features that provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  11. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis.

    Directory of Open Access Journals (Sweden)

    Nan Lin

    Full Text Available Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features that provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis.

  12. The analysis of image feature robustness using cometcloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms the other four texture descriptors but also requires the longest computation time; it is roughly 10 times slower than the local binary pattern and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
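    One of the five descriptors, the local binary pattern, is simple enough to sketch directly: each pixel's eight neighbours are thresholded against the centre and packed into an 8-bit code, and the image-wide histogram of codes serves as the texture feature. This is the basic 3×3 variant, an illustrative simplification of whatever LBP configuration the study used:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of
    each interior pixel against the centre and pack them into an
    8-bit code (no interpolation or uniform-pattern mapping)."""
    c = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes: the texture feature
    vector used for classification or retrieval."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# A constant image maps every pixel to code 255 (all neighbours >= centre).
flat = np.zeros((16, 16))
h = lbp_histogram(flat)
```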

  13. Image classification based on scheme of principal node analysis

    Science.gov (United States)

    Yang, Feng; Ma, Zheng; Xie, Mei

    2016-11-01

    This paper presents a scheme of principal node analysis (PNA) that aims to improve the representativeness of the learned codebook and thereby enhance the classification rate of scene images. Original images are normalized into gray-scale images and scale-invariant feature transform (SIFT) descriptors are extracted from each image in the preprocessing stage. Then, the PNA-based scheme is applied to the SIFT descriptors with iteration and selection algorithms. The principal nodes of each image are selected through spatial analysis of the SIFT descriptors with the Manhattan distance (L1 norm) and the Euclidean distance (L2 norm) in order to increase the representativeness of the codebook. To evaluate the performance of our scheme, the feature vector of each image is calculated by two baseline methods after the codebook is constructed. The L1-PNA- and L2-PNA-based baseline methods are tested and compared at different codebook scales over three public scene image databases. The experimental results show the effectiveness of the proposed PNA scheme, with a higher categorization rate.

  14. Forensic analysis of bicomponent fibers using infrared chemical imaging.

    Science.gov (United States)

    Flynn, Katherine; O'Leary, Robyn; Roux, Claude; Reedy, Brian J

    2006-05-01

    The application of infrared chemical imaging to the analysis of bicomponent fibers was evaluated. Eleven nominally bicomponent fibers were examined either side-on or in cross-section. In six of the 11 samples, infrared chemical imaging was able to spatially resolve two spectroscopically distinct regions when the fibers were examined side-on. As well as yielding characteristic infrared spectra of each component, the technique also provided images that clearly illustrated the side-by-side configuration of these components in the fiber. In one case it was possible to prepare and image a cross-section of the fiber, but in general the preparation of fiber cross-sections proved very difficult. In five of the 11 samples, the infrared spectra could be used to identify the overall chemical composition of the fibers, according to a published classification scheme, but the fiber components could not be spatially resolved. Difficulties that are inherent to conventional "single-point" infrared spectroscopy, such as interference fringing and sloping baselines, particularly when analyzing acrylic type fibers, were also encountered in the infrared chemical image analysis of bicomponent fibers. A number of infrared sampling techniques were investigated to overcome these problems, and recommendations for the best sampling technique are given. Chemical imaging results were compared with those obtained using conventional fiber microscopy techniques.

  15. An Analysis of the Image of Shylock

    Institute of Scientific and Technical Information of China (English)

    张熙强

    2015-01-01

    The Merchant of Venice is one of Shakespeare's most famous comedies, and Shylock is a classic figure who has been analyzed again and again. Earlier readers, especially in the 16th century, saw Shylock as greedy, selfish, and cruel, and his final downfall is one reason the play is called a comedy. As time has gone by, people have begun to re-analyze Shylock: viewed from Shylock's angle, the play is a tragedy, and Shylock has other traits that were not recognized before and are gradually being accepted. This article attempts to analyze Shylock in a positive way. On the surface Shylock is a true villain, but careful analysis of the social background shows that such a viewpoint is quite biased; justice and fairness should be returned to Shylock.

  16. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components, either in service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted stress hot-spot. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validation continues to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field strain distributions generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10^1 to 10^2 image descriptors instead of the 10^5 or 10^6 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
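    The core idea, reducing a full-field map to a handful of descriptors that can be compared statistically, can be sketched with a plain truncated 2-D Fourier decomposition (a Zernike-moment decomposition would be analogous); the field, truncation order and distance metric here are illustrative assumptions, not the authors' choices:

```python
import numpy as np

def fourier_descriptors(field, k=4):
    """Decompose a full-field map (treated as an image) into its
    lowest-frequency 2-D Fourier coefficients, reducing thousands of
    pixels to a (2k+1)^2 descriptor vector."""
    f = np.fft.fftshift(np.fft.fft2(field))
    cy, cx = field.shape[0] // 2, field.shape[1] // 2
    return f[cy - k:cy + k + 1, cx - k:cx + k + 1].ravel()

def descriptor_distance(a, b):
    """Relative discrepancy between two fields in descriptor space,
    usable as a quantitative model-validation metric."""
    da, db = fourier_descriptors(a), fourier_descriptors(b)
    return np.linalg.norm(da - db) / np.linalg.norm(da)

# Two smooth "strain fields": identical fields give zero discrepancy;
# a uniformly scaled field gives a discrepancy equal to the scale change.
x = np.linspace(0, 1, 64)
field = np.outer(np.sin(np.pi * x), np.cos(np.pi * x))
d_same = descriptor_distance(field, field)
d_diff = descriptor_distance(field, field * 1.1)
```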

  17. Application of multi-resolution analysis in sonar image denoising

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Sonar images have complex backgrounds, low contrast, and deteriorated edges; these characteristics make it difficult to process sonar objects. Multi-resolution analysis represents signals at different scales efficiently and is widely used in image processing. Wavelets are successful at handling point discontinuities in one dimension, but not in two dimensions. The finite Ridgelet transform (FRIT) deals efficiently with singularities in higher dimensions. This paper presents three improved denoising approaches, based on the FRIT, for sonar image processing. Experiments and comparison with traditional methods show that these approaches not only suppress artifacts but also achieve good edge preservation and SNR in sonar image denoising.

  18. Difference image analysis: Automatic kernel design using information criteria

    CERN Document Server

    Bramich, D M; Alsubai, K A; Bachelet, E; Mislis, D; Parley, N

    2015-01-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially-invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularisation. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unreg...
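    For the spatially-invariant, delta-basis case the paper restricts itself to, the kernel can be found by linear least squares over shifted copies of the reference image. A sketch (unregularised, and omitting the differential background term that full difference-image-analysis solvers include):

```python
import numpy as np

def solve_delta_kernel(ref, target, half=1):
    """Least-squares solution for a spatially-invariant convolution
    kernel of delta basis functions: find K minimising
    || ref (*) K - target ||^2, with one unknown per kernel pixel."""
    size = 2 * half + 1
    h, w = ref.shape
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
            # Crop the border so every equation uses valid pixels.
            cols.append(shifted[half:h - half, half:w - half].ravel())
    A = np.stack(cols, axis=1)                  # one column per delta basis
    b = target[half:h - half, half:w - half].ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(size, size)

# Recover a known 3x3 blur kernel from a reference/target pair.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
true_k = np.array([[0.0, 0.2, 0.0],
                   [0.2, 0.2, 0.2],
                   [0.0, 0.2, 0.0]])
target = np.zeros_like(ref)
for dy in range(-1, 2):
    for dx in range(-1, 2):
        target += true_k[dy + 1, dx + 1] * np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
k_est = solve_delta_kernel(ref, target)
```

    With a noise-free pair the least-squares solve recovers the blur kernel exactly; the regularised variants the paper compares add a penalty term to this normal-equation system.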

  19. Image Combination Analysis in SPECAN Algorithm of Spaceborne SAR

    Institute of Scientific and Technical Information of China (English)

    臧铁飞; 李方慧; 龙腾

    2003-01-01

    An analysis of image combination in the SPECAN algorithm is presented in detail in the time-frequency domain, and a new image combination method is proposed. In this method, for four-look processing, one sub-aperture of data in every three sub-apertures is processed. Continuous sub-aperture processing in the SPECAN algorithm is realized, and the processing efficiency can be dramatically increased. A new parameter is also put forward to measure the processing efficiency of SAR image processing. Finally, raw RADARSAT data are used to test the method, and the results prove that it is feasible for use in the SPECAN algorithm of spaceborne SAR and improves processing efficiency. The SPECAN algorithm with this method can be used in quick-look imaging.

  20. Infrared medical image visualization and anomalies analysis method

    Science.gov (United States)

    Gong, Jing; Chen, Zhong; Fan, Jing; Yan, Liang

    2015-12-01

    Infrared medical examination finds diseases by scanning overall human body temperature with infrared thermal equipment and obtaining the temperature anomalies of the corresponding body parts. To obtain the temperature anomalies and diseased parts, an infrared medical image visualization and anomaly analysis method is proposed in this paper. Firstly, the original data are visualized as a single-channel gray image; secondly, the normalized gray image is turned into a pseudo-color image; thirdly, background segmentation is applied to filter out background noise; fourthly, the anomalous pixels are clustered with a breadth-first search algorithm; lastly, the regions of temperature anomalies or diseased parts are marked. Tests show that this is an efficient and accurate way to intuitively analyze and diagnose diseased body parts through temperature anomalies.
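    The clustering step, grouping anomalous pixels by breadth-first search, amounts to connected-component labelling; a sketch assuming 4-connectivity and a precomputed anomaly mask:

```python
from collections import deque

import numpy as np

def cluster_anomalies(mask):
    """Label connected regions of anomalous (hot) pixels with
    breadth-first search, 4-connectivity. Returns a label image
    (0 = background) and the number of regions found."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # start a new region
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

# Two separate hot regions -> two clusters.
m = np.zeros((10, 10), dtype=bool)
m[1:3, 1:3] = True
m[6:9, 6:8] = True
labels, n = cluster_anomalies(m)
```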

  1. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented organs. The results show that the fractal method based on 2D Fourier analysis can distinguish between normal and abnormal breasts, and that the segmentation technique with the K-means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of abnormal tissue can be determined.
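    The K-means segmentation step can be sketched on raw pixel intensities; the paper's exact features and initialisation are not given here, so the percentile-based initialisation below is an illustrative choice:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Minimal K-means on pixel intensities for separating darker and
    brighter tissue classes (a sketch; real pipelines may cluster
    richer features and initialise more carefully)."""
    values = np.asarray(values, dtype=float)
    # Spread initial centres across the intensity range.
    centers = np.percentile(values, np.linspace(0, 100, k))
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Bimodal intensities: dark background (10) versus bright tissue (200).
vals = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
labels, centers = kmeans_1d(vals)
```

    The cluster boundary between the converged centres then serves as the segmentation threshold between normal and abnormal regions.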

  2. Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging

    Science.gov (United States)

    Felt, Frederick S.

    2005-01-01

    During electrical testing of a full-field CCD image sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and locations of the failures on the devices. After removing the fiber-optic faceplate, optical inspection was performed on the CCDs to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array; this structure complicated the failure analysis. Alternative methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A, SQUID, and MP. Of these, the SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.

  3. Analysis of discrete-to-discrete imaging models for iterative tomographic image reconstruction and compressive sensing

    CERN Document Server

    Jørgensen, Jakob H; Pan, Xiaochuan

    2011-01-01

    Discrete-to-discrete imaging models for computed tomography (CT) are becoming increasingly ubiquitous as interest in iterative image reconstruction algorithms has heightened. Despite this trend, all the intuition for algorithm and system design derives from analysis of continuous-to-continuous models such as the X-ray and Radon transforms. While the similarity between these models justifies some crossover, questions such as what constitutes sufficient sampling can be quite different for the two models. This sampling issue is addressed extensively in the first half of the article, using singular value decomposition analysis to determine a sufficient number of views and detector bins. The question of full sampling for CT is particularly relevant to current attempts to adapt compressive sensing (CS)-motivated methods to CT image reconstruction. The second half goes into depth on this subject and discusses the link between object sparsity and sufficient sampling for accurate reconstruction. Par...

  4. A Liposomal Formulation Able to Incorporate a High Content of Paclitaxel and Exert Promising Anticancer Effect

    Directory of Open Access Journals (Sweden)

    Pei Kan

    2011-01-01

    Full Text Available A liposome formulation for paclitaxel was developed in this study. The liposomes, composed of naturally unsaturated and hydrogenated phosphatidylcholines with a significant phase transition temperature difference, were prepared and characterized. The liposomes exhibited a high content of paclitaxel, which was incorporated within the segregated microdomains coexisting on the phospholipid bilayer of the liposomes. A paclitaxel-to-phospholipid molar ratio as high as 15% was attained without precipitates being observed during preparation. In addition, the liposomes remained stable in liquid form at 4∘C for at least 6 months. The special composition of the liposomal membrane, which could reduce paclitaxel aggregation, could account for such capacity and stability. The cytotoxicity of the prepared paclitaxel liposomes on the colon cancer C-26 cell culture was comparable to Taxol. An acute toxicity test revealed that the LD50 for intravenous bolus injection in mice exceeded 40 mg/kg. In the antitumor efficacy study, the prepared liposomal paclitaxel demonstrated increased efficacy against human cancer in an animal model. Taken together, the novel formulated liposomes can incorporate a high content of paclitaxel and remain stable in long-term storage. These animal data also demonstrate that the liposomal paclitaxel is promising for further clinical use.

  5. Electrical resistance stability of high content carbon fiber reinforced cement composite

    Institute of Scientific and Technical Information of China (English)

    YANG Zai-fu; TANG Zu-quan; LI Zhuo-qiu; QIAN Jue-shi

    2005-01-01

    The influences of curing time, the content of free evaporable water in cement paste, environmental temperature, and alternating heating and cooling on the electrical resistance of high content carbon fiber reinforced cement (CFRC) paste are studied by experiments with specimens of Portland cement 42.5 with 10 mm PAN-based carbon fiber and methylcellulose. Experimental results indicate that the electrical resistance of CFRC increases relatively by 24% within a hydration time of 90 d and remains almost constant after 14 d, hardly changes with the mass loss of free evaporable water in the concrete dried at 50 °C, increases relatively by 4% when the ambient temperature decreases from 15 °C to −20 °C, and decreases relatively by 13% when the temperature increases by 88 °C. It is suggested that the electrical resistance of the CFRC is stable, which is testified by the stable power output obtained by electrifying the CFRC slab with a given voltage. This implies that this kind of high content carbon fiber reinforced cement composite is potentially a desirable electrothermal material for deicing airfield runways and road surfaces.

  6. The Gray Institute ‘open’ high-content, fluorescence lifetime microscopes

    Science.gov (United States)

    BARBER, PR; TULLIS, IDC; PIERCE, GP; NEWMAN, RG; PRENTICE, J; ROWLEY, MI; MATTHEWS, DR; AMEER-BEG, SM; VOJNOVIC, B

    2013-01-01

    Summary We describe a microscopy design methodology and details of microscopes built to this ‘open’ design approach. These demonstrate the first implementation of time-domain fluorescence microscopy in a flexible automated platform with the ability to ease the transition of this and other advanced microscopy techniques from development to use in routine biology applications. This approach allows easy expansion and modification of the platform capabilities, as it moves away from the use of a commercial, monolithic, microscope body to small, commercial off-the-shelf and custom made modular components. Drawings and diagrams of our microscopes have been made available under an open license for noncommercial use at http://users.ox.ac.uk/~atdgroup. Several automated high-content fluorescence microscope implementations have been constructed with this design framework and optimized for specific applications with multiwell plates and tissue microarrays. In particular, three platforms incorporate time-domain FLIM via time-correlated single photon counting in an automated fashion. We also present data from experiments performed on these platforms highlighting their automated wide-field and laser scanning capabilities designed for high-content microscopy. Devices using these designs also form radiation-beam ‘end-stations’ at Oxford and Surrey Universities, showing the versatility and extendibility of this approach. PMID:23772985

  7. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examinations, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. STC is a three-tier architecture. The First Tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The Second Tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The Third Tier consists of a smart-algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.

  8. Tracking of Laboratory Debris Flow Fronts with Image Analysis

    Science.gov (United States)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Fischer, Jan-Thomas; Scheidl, Christian; Pudasaini, Shiva P.

    2015-04-01

    Image analysis techniques are applied to track the time evolution of rapid debris flow fronts and their velocities in laboratory experiments. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural slopes. The laboratory model consists of a large rectangular channel, 1.4 m wide and 10 m long, with adjustable inclination and other flow configurations. The setup allows the investigation of different two-phase material compositions, including large fluid fractions. The large size enables transfer of the results to large-scale natural events while providing increased measurement accuracy. The images are captured by a high-speed standard digital camera, and the fronts are tracked across frames to obtain data in debris flow experiments. The reflectance analysis detects the debris front in every image frame; its presence changes the reflectance at a certain pixel location during the flow. The accuracy of the measurements was improved with a camera calibration procedure. Systematic distortions of the camera lens, one of the great problems in imaging and analysis, are described in terms of radial and tangential parameters, and the calibration procedure estimates optimal values for these parameters. This allows us to obtain physically correct, undistorted image pixels. We then map the images onto a physical model geometry by projective photogrammetry, in which the image coordinates are connected with the object space coordinates of the flow. Finally, the physical model geometry is rewritten in the direct linear transformation form, which allows for conversion from one coordinate system to another. With our approach, the debris front position can then be estimated by combining the reflectance, calibration and the linear transformation. The consecutive debris front
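
    The direct linear transformation step mentioned above can be sketched as a plane-to-plane homography estimated from point correspondences. This is a minimal illustration, not the authors' implementation; the point coordinates are invented for the example.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
    coordinates) from >= 4 correspondences via the direct linear
    transformation: stack two linear equations per point and take the
    null vector of the system from the SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    # Map 2-D points through H with the homogeneous divide.
    pts = np.asarray(pts, float)
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Synthetic check: recover a known homography from five correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, -3.0], [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.3, 0.7]])
dst = apply_h(H_true, src)
H_est = dlt_homography(src, dst)
```

    In the experiments described above, the correspondences would come from calibration targets in the channel, after lens distortion has been removed.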

  9. Computer vision analysis of image motion by variational methods

    CERN Document Server

    Mitiche, Amar

    2014-01-01

    This book presents a unified view of image motion analysis under the variational framework. Variational methods, rooted in physics and mechanics, but appearing in many other domains, such as statistics, control, and computer vision, address a problem from an optimization standpoint, i.e., they formulate it as the optimization of an objective function or functional. The methods of image motion analysis described in this book use the calculus of variations to minimize (or maximize) an objective functional which transcribes all of the constraints that characterize the desired motion variables. The book addresses the four core subjects of motion analysis: Motion estimation, detection, tracking, and three-dimensional interpretation. Each topic is covered in a dedicated chapter. The presentation is prefaced by an introductory chapter which discusses the purpose of motion analysis. Further, a chapter is included which gives the basic tools and formulae related to curvature, Euler Lagrange equations, unconstrained de...

  10. Multi spectral imaging analysis for meat spoilage discrimination

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael; Papadopoulou, Olga

    with corresponding sensory data would be of great interest. The purpose of this research was to produce a method capable of quantifying and/or predicting the spoilage status (e.g. express in TVC counts as well as on sensory evaluation) using a multi spectral image of a meat sample and thereby avoid any time...... classification methods: Naive Bayes Classifier as a reference model, Canonical Discriminant Analysis (CDA) and Support Vector Classification (SVC). As the final step, generalization of the models was performed using k-fold validation (k=10). Results showed that image analysis provided good discrimination of meat...... samples. In the case where all data were taken together the misclassification error amounted to 16%. When spoilage status was based on visual sensory data, the model produced a MER of 22% for the combined dataset. These results suggest that it is feasible to employ a multi spectral image...
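
    The k-fold validation step (k=10) used above can be sketched with a toy setup. This is not the study's pipeline: a nearest-centroid classifier stands in for the CDA/SVC models, and the "spectral" features are synthetic, invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectral features: two well-separated classes
# (e.g. fresh vs. spoiled samples), 3 bands each.
X = np.vstack([rng.normal(0.0, 0.5, (50, 3)), rng.normal(3.0, 0.5, (50, 3))])
y = np.array([0] * 50 + [1] * 50)

def kfold_error(X, y, k=10):
    """Mean misclassification error of a nearest-centroid classifier
    under k-fold cross-validation."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Class centroids from the training folds only.
        cents = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X[test][:, None, :] - cents[None, :, :], axis=2)
        errs.append(np.mean(d.argmin(axis=1) != y[test]))
    return float(np.mean(errs))

mer = kfold_error(X, y)  # analogous to the MER reported in the record
```

    On well-separated synthetic classes the error is near zero; the 16-22% MER reported above reflects the much harder real spoilage data.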

  11. Analysis of imaging for laser triangulation sensors under Scheimpflug rule.

    Science.gov (United States)

    Miks, Antonin; Novak, Jiri; Novak, Pavel

    2013-07-29

    In this work a detailed analysis of the problem of imaging objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system is performed by means of geometrical optics theory. It is shown that fulfillment of the so-called Scheimpflug condition (Scheimpflug rule) does not guarantee a sharp image of the object, as is usually claimed, because the dependence of the aberrations of real optical systems on the object distance causes the image to become blurred. The f-number of a given optical system also varies with the object distance. The influence of these effects on the accuracy of laser triangulation sensor measurements is shown. A detailed analysis of laser triangulation sensors, based on geometrical optics theory, is performed, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
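
    For reference, the ideal thin-lens form of the Scheimpflug relation, as commonly quoted in the triangulation literature (this is the textbook statement, not the paper's aberration analysis): with object distance $a$, image distance $b$, and focal length $f$, the object-plane tilt $\theta$ and image-plane tilt $\theta'$ (both measured from the lens plane) satisfy

```latex
\frac{1}{a} + \frac{1}{b} = \frac{1}{f},
\qquad
m = \frac{b}{a},
\qquad
\tan\theta' = m \, \tan\theta .
```

    The paper's point is that this first-order condition only fixes the geometric image plane; object-distance-dependent aberrations of the real lens still blur the image.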

  12. Texture analysis and classification of ultrasound liver images.

    Science.gov (United States)

    Gao, Shuang; Peng, Yuhua; Guo, Huizhi; Liu, Weifeng; Gao, Tianxin; Xu, Yuanqing; Tang, Xiaoying

    2014-01-01

    Ultrasound as a noninvasive imaging technique is widely used to diagnose liver diseases. Texture analysis and classification of ultrasound liver images have become an important research topic across the world. In this study, GLGCM (Gray Level Gradient Co-occurrence Matrix) was implemented for texture analysis of ultrasound liver images first, followed by the use of GLCM (Gray Level Co-occurrence Matrix) at the second stage. Twenty-two features were obtained using the two methods, and the seven most powerful features were selected for classification using a BP (Back Propagation) neural network. Fibrosis was divided into five stages (S0-S4) in this study. The classification accuracies of S0-S4 were 100%, 90%, 70%, 90% and 100%, respectively.
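
    The GLCM stage can be sketched as follows. This is a minimal NumPy version for one pixel offset and two of the standard Haralick-style features (contrast and energy), not the 22-feature set used in the study.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray level co-occurrence matrix for a single pixel offset
    (dx, dy), normalized to a joint probability distribution.
    Counts ordered pairs (one direction only)."""
    img = np.asarray(img)
    P = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            P[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return P / P.sum()

def contrast(P):
    # Expected squared gray-level difference between co-occurring pixels.
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def energy(P):
    # Sum of squared matrix entries; 1.0 for a perfectly uniform texture.
    return float(np.sum(P ** 2))

flat = glcm(np.zeros((8, 8), int))              # constant image
edges = glcm(np.indices((8, 8)).sum(axis=0) % 2)  # checkerboard
```

    A constant patch gives zero contrast and maximal energy; a checkerboard gives contrast 1 for a horizontal offset, since every neighboring pair differs by one gray level.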

  13. Trabecular architecture analysis in femur radiographic images using fractals.

    Science.gov (United States)

    Udhayakumar, G; Sujatha, C M; Ramakrishnan, S

    2013-04-01

    Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of the trabecular architectural alterations that manifest as loss of trabecular plates and connections has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigation of trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify pattern heterogeneity and anisotropy of the trabecular bone structure. The results show that the extracted fractal features are distinct for compressive and tensile regions of normal and abnormal human femur bone. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
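
    Higuchi's method estimates a fractal dimension from curve lengths measured at multiple scales. The sketch below applies it to a 1-D signal (e.g. a gray-level profile extracted from a trabecular region); the study's preprocessing and region delineation are not reproduced here.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: compute the mean
    normalized curve length L(k) over subsampled series for each
    scale k, then fit log L(k) against log(1/k); the slope is the
    fractal dimension."""
    x = np.asarray(x, float)
    N = len(x)
    Lk = []
    for k in range(1, kmax + 1):
        Lm = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)
            Lm.append(length * norm / k)
        Lk.append(np.mean(Lm))
    logk = np.log(1.0 / np.arange(1, kmax + 1))
    slope, _ = np.polyfit(logk, np.log(Lk), 1)
    return float(slope)

fd_line = higuchi_fd(np.linspace(0.0, 1.0, 200))            # smooth: ~1
fd_noise = higuchi_fd(np.random.default_rng(0).normal(size=200))  # rough: ~2
```

    A smooth profile yields a dimension near 1 and an irregular one a value closer to 2, which is how the method separates heterogeneous trabecular patterns from regular ones.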

  14. Imaging spectroscopic analysis at the Advanced Light Source

    Energy Technology Data Exchange (ETDEWEB)

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high brightness third generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infra-red spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  15. Stromatoporoid biometrics using image analysis software: A first order approach

    Science.gov (United States)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the methods applied until now. The data obtained are reproducible, even when the work is repeated by different workers. The method thus makes biometric studies of stromatoporoids objective.
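
    Otsu's thresholding, used above to separate skeletal elements from sparry calcite, picks the gray level that maximizes the between-class variance of the histogram. A minimal NumPy version (the program itself uses the OpenCV implementation):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the gray level t maximizing the between-class variance
    sigma_b^2(t) = [mu_T*w0(t) - mu(t)]^2 / [w0(t)*(1 - w0(t))],
    where w0 and mu are cumulative histogram weight and mean."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(levels))   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[np.isnan(sigma_b)] = 0.0        # empty-class thresholds
    return int(np.argmax(sigma_b))

# Bimodal toy "thin section": dark matrix at 10, bright calcite at 200.
img = np.array([10] * 100 + [200] * 100)
t = otsu_threshold(img)
binary = img > t
```

    For a strongly bimodal histogram the threshold lands between the two modes, cleanly splitting the two phases.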

  16. Image processing and analysis using neural networks for optometry area

    Science.gov (United States)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  17. Mesoporous silica materials with an extremely high content of organic sulfonic groups and their comparable activities with that of concentrated sulfuric acid in catalytic esterification.

    Science.gov (United States)

    Feng, Ye-Fei; Yang, Xiao-Yu; Di, Yan; Du, Yun-Chen; Zhang, Yong-Lai; Xiao, Feng-Shou

    2006-07-27

    Mesoporous silica materials (HS-JLU-20) with an extremely high content of mercaptopropyl groups have been successfully synthesized using fluorocarbon-hydrocarbon surfactant mixtures through a simple co-condensation approach of tetraethyl orthosilicate (TEOS) and (3-mercaptopropyl)trimethoxysilane (MPTS), which are characterized by X-ray diffraction (XRD), nitrogen adsorption and desorption isotherms, transmission electron microscopy (TEM), CHNS elemental analysis, thermogravimetry analysis (TGA), and (29)Si NMR spectroscopy. The results show that HS-JLU-20 samples with molar ratios of MPTS/(MPTS + TEOS) at 0.5-0.8 in the starting synthetic gels still show their mesostructures, while HS-SBA-15 with the molar ratio of MPTS/(MPTS + TEOS) at 0.50 completely loses its mesostructure in the absence of fluorocarbon surfactant. Possibly, fluorocarbon surfactant containing N(+) species with a positive charge could effectively interact with negatively charged mercapto groups in the synthesis of HS-JLU-20 materials, resulting in the formation of mesoporous silicas with good cross-linking of silica condensation even at an extremely high content of organic mercapto groups. More interestingly, after the treatment with hydrogen peroxide, HSO(3)-JLU-20 materials with an extremely high content of organic sulfonic groups exhibit comparable activity with liquid concentrated sulfuric acid in catalytic esterification of cyclohexanol with acetic acid.

  18. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

    Full Text Available The purpose of this work is research and modification of reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of formal assessments that corresponds more closely to subjective expert estimates (MOS). In considering the formal reference objective methods for image quality assessment we used the results of other authors, who offer results and comparative analyses of the most effective algorithms. Based on these investigations we chose two of the most successful algorithms, PQS and MS-SSIM, for which a further analysis was made in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms which are of great importance in practical implementation but are insufficiently covered in publications by other authors. In the implemented modification of the PQS algorithm, the Kirsch boundary detector was replaced by the Canny boundary detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the objective image quality assessment PQS is applicable only to monochrome images). Images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. In the specialized literature on formal evaluation methods for images, this type of modification has not been mentioned. The method described in the publication can be applied to various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article, and this will be used in solving problems of identification and automatic control.

  19. The Medical Analysis of Child Sexual Abuse Images

    Science.gov (United States)

    Cooper, Sharon W.

    2011-01-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses,…

  20. Computing support for advanced medical data analysis and imaging

    CERN Document Server

    Wiślicki, W; Białas, P; Czerwiński, E; Kapłon, Ł; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Molenda, M; Moskal, P; Niedźwiecki, S; Pałka, M; Pawlik, M; Raczyński, L; Rudy, Z; Salabura, P; Sharma, N G; Silarski, M; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Zieliński, M; Zoń, N

    2014-01-01

    We discuss computing issues for data analysis and image reconstruction of PET-TOF medical scanner or other medical scanning devices producing large volumes of data. Service architecture based on the grid and cloud concepts for distributed processing is proposed and critically discussed.