WorldWideScience

Sample records for high-content image analysis

  1. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    Science.gov (United States)

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

    High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell images, instead of a single readout, are generated from each well. This image-based high-content screening technology has enabled genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, because complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. We therefore developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  2. Development of automatic image analysis methods for high-throughput and high-content screening

    NARCIS (Netherlands)

    Di, Zi

    2013-01-01

    This thesis focuses on the development of image analysis methods for ultra-high content analysis of high-throughput screens, in which cellular phenotype responses to various genetic or chemical perturbations are under investigation. Our primary goal is to deliver efficient and robust image analysis

  3. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    Science.gov (United States)

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA, for nuclear-based cell demarcation, or with those that react with proteins, for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
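Nuclear-stain-based segmentation of the kind described above usually begins with a global intensity threshold. The sketch below is a minimal, illustrative stand-in (not the chapter's protocol): it applies Otsu's method to a synthetic "nuclear stain" image built with NumPy; all names and values are invented for demonstration.

```python
import numpy as np

def otsu_threshold(img):
    """Return the intensity threshold that maximizes between-class variance."""
    hist, bin_edges = np.histogram(img, bins=256, range=(0.0, 1.0))
    hist = hist.astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                          # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))    # cumulative intensity (bin units)
    best_t, best_var = 0.0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total          # background weight
        w1 = 1.0 - w0                    # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / cum[t - 1]
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, bin_edges[t]
    return best_t

# Synthetic "DAPI" field: dim noisy background plus two bright nuclei.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, size=(64, 64)).clip(0, 1)
img[10:20, 10:20] = 0.8    # nucleus 1 (10 x 10 pixels)
img[40:52, 30:44] = 0.9    # nucleus 2 (12 x 14 pixels)
mask = img > otsu_threshold(img)   # binary segmentation mask
```

In a real HCI pipeline this global threshold would be followed by object splitting (e.g. watershed) and per-object measurements.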

  4. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells assembled closely or directly onto the sensor surface. Direct assembly of cell groups on the CMOS sensor surface allows large-field imaging (6.66 mm × 5.32 mm, the entire active area of the sensor) within a second. Trypan blue-stained and non-stained cells in the same field on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED illumination. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. The proposed approach is a promising technique for real-time, high-content analysis of single cells over a large field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Information management for high content live cell imaging

    Directory of Open Access Journals (Sweden)

    White Michael RH

    2009-07-01

    Full Text Available Abstract
    Background: High content live cell imaging experiments are able to track the cellular localisation of labelled proteins in multiple live cells over a time course. Such experiments generate multiple large datasets that are often stored in an ad-hoc manner, which hinders identification of previously gathered data that may be relevant to current analyses. Whilst solutions exist for managing image data, they are primarily concerned with storage and retrieval of the images themselves and not the data derived from the images. There is therefore a requirement for an information management solution that facilitates the indexing of experimental metadata and results of high content live cell imaging experiments.
    Results: We have designed and implemented a data model and information management solution for the data gathered through high content live cell imaging experiments. Many of the experiments to be stored measure the translocation of fluorescently labelled proteins from cytoplasm to nucleus in individual cells. The functionality of the database has been enhanced by an algorithm that automatically annotates results of these experiments with the timings of translocations and periods of any oscillatory translocations as they are uploaded to the repository. Testing has shown the algorithm to perform well with a variety of previously unseen data.
    Conclusion: Our repository is a fully functional example of how high throughput imaging data may be effectively indexed and managed to address the requirements of end users. By implementing automated analysis of experimental results, we provide a clear impetus for individuals to ensure that their data forms part of that which is stored in the repository. Although focused on imaging, the solution is sufficiently generic to be applied to other functional proteomics and genomics experiments.
    The software is available from: http://code.google.com/p/livecellim/

  6. Automated analysis of high-content microscopy data with deep learning.

    Science.gov (United States)

    Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J

    2017-04-18

    Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
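DeepLoc itself is a deep convolutional network, which is too large to reproduce here. As a hedged illustration of the underlying supervised-classification step only, the sketch below trains a single softmax layer by gradient descent on invented two-dimensional "per-cell features" standing in for two localization classes; everything in it (data, class labels, hyperparameters) is assumed for demonstration and is not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for per-cell image features of two localization classes
# (e.g. "nuclear" vs. "cytoplasmic"); a real CNN would learn such features
# from pixels rather than receive them directly.
n = 200
X = np.vstack([rng.normal(-1.0, 0.5, (n, 2)), rng.normal(1.0, 0.5, (n, 2))])
y = np.array([0] * n + [1] * n)

# A single softmax layer trained with cross-entropy gradient descent,
# standing in for the final classification layer of a deep network.
W = np.zeros((2, 2))
b = np.zeros(2)
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - np.eye(2)[y]) / len(X)   # d(cross-entropy)/d(logits)
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

accuracy = ((X @ W + b).argmax(axis=1) == y).mean()
```

Retraining on a new dataset, as the abstract describes for DeepLoc, corresponds to rerunning such an update loop on new labeled images.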

  7. Morphometric Characterization of Rat and Human Alveolar Macrophage Cell Models and their Response to Amiodarone using High Content Image Analysis.

    Science.gov (United States)

    Hoffman, Ewelina; Patel, Aateka; Ball, Doug; Klapwijk, Jan; Millar, Val; Kumar, Abhinav; Martin, Abigail; Mahendran, Rhamiya; Dailey, Lea Ann; Forbes, Ben; Hutter, Victoria

    2017-12-01

    Progress to the clinic may be delayed or prevented when vacuolated or "foamy" alveolar macrophages are observed during non-clinical inhalation toxicology assessment. The first step in developing methods to study this response in vitro is to characterize macrophage cell lines and their response to drug exposures. Human (U937) and rat (NR8383) cell lines and primary rat alveolar macrophages obtained by bronchoalveolar lavage were characterized using high content fluorescence imaging, with quantification of cell viability, morphometry, and phospholipid and neutral lipid accumulation. Cell health, morphology and lipid content were comparable across the cell models. Responses to amiodarone, a known inducer of phospholipidosis, required analysis of shifts in cell population profiles (the proportion of cells with elevated vacuolation or lipid content) rather than average population data, which was insensitive to the changes observed. A high content image analysis assay was developed and used to provide detailed morphological characterization of rat and human alveolar-like macrophages and their response to a phospholipidosis-inducing agent. This provides a basis for development of assays to predict or understand macrophage vacuolation following inhaled drug exposure.

  8. Phaedra, a protocol-driven system for analysis and validation of high-content imaging and flow cytometry.

    Science.gov (United States)

    Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel

    2012-04-01

    High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.

  9. Localization-based super-resolution imaging meets high-content screening.

    Science.gov (United States)

    Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste

    2017-12-01

    Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.

  10. Shedding Light on Filovirus Infection with High-Content Imaging

    Directory of Open Access Journals (Sweden)

    Rekha G. Panchal

    2012-08-01

    Full Text Available Microscopy has been instrumental in the discovery and characterization of microorganisms. Major advances in high-throughput fluorescence microscopy and automated, high-content image analysis tools are paving the way to the systematic and quantitative study of the molecular properties of cellular systems, both at the population and at the single-cell level. High-Content Imaging (HCI) has been used to characterize host-virus interactions in genome-wide reverse genetic screens and to identify novel cellular factors implicated in the binding, entry, replication and egress of several pathogenic viruses. Here we present an overview of the most significant applications of HCI in the context of the cell biology of filovirus infection. HCI assays have recently been implemented to quantitatively study filoviruses in cell culture, employing either infectious viruses in a BSL-4 environment or surrogate genetic systems in a BSL-2 environment. These assays are becoming instrumental for small molecule and siRNA screens aimed at the discovery of both cellular therapeutic targets and compounds with anti-viral properties. We discuss the current practical constraints limiting the implementation of high-throughput biology in a BSL-4 environment, and propose possible solutions to safely perform high-content, high-throughput filovirus infection assays. Finally, we discuss novel applications of HCI in the context of filovirus research, with particular emphasis on the identification of cellular biomarkers of virus infection.

  11. Development of a quantitative morphological assessment of toxicant-treated zebrafish larvae using brightfield imaging and high-content analysis.

    Science.gov (United States)

    Deal, Samantha; Wambaugh, John; Judson, Richard; Mosher, Shad; Radio, Nick; Houck, Keith; Padilla, Stephanie

    2016-09-01

    One of the rate-limiting procedures in a developmental zebrafish screen is the morphological assessment of each larva. Most researchers opt for a time-consuming, structured visual assessment by trained human observer(s). The present studies were designed to develop a more objective, accurate and rapid method for screening zebrafish for dysmorphology. Instead of the very detailed human assessment, we have developed the computational malformation index, which combines the use of high-content imaging with a very brief human visual assessment. Each larva was quickly assessed by a human observer (basic visual assessment), killed, fixed and assessed for dysmorphology with the Zebratox V4 BioApplication using the Cellomics® ArrayScan® V(TI) high-content image analysis platform. The basic visual assessment adds in-life parameters, and the high-content analysis assesses each individual larva for various features (total area, width, spine length, head-tail length, length-width ratio, perimeter-area ratio). In developing the computational malformation index, a training set of hundreds of embryos treated with hundreds of chemicals was visually assessed using the basic or detailed method. In the second phase, we assessed both the stability of these high-content measurements and their performance using a test set of zebrafish treated with a dose range of two reference chemicals (trans-retinoic acid or cadmium). We found the measures were stable for at least 1 week, and comparison of these automated measures to detailed visual inspection of the larvae showed excellent congruence. Our computational malformation index provides an objective means for rapid phenotypic brightfield assessment of individual larvae in a developmental zebrafish assay. Copyright © 2016 John Wiley & Sons, Ltd.
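One simple way to turn per-larva measurements like those listed above (total area, head-tail length, width) into a single dysmorphology score is to sum each feature's absolute z-score against a control population. The sketch below is only an illustration of that idea, not the published Zebratox V4 computation; all measurement names and values are invented.

```python
import math

# Hypothetical per-larva measurements for untreated control fish.
controls = [
    {"total_area": 1.00, "head_tail_length": 3.50, "width": 0.40},
    {"total_area": 1.05, "head_tail_length": 3.60, "width": 0.41},
    {"total_area": 0.95, "head_tail_length": 3.40, "width": 0.39},
    {"total_area": 1.02, "head_tail_length": 3.55, "width": 0.40},
]

def malformation_index(larva, controls):
    """Sum of absolute z-scores of each feature vs. the control population."""
    score = 0.0
    for key in larva:
        vals = [c[key] for c in controls]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / (len(vals) - 1))
        score += abs(larva[key] - mu) / sd
    return score

normal = {"total_area": 1.01, "head_tail_length": 3.50, "width": 0.40}
stunted = {"total_area": 0.60, "head_tail_length": 2.20, "width": 0.55}
```

A treated larva scoring far above the control range would be flagged as dysmorphic without a detailed human assessment.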

  12. An automated wide-field time-gated optically sectioning fluorescence lifetime imaging multiwell plate reader for high-content analysis of protein-protein interactions

    Science.gov (United States)

    Alibhai, Dominic; Kumar, Sunil; Kelly, Douglas; Warren, Sean; Alexandrov, Yuriy; Munro, Ian; McGinty, James; Talbot, Clifford; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Dunsby, Chris; French, Paul M. W.

    2011-03-01

    We describe an optically sectioned FLIM multiwell plate reader that combines Nipkow spinning-disk microscopy with wide-field time-gated FLIM, and its application to high content analysis of FRET. The system acquires optically sectioned FLIM images of cells expressing fluorescent proteins. It has been applied to study the formation of immature HIV virus-like particles (VLPs) in live cells by monitoring Gag-Gag protein interactions using FLIM FRET of cells transfected with HIV-1 Gag tagged with CFP or YFP. VLP formation results in FRET between closely packed Gag proteins, as confirmed by our FLIM analysis, which includes automatic image segmentation.

  13. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    Science.gov (United States)

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. High content analysis of phagocytic activity and cell morphology with PuntoMorph

    DEFF Research Database (Denmark)

    Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla

    2017-01-01

    New method: We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity, addressing the difficulties associated with image-based quantification of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties; the algorithm employs statistical modeling and detailed morphological analysis, making it suitable for high content screening. Results: We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial activation are not highly correlated. The system outperforms preexisting methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution. Conclusions: We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package.

  15. High content analysis of phagocytic activity and cell morphology with PuntoMorph.

    Science.gov (United States)

    Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla; Peters, Vanessa Ann; Shi, Yan; Brambilla, Roberta

    2017-11-01

    Phagocytosis is essential for maintenance of normal homeostasis and healthy tissue. As such, it is a therapeutic target for a wide range of clinical applications. The development of phenotypic screens targeting phagocytosis has lagged behind, however, due to the difficulties associated with image-based quantification of phagocytic activity. We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties. The algorithm employs statistical modeling to determine the mean fluorescence of individual beads within each image, and uses the information to conduct an accurate count of phagocytosed beads. In addition, the algorithm conducts detailed and sophisticated analysis of cellular morphology, making it a standalone tool for high content screening. We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial activation, specifically cell body hypertrophy and increased phagocytic activity, are not highly correlated. This novel finding suggests the two phenotypes may be under the control of distinct signaling pathways. We demonstrate that our assay system outperforms preexisting methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution. We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package, PuntoMorph. Copyright © 2017 Elsevier B.V. All rights reserved.
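The bead-counting idea described in the abstract (estimate a single-bead fluorescence unit, then divide each punctum's total intensity by that unit) can be sketched as follows. This is a toy illustration of the principle, not PuntoMorph's actual statistical model; the estimator and all intensity values are invented.

```python
def single_bead_intensity(spot_intensities):
    """Estimate the unit (single-bead) intensity as the median of the dimmest
    half of detected spots, which are most likely single beads."""
    s = sorted(spot_intensities)
    dim = s[: max(1, len(s) // 2)]
    return dim[len(dim) // 2]

def count_beads(spot_intensities):
    """Count beads per fluorescent punctum as intensity / unit, rounded."""
    unit = single_bead_intensity(spot_intensities)
    return [max(1, round(i / unit)) for i in spot_intensities]

# Detected puncta: five single beads plus clusters of 2 and 3 overlapping beads.
spots = [98, 101, 100, 99, 102, 205, 297]
counts = count_beads(spots)
```

Resolving overlapping beads this way is what lets a per-cell bead count stay accurate even when phagocytosed beads cluster inside the cell body.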

  16. Analysis, Retrieval and Delivery of Multimedia Content

    CERN Document Server

    Cavallaro, Andrea; Leonardi, Riccardo; Migliorati, Pierangelo

    2013-01-01

    Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most updated perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient p...

  17. Cultural Parallax and Content Analysis: Images of Black Women in High School History Textbooks

    Science.gov (United States)

    Woyshner, Christine; Schocker, Jessica B.

    2015-01-01

    This study investigates the representation of Black women in high school history textbooks. To examine the extent to which Black women are represented visually and to explore how they are portrayed, the authors use a mixed-methods approach that draws on analytical techniques in content analysis and from visual culture studies. Their findings…

  18. A picture tells a thousand words: A content analysis of concussion-related images online.

    Science.gov (United States)

    Ahmed, Osman H; Lee, Hopin; Struik, Laura L

    2016-09-01

    Recently, image-sharing social media platforms have become a popular medium for sharing health-related images and associated information. However, within the field of sports medicine, and more specifically sports-related concussion, the content of images and meta-data shared through these popular platforms has not been investigated. The aim of this study was to analyse the content of concussion-related images and their accompanying meta-data on image-sharing social media platforms. We retrieved 300 images from Pinterest, Instagram and Flickr using a standardised search strategy. All images were screened and duplicate images were removed. We excluded images if they were non-static images, illustrations, animations, or screenshots. The content and characteristics of each image were evaluated using a customised coding scheme to determine major content themes, and images were referenced against the current international concussion management guidelines. Of 300 potentially relevant images, 176 were included for analysis: 70 from Pinterest, 63 from Flickr, and 43 from Instagram. Most images were of another person or a scene (64%), with the primary content depicting injured individuals (39%). The primary purposes of the images were to share a concussion-related incident (33%) and to dispense education (19%). For those images where it could be evaluated, the majority (91%) were found to reflect the Sport Concussion Assessment Tool 3 (SCAT3) guidelines. The ability to rapidly disseminate rich information through photos, images, and infographics to a wide-reaching audience suggests that image-sharing social media platforms could be used as an effective communication tool for sports concussion. Public health strategies could direct educative content to targeted populations via the use of image-sharing platforms.
Further research is required to understand how image-sharing platforms can be used to effectively relay evidence-based information to patients and sports medicine

  19. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

    The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time-consuming approach; moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This creates the need for efficient image storage, annotation and retrieval systems. Content based image retrieval (CBIR) has emerged as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed; an appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, the retrieval process. With the aim of improving the efficiency and precision of content based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system, and contributes to more efficient and easier image organization in the system. Applying content based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image-based diagnostic technique which is widely used in medical environments; accordingly, the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information, high resolution and a specific nature. Thus, the capability of CBIR systems for image retrieval from large databases is of great importance for efficient analysis of this kind of image. The aim of this thesis is to propose a content based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature
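The two CBIR phases described above (associate a feature vector with each image, then retrieve by distance in feature space) can be sketched minimally as follows, using a toy intensity histogram as the "feature vector" and Euclidean nearest neighbors for retrieval. All image names and pixel values are invented for illustration; real MRI descriptors are far richer.

```python
import math

def histogram_features(pixels, bins=4):
    """Phase 1: describe an image by its normalized intensity histogram."""
    counts = [0] * bins
    for p in pixels:  # pixel intensities assumed in [0, 1)
        counts[min(bins - 1, int(p * bins))] += 1
    return [c / len(pixels) for c in counts]

def retrieve(query, database, k=2):
    """Phase 2: return the k database keys closest to the query in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(database, key=lambda name: dist(query, database[name]))[:k]

# Tiny "database" of pre-computed descriptors (hypothetical scans).
db = {
    "dark_scan": histogram_features([0.10, 0.15, 0.20, 0.10]),
    "bright_scan": histogram_features([0.90, 0.85, 0.80, 0.95]),
    "mixed_scan": histogram_features([0.10, 0.90, 0.20, 0.80]),
}
query = histogram_features([0.12, 0.18, 0.11, 0.16])
hits = retrieve(query, db, k=1)
```

Adding a classifier over the same feature vectors is what enables the automatic annotation step discussed in the abstract.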

  20. Profiling stem cell states in three-dimensional biomaterial niches using high content image informatics.

    Science.gov (United States)

    Dhaliwal, Anandika; Brenner, Matthew; Wolujewicz, Paul; Zhang, Zheng; Mao, Yong; Batish, Mona; Kohn, Joachim; Moghe, Prabhas V

    2016-11-01

    A predictive framework for the evolution of stem cell biology in 3-D is currently lacking. In this study we propose deep image informatics of the nuclear biology of stem cells to elucidate how 3-D biomaterials steer stem cell lineage phenotypes. The approach is based on high content imaging informatics to capture minute variations in the 3-D spatial organization of the splicing factor SC-35 in the nucleoplasm as a marker to classify emergent cell phenotypes of human mesenchymal stem cells (hMSCs). The cells were cultured in varied 3-D culture systems including hydrogels, electrospun mats and salt leached scaffolds. The approach encompasses high resolution 3-D imaging of SC-35 domains and high content image analysis (HCIA) to compute quantitative 3-D nuclear metrics for SC-35 organization in single cells, in concert with machine learning approaches to construct a predictive cell-state classification model. Our findings indicate that hMSCs cultured in collagen hydrogels and induced to differentiate into osteogenic or adipogenic lineages could be classified into the three lineages (stem, adipogenic, osteogenic) with ⩾80% precision and sensitivity within 72 h. Using this framework, the augmentation of osteogenesis exerted by porogen leached scaffolds was also profiled within 72 h with ~80% sensitivity. Furthermore, by employing 3-D SC-35 organizational metrics, differential osteogenesis induced by novel electrospun fibrous polymer mats incorporating decellularized matrix could also be elucidated and predictably modeled at just 3 days with high precision. We demonstrate that 3-D SC-35 organizational metrics can be applied to model the stem cell state in 3-D scaffolds. We propose that this methodology can robustly discern minute changes in stem cell states within complex 3-D architectures and map single cell biological readouts that are critical to assessing population level cell heterogeneity. The sustained development and validation of bioactive

  1. High Content Screening: Understanding Cellular Pathway

    International Nuclear Information System (INIS)

    Mohamed Zaffar Ali Mohamed Amiroudine; Daryl Jesus Arapoc; Zainah Adam; Shafii Khamis

    2015-01-01

    High content screening (HCS) is the convergence between cell-based assays, high-resolution fluorescence imaging, and phase-contrast imaging of fixed- or live-cell assays, tissues and small organisms. It has been widely adopted in the pharmaceutical and biotech industries for target identification and validation, and as secondary screens to reveal potential toxicities or to elucidate a drug's mechanism of action. By using the ImageXpress® Micro XLS System HCS, the complex network of key players controlling proliferation and apoptosis can be reduced to several sentinel markers for analysis. Cell proliferation and apoptosis are two key areas in cell biology and drug discovery research. Understanding the signaling pathways in cell proliferation and apoptosis is important for new therapeutic discovery because the imbalance between these two events is predominant in the progression of many human diseases, including cancer. The DNA-binding dye DAPI is used to determine nuclear size and nuclear morphology, as well as cell cycle phases by DNA content. Images together with MetaXpress® analysis results provide a convenient and easy-to-use solution to high volume image management. In particular, the HCS platform is beginning to have an important impact on early drug discovery and basic research in systems cell biology, and is expected to play a role in personalized medicine or in revealing off-target drug effects. (author)
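Cell-cycle phase calling from DNA content, as described above for DAPI, can be sketched as a simple gating on integrated nuclear intensity relative to the G1 peak, since G2/M cells carry roughly twice the G1 DNA content. The thresholds and intensity values below are illustrative assumptions, not MetaXpress® settings.

```python
def classify_phase(dna, g1_peak):
    """Assign a cell-cycle phase from integrated DAPI intensity (DNA content).
    Gate boundaries (1.25x and 1.75x the G1 peak) are arbitrary examples."""
    ratio = dna / g1_peak
    if ratio < 1.25:
        return "G1"
    if ratio < 1.75:
        return "S"
    return "G2/M"

g1_peak = 100.0  # modal integrated intensity of the G1 population (invented)
cells = [95.0, 110.0, 150.0, 190.0, 205.0]
phases = [classify_phase(c, g1_peak) for c in cells]
```

In practice the G1 peak is estimated from the intensity histogram of the whole population before gating individual nuclei.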

  2. High content live cell imaging for the discovery of new antimalarial marine natural products

    Directory of Open Access Journals (Sweden)

    Cervantes Serena

    2012-01-01

    Full Text Available Abstract Background The human malaria parasite remains a burden in developing nations. It is responsible for up to one million deaths a year, a number that could rise due to increasing multi-drug resistance to all antimalarial drugs currently available. Therefore, there is an urgent need for the discovery of new drug therapies. Recently, our laboratory developed a simple one-step fluorescence-based live cell-imaging assay to integrate the complex biology of the human malaria parasite into drug discovery. Here we used our newly developed live cell-imaging platform to discover novel marine natural products and their cellular phenotypic effects against the most lethal malaria parasite, Plasmodium falciparum. Methods A high content live cell imaging platform was used to screen the effects of marine extracts on malaria. Parasites were grown in vitro in the presence of extracts, stained with an RNA-sensitive dye, and imaged at timed intervals with the BD Pathway HT automated confocal microscope. Results Image analysis validated our new methodology at a larger scale and revealed potential antimalarial activity of selected extracts with a minimal cytotoxic effect on host red blood cells. To further validate our assay, we investigated parasite phenotypes when incubated with the purified bioactive natural product bromophycolide A. We show that bromophycolide A has a strong and specific morphological effect on parasites, similar to the ones observed with the initial extracts. Conclusion Collectively, our results show that high-content live cell-imaging (HCLCI) can be used to screen chemical libraries and identify parasite-specific inhibitors with limited host cytotoxic effects. Altogether, we provide new leads for the discovery of novel antimalarials.

  3. High content live cell imaging for the discovery of new antimalarial marine natural products.

    Science.gov (United States)

    Cervantes, Serena; Stout, Paige E; Prudhomme, Jacques; Engel, Sebastian; Bruton, Matthew; Cervantes, Michael; Carter, David; Tae-Chang, Young; Hay, Mark E; Aalbersberg, William; Kubanek, Julia; Le Roch, Karine G

    2012-01-03

    The human malaria parasite remains a burden in developing nations. It is responsible for up to one million deaths a year, a number that could rise due to increasing multi-drug resistance to all antimalarial drugs currently available. Therefore, there is an urgent need for the discovery of new drug therapies. Recently, our laboratory developed a simple one-step fluorescence-based live cell-imaging assay to integrate the complex biology of the human malaria parasite into drug discovery. Here we used our newly developed live cell-imaging platform to discover novel marine natural products and their cellular phenotypic effects against the most lethal malaria parasite, Plasmodium falciparum. A high content live cell imaging platform was used to screen the effects of marine extracts on malaria. Parasites were grown in vitro in the presence of extracts, stained with an RNA-sensitive dye, and imaged at timed intervals with the BD Pathway HT automated confocal microscope. Image analysis validated our new methodology at a larger scale and revealed potential antimalarial activity of selected extracts with a minimal cytotoxic effect on host red blood cells. To further validate our assay, we investigated parasite phenotypes when incubated with the purified bioactive natural product bromophycolide A. We show that bromophycolide A has a strong and specific morphological effect on parasites, similar to the ones observed with the initial extracts. Collectively, our results show that high-content live cell-imaging (HCLCI) can be used to screen chemical libraries and identify parasite-specific inhibitors with limited host cytotoxic effects. Altogether, we provide new leads for the discovery of novel antimalarials. © 2011 Cervantes et al; licensee BioMed Central Ltd.

  4. Synthetic Biomaterials to Rival Nature's Complexity-a Path Forward with Combinatorics, High-Throughput Discovery, and High-Content Analysis.

    Science.gov (United States)

    Zhang, Douglas; Lee, Junmin; Kilian, Kristopher A

    2017-10-01

    Cells in tissue receive a host of soluble and insoluble signals in a context-dependent fashion, where integration of these cues through a complex network of signal transduction cascades will define a particular outcome. Biomaterials scientists and engineers are tasked with designing materials that can at least partially recreate this complex signaling milieu towards new materials for biomedical applications. In this progress report, recent advances in high throughput techniques and high content imaging approaches that are facilitating the discovery of efficacious biomaterials are described. From microarrays of synthetic polymers, peptides and full-length proteins, to designer cell culture systems that present multiple biophysical and biochemical cues in tandem, it is discussed how the integration of combinatorics with high content imaging and analysis is essential to extracting biologically meaningful information from large scale cellular screens to inform the design of next generation biomaterials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Content-addressable read/write memories for image analysis

    Science.gov (United States)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

    The commonly encountered image analysis problems of region labeling and clustering are found to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, allowing each operation to complete in constant time. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general-purpose processing will be feasible.

  6. High content analysis of differentiation and cell death in human adipocytes.

    Science.gov (United States)

    Doan-Xuan, Quang Minh; Sarvari, Anitta K; Fischer-Posovszky, Pamela; Wabitsch, Martin; Balajthy, Zoltan; Fesus, Laszlo; Bacso, Zsolt

    2013-10-01

    Understanding adipocyte biology and its homeostasis is a focus of current obesity research. We aimed to introduce a high-content analysis procedure for directly visualizing and quantifying adipogenesis and adipoapoptosis by laser scanning cytometry (LSC) in a large population of cells. Slide-based image cytometry and image processing algorithms were used and optimized for high-throughput analysis of differentiating cells and apoptotic processes in cell culture at high confluence. Both preadipocytes and adipocytes were simultaneously scrutinized for lipid accumulation, texture properties, nuclear condensation, and DNA fragmentation. Adipocyte commitment was found after incubation in adipogenic medium for 3 days, identified by lipid droplet formation and increased light absorption, while terminal differentiation of adipocytes occurred throughout days 9-14 with characteristic nuclear shrinkage, eccentric nuclear localization, chromatin condensation, and massive lipid deposition. Preadipocytes were shown to be more prone to tumor necrosis factor alpha (TNFα)-induced apoptosis compared to mature adipocytes. Importantly, spontaneous DNA fragmentation was observed at the early stage when adipocyte commitment occurs. This DNA damage was independent of either spontaneous or induced apoptosis and was probably part of the differentiation program. © 2013 International Society for Advancement of Cytometry.

  7. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    Science.gov (United States)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  8. Magnetic resonance imaging and quantitative analysis of contents of epidermoid and dermoid cysts

    Energy Technology Data Exchange (ETDEWEB)

    Takeshita, Mikihiko; Kubo, Osami; Hiyama, Hirofumi; Tajika, Yasuhiko; Izawa, Masahiro; Kagawa, Mizuo; Takakura, Kintomo; Kobayashi, Naotoshi; Toyoda, Masako [Tokyo Women's Medical Coll. (Japan)]

    1994-07-01

    The intracapsular cholesterol, protein, and calcium contents of epidermoid and dermoid cysts from seven patients were compared with the signal intensities on T1-weighted spin-echo magnetic resonance (MR) images. All specimens had a paste-like consistency when resected. Epidermoid and dermoid cysts demonstrated a wide range of cholesterol and calcium contents, and epidermoid cysts were not always rich in cholesterol. Five patients had cysts with lower signal intensity than white matter, which contained more than 18.3 mg/g wet weight of protein. One of these patients had the highest cholesterol content of all seven patients (22.25 mg/g wet weight) and another had the highest calcium content (0.75 mg/g wet weight). Two patients had cysts with higher signal intensity than white matter, with protein contents lower than 4.3 mg/g wet weight. High protein content (>18.3 mg/g wet weight) may decrease signal intensity on T1-weighted MR images, while low protein content (<4.3 mg/g wet weight) may increase signal intensity in epidermoid and dermoid cysts with high-viscosity (paste-like consistency) contents. (author)

  9. Content-based Image Hiding Method for Secure Network Biometric Verification

    Directory of Open Access Journals (Sweden)

    Xiangjiu Che

    2011-08-01

    Full Text Available For secure biometric verification, most existing methods embed biometric information directly into the cover image, but content correlation analysis between the biometric image and the cover image is often ignored. In this paper, we propose a novel biometric image hiding approach based on content correlation analysis to protect the network-transmitted image. By using principal component analysis (PCA), the content correlation between the biometric image and the cover image is first analyzed. Then, based on the particle swarm optimization (PSO) algorithm, some regions of the cover image are selected to represent the biometric image, so that the cover image can carry partial content of the biometric image. Based on the correlation analysis, the unrepresented part of the biometric image is embedded into the cover image by using the discrete wavelet transform (DWT). Combined with a human visual system (HVS) model, this approach makes the hiding result perceptually invisible. Extensive experimental results demonstrate that the proposed hiding approach is robust against some common frequency and geometric attacks; it also provides effective protection for secure biometric verification.
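The PCA/PSO/DWT pipeline above has several coupled stages; as a much simpler stand-in that illustrates only the basic idea of embedding payload bits imperceptibly in a cover image, here is a least-significant-bit (LSB) sketch. This is explicitly not the authors' method:

```python
# Toy least-significant-bit (LSB) steganography: hide payload bits in the
# lowest bit of each cover-image pixel value. This is NOT the paper's
# PCA/PSO/DWT scheme, only the simplest form of image hiding.

def embed(cover_pixels, payload_bits):
    assert len(payload_bits) <= len(cover_pixels)
    stego = list(cover_pixels)
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite the lowest bit
    return stego

def extract(stego_pixels, n_bits):
    return [p & 1 for p in stego_pixels[:n_bits]]

cover = [120, 121, 200, 33, 90, 57]        # toy 8-bit gray values
bits = [1, 0, 1, 1]                        # payload to hide
stego = embed(cover, bits)
assert extract(stego, 4) == bits
# Each pixel changes by at most one gray level, which is the same
# perceptual-invisibility goal the HVS-guided scheme above pursues.
```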

  10. Content Analysis of the Concept of Addiction in High School Textbooks of Iran.

    Science.gov (United States)

    Mirzamohammadi, Mohammad Hasan; Mousavi, Sayedeh Zainab; Massah, Omid; Farhoudian, Ali

    2017-01-01

    This research sought to determine how well the causes of addiction, harms of addiction, and prevention of addiction are covered in high school textbooks. We used a descriptive method to select the main components of the addiction concept and a content analysis method for analyzing the content of the textbooks. The study population comprised 61 secondary school curriculum textbooks, and the study sample consisted of 14 secondary school textbooks selected by purposeful sampling. The tool for collecting data was a "content analysis inventory," whose validity was confirmed by educational and social science experts and whose reliability was found to be 91%. About 67 components were prepared for content analysis and were divided into 3 categories: causes, harms, and prevention of addiction. The analysis units in this study comprised phrases, topics, examples, course topics, words, poems, images, questions, tables, and exercises. Results of the study showed that the components of the addiction concept were presented in 212 remarks in the textbooks. The degree of attention given to each of the 3 main components of the addiction concept was as follows: causes, 52 (24.52%) remarks; harms, 89 (41.98%) remarks; and prevention, 71 (33.49%) remarks. In high school textbooks, little attention has been paid to the concept of addiction, and mostly its biological dimension was addressed, while the social, personal, familial, and religious dimensions of addiction have been neglected.

  11. INTEGRATION OF SPATIAL INFORMATION WITH COLOR FOR CONTENT RETRIEVAL OF REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    Bikesh Kumar Singh

    2010-08-01

    Full Text Available There has been a rapid increase in remote sensing image databases in the last few years, due to high-resolution imaging satellites, commercial applications of remote sensing, and high available bandwidth. The problem of content-based image retrieval (CBIR) of remotely sensed images presents a major challenge, not only because of the rapidly increasing volume of images acquired from a wide range of sensors but also because of the complexity of the images themselves. In this paper, a software system for content-based retrieval of remote sensing images using RGB and HSV color spaces is presented. Further, we compare our results with spatiogram-based content retrieval, which integrates spatial information along with the color histogram. Experimental results show that the integration of spatial information with color improves the image analysis of remote sensing data. In general, retrieval in HSV color space showed better performance than in RGB color space.
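A minimal sketch of the kind of color-histogram retrieval compared in the study, using Python's standard colorsys module for the RGB-to-HSV conversion; the bin count, the histogram-intersection similarity, and the toy single-color images are illustrative assumptions, not the paper's exact system:

```python
import colorsys

# Per-channel HSV histograms plus histogram-intersection ranking, the
# basic building block of color-based CBIR. Pixels are (r, g, b) floats
# in [0, 1].

def hsv_histogram(pixels, bins=8):
    hist = [0] * (bins * 3)                  # bins for H, then S, then V
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        for ch, value in enumerate((h, s, v)):
            idx = min(int(value * bins), bins - 1)
            hist[ch * bins + idx] += 1
    total = float(len(pixels))
    return [c / total for c in hist]         # each channel sums to 1

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / 3.0  # 3 channels

red_img = [(1.0, 0.0, 0.0)] * 10
blue_img = [(0.0, 0.0, 1.0)] * 10
query = hsv_histogram(red_img)
print(intersection(query, hsv_histogram(red_img)))        # 1.0
print(intersection(query, hsv_histogram(blue_img)) < 1.0)  # True
```

A spatiogram additionally stores the mean position and spatial spread of the pixels in each bin, which is what lets it exploit the spatial information the study found beneficial.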

  12. HCS-Neurons: identifying phenotypic changes in multi-neuron images upon drug treatments of high-content screening.

    Science.gov (United States)

    Charoenkwan, Phasit; Hwang, Eric; Cutler, Robert W; Lee, Hua-Chin; Ko, Li-Wei; Huang, Hui-Ling; Ho, Shinn-Ying

    2013-01-01

    High-content screening (HCS) has become a powerful tool for drug discovery. However, the discovery of drugs targeting neurons is still hampered by the inability to accurately identify and quantify the phenotypic changes of multiple neurons in a single image (named a multi-neuron image) of a high-content screen. Therefore, it is desirable to develop an automated image analysis method for analyzing multi-neuron images. We propose an automated analysis method with novel descriptors of neuromorphology features for analyzing HCS-based multi-neuron images, called HCS-neurons. To observe multiple phenotypic changes of neurons, we propose two kinds of descriptors: a neuron feature descriptor (NFD) of 13 neuromorphology features, e.g., neurite length, and generic feature descriptors (GFDs), e.g., Haralick texture. HCS-neurons can 1) automatically extract all quantitative phenotype features in both NFD and GFDs, 2) identify statistically significant phenotypic changes upon drug treatments using ANOVA and regression analysis, and 3) generate an accurate classifier to group neurons treated with different drug concentrations using a support vector machine and an intelligent feature selection method. To evaluate HCS-neurons, we treated P19 neurons with nocodazole (a microtubule-depolymerizing drug which has been shown to impair neurite development) at six concentrations ranging from 0 to 1000 ng/mL. The experimental results show that all 13 features of the NFD had statistically significant differences with respect to changes in nocodazole drug concentration (NDC), and the phenotypic changes of neurites were consistent with the known effect of nocodazole in promoting neurite retraction. Three identified features, total neurite length, average neurite length, and average neurite area, were able to achieve an independent test accuracy of 90.28% for the six-dosage classification problem. This NFD module and the neuron image datasets are provided as a freely downloadable
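The per-feature dose-response test described above can be sketched as a one-way ANOVA F-statistic computed for one neuromorphology feature across dose groups; the neurite-length values and group sizes below are toy data, not from the HCS-neurons study:

```python
# One-way ANOVA F-statistic, the kind of test HCS-neurons applies to each
# neuromorphology feature across nocodazole dose groups.

def anova_f(groups):
    k = len(groups)                            # number of dose groups
    n = sum(len(g) for g in groups)            # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy neurite-length samples at three dose levels: a clear dose effect
# yields a large F, while identical groups yield F = 0.
low    = [100.0, 110.0, 105.0]
medium = [80.0, 85.0, 78.0]
high   = [40.0, 45.0, 42.0]
print(anova_f([low, medium, high]) > anova_f([low, low, low]))  # True
```

In practice the F value would be converted to a p-value against the F(k-1, n-k) distribution before declaring a feature dose-responsive.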

  13. A content analysis of thinspiration images and text posts on Tumblr.

    Science.gov (United States)

    Wick, Madeline R; Harriger, Jennifer A

    2018-03-01

    Thinspiration is content advocating extreme weight loss by means of images and/or text posts. While past content analyses have examined thinspiration content on social media and other websites, no research to date has examined thinspiration content on Tumblr. Over the course of a week, 222 images and text posts were collected after entering the keyword 'thinspiration' into the Tumblr search bar. These images were then rated on a variety of characteristics. The majority of thinspiration images included a thin woman adhering to culturally based beauty ideals, often posing in a manner that accentuated her thinness or sexuality. The most common themes for thinspiration text posts included dieting/restraint, weight loss, food guilt, and body guilt. The thinspiration content on Tumblr appears to be consistent with that on other mediums. Future research should utilize experimental methods to examine the potential effects of consuming thinspiration content on Tumblr. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. 'Strong is the new skinny': A content analysis of #fitspiration images on Instagram.

    Science.gov (United States)

    Tiggemann, Marika; Zaccardo, Mia

    2018-07-01

    'Fitspiration' is an online trend designed to inspire viewers towards a healthier lifestyle by promoting exercise and healthy food. This study provides a content analysis of fitspiration imagery on the social networking site Instagram. A set of 600 images were coded for body type, activity, objectification and textual elements. Results showed that the majority of images of women contained only one body type: thin and toned. In addition, most images contained objectifying elements. Accordingly, while fitspiration images may be inspirational for viewers, they also contain a number of elements likely to have negative effects on the viewer's body image.

  15. Investigating the link between radiologists’ gaze, diagnostic decision, and image content

    Science.gov (United States)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent; Krupinski, Elizabeth

    2013-01-01

    Objective To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Image analysis was performed in mammographic regions that attracted the radiologists’ attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results By pooling the data from all readers, machine learning produced highly accurate predictive models linking image content, gaze, and cognition. Linking these with diagnostic error was also supported to some extent. Merging readers’ gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the readers’ diagnostic errors while confirming 97.3% of their correct diagnoses. The readers’ individual perceptual and cognitive behaviors could be adequately predicted by modeling the behavior of others. However, personalized tuning was in many cases beneficial for capturing individual behavior more accurately. Conclusions There is clearly an interaction between radiologists’ gaze, diagnostic decision, and image content, which can be modeled with machine learning algorithms. PMID:23788627

  16. Anti-cancer agents in Saudi Arabian herbals revealed by automated high-content imaging

    KAUST Repository

    Hajjar, Dina A.; Kremb, Stephan Georg; Sioud, Salim; Emwas, Abdul-Hamid M.; Voolstra, Christian R.; Ravasi, Timothy

    2017-01-01

    in cancer therapy. Here, we used cell-based phenotypic profiling and image-based high-content screening to study the mode of action and potential cellular targets of plants historically used in Saudi Arabia's traditional medicine. We compared the cytological

  17. Cytological study of DNA content and nuclear morphometric analysis for aid in the diagnosis of high-grade dysplasia within oral leukoplakia.

    Science.gov (United States)

    Yang, Xi; Xiao, Xuan; Wu, Wenyan; Shen, Xuemin; Zhou, Zengtong; Liu, Wei; Shi, Linjun

    2017-09-01

    To quantitatively examine the DNA content and nuclear morphometric status of oral leukoplakia (OL) and investigate their association with the degree of dysplasia in a cytologic study. Oral cytobrush biopsy was carried out to obtain exfoliative epithelial cells from lesions before scalpel biopsy at the same location in a blinded series of 70 patients with OL. Analysis of nuclear morphometry and DNA content status using image cytometry was performed on oral smears stained with the Feulgen-thionin method. Nuclear morphometric analysis revealed significant differences in DNA content amount, DNA index, nuclear area, nuclear radius, nuclear intensity, sphericity, entropy, and fractal dimension (all P values statistically significant). DNA content analysis identified 34 patients with OL (48.6%) with DNA content abnormality. Nonhomogeneous lesion (P = .018) and high-grade dysplasia (P = .008) were significantly associated with abnormal DNA content. Importantly, the positive correlation between the degree of oral dysplasia and DNA content status was significant (P = .004, correlation coefficient = 0.342). Cytology analysis of DNA content and nuclear morphometric status using image cytometry may support their use as a screening and monitoring tool for OL progression. Copyright © 2017 Elsevier Inc. All rights reserved.
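The DNA index reported above is conventionally the mean integrated optical density (IOD) of the sample nuclei normalized to a diploid (2c) reference population, so a value near 1.0 indicates normal DNA content. A minimal sketch with invented IOD values:

```python
# Sketch of the DNA-index computation used in DNA image cytometry:
# integrated optical density (IOD) of each Feulgen-stained nucleus,
# normalized to a diploid reference population. Values are toy data.

def dna_index(sample_iods, reference_iods):
    """DNA index ~1.0 for diploid cells; aneuploid populations deviate."""
    ref_2c = sum(reference_iods) / len(reference_iods)
    mean_sample = sum(sample_iods) / len(sample_iods)
    return mean_sample / ref_2c

reference = [100.0, 98.0, 102.0]          # normal diploid reference cells
diploid_lesion = [101.0, 99.0, 100.0]
aneuploid_lesion = [150.0, 160.0, 155.0]
print(round(dna_index(diploid_lesion, reference), 2))    # 1.0
print(dna_index(aneuploid_lesion, reference) > 1.2)      # True
```

Real instruments classify per-nucleus rather than by population mean, flagging rare nuclei whose IOD exceeds, e.g., the 5c reference level; the sketch keeps only the normalization step.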

  18. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    Science.gov (United States)

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  19. An investigation of content and media images in gay men's magazines.

    Science.gov (United States)

    Saucier, Jason A; Caron, Sandra L

    2008-01-01

    This study provides an analysis of gay men's magazines, examining both the content and the advertisements. Four magazine titles were selected: The Advocate, Genre, Instinct, and Out, each targeting gay men as its audience. These magazines were coded for both article content and advertisement content. In the advertisement analysis, both the type of advertisement and the characteristics of the men depicted within the advertisement were coded, when present. The results mirror previous research findings relating to the portrayal of women, including the objectification of specific body parts and the high community standards set by the images depicted. These findings were reinforced by both the advertisements and the content analyzed, including a high degree of importance placed on having the right body type. Implications for further research are discussed.

  20. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing (LSH); the search is then performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that it is able to retrieve images by content effectively on large-scale clusters and image sets.
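The Locality-Sensitive Hashing step in the pipeline above can be sketched with random-hyperplane signatures, a standard LSH family for cosine similarity; the vectors below are toy stand-ins for SURF descriptors, and this is a generic LSH illustration rather than the paper's exact scheme:

```python
import random

# Random-hyperplane LSH: similar feature vectors receive similar bit
# signatures, so they collide in the same hash buckets and become
# candidate matches without an exhaustive database scan.

def lsh_signature(vec, hyperplanes):
    """One bit per random hyperplane: which side of it the vector lies on."""
    bits = []
    for plane in hyperplanes:
        dot = sum(v * p for v, p in zip(vec, plane))
        bits.append(1 if dot >= 0 else 0)
    return tuple(bits)

random.seed(42)  # fixed seed so the hash tables are reproducible
dim, n_planes = 8, 16
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

a = [1.0, 0.9, 0.1, 0.0, 0.2, 0.8, 0.9, 0.1]   # toy "SURF" descriptor
b = [1.0, 0.8, 0.1, 0.1, 0.2, 0.9, 0.9, 0.0]   # near-duplicate of a
sig_a, sig_b = lsh_signature(a, planes), lsh_signature(b, planes)
# The fraction of agreeing bits estimates the angular similarity of a and b.
agreement = sum(x == y for x, y in zip(sig_a, sig_b)) / n_planes
```

On Hadoop, the signature (or a prefix of it) would serve as the map-phase key, so that only descriptors sharing a bucket are compared in the reduce phase.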

  1. Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content

    Energy Technology Data Exchange (ETDEWEB)

    Tourassi, Georgia [ORNL; Voisin, Sophie [ORNL; Paquit, Vincent C [ORNL; Krupinski, Elizabeth [University of Arizona

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  2. Content analysis of Australian direct-to-consumer websites for emerging breast cancer imaging devices.

    Science.gov (United States)

    Vreugdenburg, Thomas D; Laurence, Caroline O; Willis, Cameron D; Mundy, Linda; Hiller, Janet E

    2014-09-01

    To describe the nature and frequency of information presented on direct-to-consumer websites for emerging breast cancer imaging devices. Content analysis of Australian website advertisements from 2 March 2011 to 30 March 2012, for three emerging breast cancer imaging devices: digital infrared thermal imaging, electrical impedance scanning and electronic palpation imaging. Type of imaging offered, device safety, device performance, application of device, target population, supporting evidence and comparator tests. Thirty-nine unique Australian websites promoting a direct-to-consumer breast imaging device were identified. Despite a lack of supporting evidence, 22 websites advertised devices for diagnosis, 20 advertised devices for screening, 13 advertised devices for prevention and 13 advertised devices for identifying breast cancer risk factors. Similarly, advertised ranges of diagnostic sensitivity (78%-99%) and specificity (44%-91%) were relatively high compared with published literature. Direct comparisons with conventional screening tools that favoured the new device were highly prominent (31 websites), and one-third of websites (12) explicitly promoted their device as a suitable alternative. Australian websites for emerging breast imaging devices, which are also available internationally, promote the use of such devices as safe and effective solutions for breast cancer screening and diagnosis in a range of target populations. Many of these claims are not supported by peer-reviewed evidence, raising questions about the manner in which these devices and their advertising material are regulated, particularly when they are promoted as direct alternatives to established screening interventions.

  3. Impact of image segmentation on high-content screening data quality for SK-BR-3 cells

    Directory of Open Access Journals (Sweden)

    Li Yizheng

    2007-09-01

    Full Text Available Abstract Background High content screening (HCS is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells. Results Cases of poor cell body segmentation occurred frequently for the SK-BR-3 cell line. We trained classifiers to identify SK-BR-3 cells that were well segmented. On an independent test set created by human review of cell images, our optimal support-vector machine classifier identified well-segmented cells with 81% accuracy. The dose responses of morphological features were measurably different in well- and poorly-segmented populations. Elimination of the poorly-segmented cell population increased the purity of DNA content distributions, while appropriately retaining biological heterogeneity, and simultaneously increasing our ability to resolve specific morphological changes in perturbed cells. Conclusion Image segmentation has a measurable impact on HCS data. The application of a multivariate shape-based filter to identify well-segmented cells improved HCS data quality for an HCS-unfriendly cell line, and could be a valuable post-processing step for some HCS datasets.

  4. "Fitspiration" on Social Media: A Content Analysis of Gendered Images.

    Science.gov (United States)

    Carrotte, Elise Rose; Prichard, Ivanka; Lim, Megan Su Cheng

    2017-03-29

    "Fitspiration" (also known as "fitspo") aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, it is currently unclear the degree to which messages about body image and exercise differ by gender of the subject. The aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender. Content tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook. Of the 415/476 (87.2%) relevant posts extracted, most posts were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely to be aged under 25 years (P<.001) than the male subjects, to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002). Female #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image. ©Elise Rose Carrotte, Ivanka Prichard, Megan Su Cheng Lim. Originally published in the Journal of Medical Internet Research (http

  5. “Fitspiration” on Social Media: A Content Analysis of Gendered Images

    Science.gov (United States)

    Prichard, Ivanka; Lim, Megan Su Cheng

    2017-01-01

    Background “Fitspiration” (also known as “fitspo”) aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, it is currently unclear the degree to which messages about body image and exercise differ by gender of the subject. Objective The aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender. Methods Content tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook. Results Of the 415/476 (87.2%) relevant posts extracted, most posts were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely to be aged under 25 years (P<.001) than the male subjects, to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002). Conclusions Female #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image. PMID:28356239

  6. A method for volume determination of the orbit and its contents by high resolution axial tomography and quantitative digital image analysis.

    Science.gov (United States)

    Cooper, W C

    1985-01-01

    The various congenital and acquired conditions which alter orbital volume are reviewed. Previous investigative work to determine orbital capacity is summarized. Since these studies were confined to postmortem evaluations, the need for a technique to measure orbital volume in the living state is presented. A method for volume determination of the orbit and its contents by high-resolution axial tomography and quantitative digital image analysis is reported. This procedure has proven to be accurate (the discrepancy between direct and computed measurements ranged from 0.2% to 4%) and reproducible (greater than 98%). The application of this method to representative clinical problems is presented and discussed. The establishment of a diagnostic system versatile enough to expand the usefulness of computerized axial tomography and polytomography should add a new dimension to ophthalmic investigation and treatment.

  7. Ontology of gaps in content-based image retrieval.

    Science.gov (United States)

    Deserno, Thomas M; Antani, Sameer; Long, Rodney

    2009-04-01

    Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potential for making a strong impact in diagnostics, research, and education. Research as reported in the scientific literature, however, has not made significant inroads as medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the inability of these applications in overcoming the "semantic gap." The semantic gap divides the high-level scene understanding and interpretation available with human cognitive capabilities from the low-level pixel analysis of computers, based on mathematical processing and artificial intelligence methods. In this paper, we suggest a more systematic and comprehensive view of the concept of "gaps" in medical CBIR research. In particular, we define an ontology of 14 gaps that addresses the image content and features, as well as system performance and usability. In addition to these gaps, we identify seven system characteristics that impact CBIR applicability and performance. The framework we have created can be used a posteriori to compare medical CBIR systems and approaches for specific biomedical image domains and goals and a priori during the design phase of a medical CBIR application, as the systematic analysis of gaps provides detailed insight in system comparison and helps to direct future research.

  8. A thematic content analysis of #cheatmeal images on social media: Characterizing an emerging dietary trend.

    Science.gov (United States)

    Pila, Eva; Mond, Jonathan M; Griffiths, Scott; Mitchison, Deborah; Murray, Stuart B

    2017-06-01

    Despite the pervasive social endorsement of "cheat meals" within pro-muscularity online communities, there is an absence of empirical work examining this dietary phenomenon. The present study aimed to characterize cheat meals, and explore the meaning ascribed to engagement in this practice. Thematic content analysis was employed to code the photographic and textual elements of a sample (n = 600) that was extracted from over 1.6 million images marked with the #cheatmeal tag on the social networking site, Instagram. Analysis of the volume and type of food revealed the presence of very large quantities (54.5%) of calorie-dense foods (71.3%) that was rated to qualify as an objective binge episode. Photographic content of people commonly portrayed highly-muscular bodies (60.7%) in the act of intentional body exposure (40.0%). Meanwhile, textual content exemplified the idealization of overconsumption, a strict commitment to fitness, and a reward-based framework around diet and fitness. Collectively, these findings position cheat meals as goal-oriented dietary practices in the pursuit of physique-ideals, thus underscoring the potential clinical repercussions of this socially-endorsed dietary phenomenon. © 2017 Wiley Periodicals, Inc.

  9. Detection of Isoflavones Content in Soybean Based on Hyperspectral Imaging Technology

    Directory of Open Access Journals (Sweden)

    Tan Kezhu

    2014-04-01

    Full Text Available Because of its many important biological activities, soybean isoflavone has great potential for exploitation and is significant for practical applications. Conventional methods for the determination of soybean isoflavones have long detection periods, consume large amounts of reagents and cannot be applied on-line, so we propose hyperspectral imaging technology to detect the content of soybean isoflavones. Based on 40 varieties of soybeans produced in Heilongjiang province, we obtained the spectral reflectance data of the soybean samples from hyperspectral images collected by the hyperspectral imaging system, and applied high performance liquid chromatography (HPLC) to determine the true isoflavone content of the selected samples. The feature wavelengths for isoflavone content prediction (1516, 1572, 1691, 1716 and 1760 nm) were selected based on correlation analysis. The prediction model was established using a BP neural network to predict soybean isoflavone content. The experimental results show that the ANN model could predict the isoflavone content of soybean samples with an R² of 0.9679; the average relative error is 0.8032% and the mean square error (MSE) is 0.110328, which indicates the effectiveness of the proposed method and provides a theoretical basis for the application of hyperspectral imaging in non-destructive detection of the interior quality of soybean.
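
The correlation-based band selection mentioned above can be illustrated with a short sketch: rank wavelengths by the absolute Pearson correlation between per-band reflectance and the reference (e.g. HPLC) content. The spectra below are simulated and the wavelength grid is an assumption; only the reported informative band positions (1516, 1572, 1691 nm) are taken from the abstract.

```python
# Illustrative correlation analysis for feature-wavelength selection.
# Spectra are simulated; a few assumed bands are made to respond to content.
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.arange(1000, 1800, 4)           # nm grid (assumed)
n_samples = 40
content = rng.uniform(1.0, 4.0, n_samples)       # mg/g, stand-in HPLC values

spectra = rng.normal(0.5, 0.05, (n_samples, wavelengths.size))
informative = [int(np.argmin(np.abs(wavelengths - w))) for w in (1516, 1572, 1691)]
for idx in informative:
    spectra[:, idx] += 0.15 * content            # bands that track content

# Pearson r of every band against the reference content, then take the top 3
r = np.array([np.corrcoef(spectra[:, j], content)[0, 1]
              for j in range(wavelengths.size)])
top = wavelengths[np.argsort(-np.abs(r))[:3]]
print("selected bands (nm):", sorted(int(w) for w in top))
```

In the real pipeline the reflectance at the selected bands would then feed the BP neural network as its input vector.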

  10. High content image based analysis identifies cell cycle inhibitors as regulators of Ebola virus infection.

    Science.gov (United States)

    Kota, Krishna P; Benko, Jacqueline G; Mudhasani, Rajini; Retterer, Cary; Tran, Julie P; Bavari, Sina; Panchal, Rekha G

    2012-09-25

    Viruses modulate a number of host biological responses including the cell cycle to favor their replication. In this study, we developed a high-content imaging (HCI) assay to measure DNA content and identify different phases of the cell cycle. We then investigated the potential effects of cell cycle arrest on Ebola virus (EBOV) infection. Cells arrested in G1 phase by serum starvation or G1/S phase using aphidicolin or G2/M phase using nocodazole showed much reduced EBOV infection compared to the untreated control. Release of cells from serum starvation or aphidicolin block resulted in a time-dependent increase in the percentage of EBOV infected cells. The effect of EBOV infection on cell cycle progression was found to be cell-type dependent. Infection of asynchronous MCF-10A cells with EBOV resulted in a reduced number of cells in G2/M phase with concomitant increase of cells in G1 phase. However, these effects were not observed in HeLa or A549 cells. Together, our studies suggest that EBOV requires actively proliferating cells for efficient replication. Furthermore, multiplexing of HCI based assays to detect viral infection, cell cycle status and other phenotypic changes in a single cell population will provide useful information during screening campaigns using siRNA and small molecule therapeutics.
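
The DNA-content readout described here can be gated into phases with simple thresholds: cells near 2N DNA content are G1, near 4N are G2/M, and in between are S. The sketch below uses simulated normalized intensities and assumed gate positions; it only illustrates the idea of the HCI assay, not the authors' exact analysis.

```python
# Toy cell-cycle gating from integrated DNA-stain intensity (simulated data).
import numpy as np

rng = np.random.default_rng(2)
g1 = rng.normal(1.0, 0.05, 600)    # 2N DNA content (normalized to G1 peak)
s = rng.uniform(1.25, 1.75, 150)   # replicating cells between 2N and 4N
g2m = rng.normal(2.0, 0.08, 250)   # 4N DNA content
dna = np.concatenate([g1, s, g2m])

def phase(x, lo=1.2, hi=1.8):      # assumed gate positions
    if x < lo:
        return "G1"
    return "S" if x < hi else "G2/M"

labels = np.array([phase(x) for x in dna])
frac = {p: float(np.mean(labels == p)) for p in ("G1", "S", "G2/M")}
print({p: round(f, 2) for p, f in frac.items()})
```

A shift of cells out of G2/M into G1 after infection, as reported for MCF-10A, would show up directly as a change in these fractions.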

  11. High Content Image Based Analysis Identifies Cell Cycle Inhibitors as Regulators of Ebola Virus Infection

    Directory of Open Access Journals (Sweden)

    Sina Bavari

    2012-09-01

    Full Text Available Viruses modulate a number of host biological responses including the cell cycle to favor their replication. In this study, we developed a high-content imaging (HCI) assay to measure DNA content and identify different phases of the cell cycle. We then investigated the potential effects of cell cycle arrest on Ebola virus (EBOV) infection. Cells arrested in G1 phase by serum starvation or G1/S phase using aphidicolin or G2/M phase using nocodazole showed much reduced EBOV infection compared to the untreated control. Release of cells from serum starvation or aphidicolin block resulted in a time-dependent increase in the percentage of EBOV infected cells. The effect of EBOV infection on cell cycle progression was found to be cell-type dependent. Infection of asynchronous MCF-10A cells with EBOV resulted in a reduced number of cells in G2/M phase with concomitant increase of cells in G1 phase. However, these effects were not observed in HeLa or A549 cells. Together, our studies suggest that EBOV requires actively proliferating cells for efficient replication. Furthermore, multiplexing of HCI based assays to detect viral infection, cell cycle status and other phenotypic changes in a single cell population will provide useful information during screening campaigns using siRNA and small molecule therapeutics.

  12. Disability in physical education textbooks: an analysis of image content.

    Science.gov (United States)

    Táboas-Pais, María Inés; Rey-Cao, Ana

    2012-10-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted to the requirements of this study with additional categories. The variables were camera angle, gender, type of physical activity, field of practice, space, and level. Univariate and bivariate descriptive analyses were also carried out. The Pearson chi-square statistic was used to identify associations between the variables. Results showed a noticeable imbalance between people with disabilities and people without disabilities, and women with disabilities were less frequently represented than men with disabilities. People with disabilities were depicted as participating in a very limited variety of segregated, competitive, and elite sports activities.
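
The Pearson chi-square step used in this content analysis can be reproduced in a few lines. The contingency counts below are invented for illustration only; the study's actual coded frequencies are not given in the abstract.

```python
# Chi-square test of association between two coded image variables,
# e.g. disability depicted (rows) vs. activity type (columns).
# The counts are made up for illustration.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[40, 10],     # disability: competitive, recreational
                  [300, 450]])  # no disability
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```

A small p-value here would indicate the kind of association the authors report, e.g. that people with disabilities are disproportionately shown in a narrow set of activities.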

  13. 99Tcm-MIBI imaging in diagnosing benign/malign pulmonary disease and analysis of lung cancer DNA content

    International Nuclear Information System (INIS)

    Feng Yanlin; Tan Jiaju; Yang Jie; Zhu Zheng; Yu Fengwen; He Xiaohong; Huang Kemin; Yuan Baihong; Su Shaodi

    2002-01-01

    Objective: To evaluate the value of 99Tcm-methoxyisobutylisonitrile (MIBI) lung imaging in diagnosing benign/malignant pulmonary disease and the relation of the 99Tcm-MIBI uptake ratio (UR) to lung cancer DNA content. Methods: Early and delayed imaging were performed on 27 cases of benign lung disease and 46 cases of malignant lung disease. Visual analysis of the images and T/N uptake ratio measurement were performed for every case. Cancer cell DNA content and the DNA index (DI) were measured in 24 cases of malignant pulmonary disease. Results: The delay phase UR was 1.13 ± 0.19 in the benign disease group and 1.45 ± 0.21 in the malignant disease group (t = 6.51). Conclusion: 99Tcm-MIBI is not an ideal imaging agent for differentiating benign from malignant pulmonary disease; lung cancer DNA content may be reflected by the delay phase UR.

  14. Chlorophyll content of Plešné Lake phytoplankton cells studied with image analysis

    Czech Academy of Sciences Publication Activity Database

    Nedoma, Jiří; Nedbalová, Linda

    2006-01-01

    Roč. 61, Suppl. 20 (2006), S491-S498 ISSN 0006-3088 Grant - others:MSMT(CZ) 0021620828 Institutional research plan: CEZ:AV0Z60170517; CEZ:AV0Z60050516 Keywords : cellular chlorophyll content * image analysis * vertical profile Subject RIV: EH - Ecology, Behaviour Impact factor: 0.213, year: 2006

  15. High content image-based screening of a protease inhibitor library reveals compounds broadly active against Rift Valley fever virus and other highly pathogenic RNA viruses.

    Directory of Open Access Journals (Sweden)

    Rajini Mudhasani

    2014-08-01

    Full Text Available High content image-based screening was developed as an approach to test a protease inhibitor small molecule library for antiviral activity against Rift Valley fever virus (RVFV) and to determine their mechanism of action. RVFV is the causative agent of severe disease of humans and animals throughout Africa and the Arabian Peninsula. Of the 849 compounds screened, 34 compounds exhibited ≥ 50% inhibition against RVFV. All of the hit compounds could be classified into 4 distinct groups based on their unique chemical backbone. Some of the compounds also showed broad antiviral activity against several highly pathogenic RNA viruses including Ebola, Marburg, Venezuelan equine encephalitis, and Lassa viruses. Four hit compounds (C795-0925, D011-2120, F694-1532 and G202-0362), which were most active against RVFV and showed broad-spectrum antiviral activity, were selected for further evaluation for their cytotoxicity, dose response profile, and mode of action using classical virological methods and high-content imaging analysis. Time-of-addition assays in RVFV infections suggested that D011-2120 and G202-0362 targeted virus egress, while C795-0925 and F694-1532 inhibited virus replication. We showed that D011-2120 exhibited its antiviral effects by blocking microtubule polymerization, thereby disrupting the Golgi complex and inhibiting viral trafficking to the plasma membrane during virus egress. While G202-0362 also affected virus egress, it appears to do so by a different mechanism, namely by blocking virus budding from the trans Golgi. F694-1532 inhibited viral replication, but also appeared to inhibit overall cellular gene expression. However, G202-0362 and C795-0925 did not alter any of the morphological features that we examined and thus may prove to be good candidates for antiviral drug development. Overall this work demonstrates that high-content image analysis can be used to screen chemical libraries for new antivirals and to determine their

  16. Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.

    Science.gov (United States)

    Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin

    2014-10-23

    A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using a multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in a quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, which is more significant for FISS data compared to ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS could obtain both spectral and spatial detailed information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
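
One retrieval route from this record, derivative reflectance fed into multiple linear regression and scored by RMSE, can be sketched with simulated spectra. The band layout, noise levels, and absorption-feature shape below are all assumptions; the point is that a flat per-sample baseline drops out of the first derivative, which is one reason derivative spectra can be the more sensitive indicator.

```python
# Simulated chlorophyll retrieval: first-derivative spectra + MLR, scored by
# cross-validated RMSE. All data are synthetic stand-ins for FISS spectra.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n, bands = 80, 20
chl = rng.uniform(1.0, 5.0, n)                              # mg/g (assumed range)
peak = np.exp(-0.5 * ((np.arange(bands) - 12) / 3.0) ** 2)  # absorption feature
spectra = (chl[:, None] * peak                              # content signal
           + rng.normal(0, 0.3, (n, 1))                     # per-sample baseline drift
           + rng.normal(0, 0.02, (n, bands)))               # sensor noise
deriv = np.gradient(spectra, axis=1)                        # flat baseline vanishes here

pred = cross_val_predict(LinearRegression(), deriv, chl, cv=5)
rmse = float(np.sqrt(np.mean((pred - chl) ** 2)))
print(f"cross-validated RMSE = {rmse:.3f} mg/g")
```

The study's comparison of MLR, PLS, and SVM regressions would swap the estimator in `cross_val_predict` while keeping the rest of the pipeline fixed.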

  17. Image Retargeting by Content-Aware Synthesis

    OpenAIRE

    Dong, Weiming; Wu, Fuzhang; Kong, Yan; Mei, Xing; Lee, Tong-Yee; Zhang, Xiaopeng

    2014-01-01

    Real-world images usually contain vivid contents and rich textural details, which will complicate the manipulation on them. In this paper, we design a new framework based on content-aware synthesis to enhance content-aware image retargeting. By detecting the textural regions in an image, the textural image content can be synthesized rather than simply distorted or cropped. This method enables the manipulation of textural & non-textural regions with different strategy since they have different...

  18. Relationship between color and tannin content in sorghum grain: application of image analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    M Sedghi

    2012-03-01

    Full Text Available The relationship between sorghum grain color and tannin content has been reported in several references. In this study, 33 phenotypes of sorghum grain differing in seed characteristics were collected and analyzed by the Folin-Ciocalteu method. A computer image analysis method was used to determine the color characteristics of all 33 sorghum phenotypes. Multiple linear regression and artificial neural network (ANN) models were developed to describe tannin content in sorghum grain from three input parameters of color characteristics. The goodness of fit of the models was tested using R², MS error, and bias. The computer image analysis technique was a suitable method to estimate tannin through sorghum grain color strength. The color quality of the samples was described by three color parameters: L* (lightness), a* (redness, from green to red) and b* (blueness, from blue to yellow). The developed regression and ANN models showed a strong relationship between color and tannin content of the samples. In terms of R², the trained ANN model predicted more accurately than the equation established by the regression method (0.96 vs. 0.88), and in terms of MS error the ANN model showed a narrower residual distribution than the regression model (0.002 vs. 0.006). The platform of computer image analysis and the ANN-based model may be used to estimate the tannin content of sorghum.
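
The regression arm of this study can be sketched as follows: fit a multiple linear regression from the three color parameters (L*, a*, b*) to tannin content and report R². The linear relation, coefficients, and noise below are synthetic stand-ins, not the paper's data; only the sample size (33 phenotypes) and the three inputs come from the abstract.

```python
# Hypothetical MLR predicting tannin content from mean L*, a*, b* color values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n = 33                                             # phenotypes, as in the study
Lab = np.column_stack([rng.uniform(20, 70, n),     # L* lightness
                       rng.uniform(5, 25, n),      # a* redness
                       rng.uniform(5, 30, n)])     # b* blueness/yellowness
# Assumed relation: darker, redder grain carries more tannin, plus noise
tannin = 3.0 - 0.03 * Lab[:, 0] + 0.05 * Lab[:, 1] + rng.normal(0, 0.1, n)

model = LinearRegression().fit(Lab, tannin)
r2 = r2_score(tannin, model.predict(Lab))
print(f"R^2 = {r2:.2f}")
```

The ANN comparison in the paper would replace `LinearRegression` with a small feed-forward network trained on the same three inputs.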

  19. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
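
The content-based idea, fine fidelity where the diagnostic information lives and aggressive compression elsewhere, can be caricatured with region-dependent quantization. This toy is not the paper's method (which combines visually lossless and lossy codecs such as wavelets); the mask and step sizes are arbitrary assumptions, but it shows how per-region treatment trades entropy (roughly, bits to store) against error only outside the important region.

```python
# Toy content-based compression: fine quantization inside a "diagnostic"
# mask, coarse outside, then compare approximate bits/pixel via entropy.
import numpy as np

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (64, 64)).astype(np.int64)
mask = np.zeros(img.shape, dtype=bool)
mask[16:48, 16:48] = True                 # pretend this block is diagnostic

def quantize(x, step):
    return (x // step) * step

out = np.where(mask, quantize(img, 2), quantize(img, 32))

def entropy_bits(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / x.size
    return float(-(p * np.log2(p)).sum())

print(f"{entropy_bits(img):.2f} -> {entropy_bits(out):.2f} bits/pixel; "
      f"max error inside ROI = {int(np.abs(out - img)[mask].max())}")
```

A real codec would replace the quantizer with, e.g., wavelet coefficients thresholded per region, but the trade-off it exploits is the same.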

  20. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    Science.gov (United States)

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
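
The core of the foci-quantification pipeline (segment foci, then measure fluorescent intensity per focus) can be sketched with a synthetic image. The image, focus positions, and the single intensity threshold below are assumptions for illustration; the one-preset-parameter design mirrors the abstract's claim.

```python
# Sketch: threshold a synthetic nucleus image, label connected foci, and
# sum fluorescence per focus. The threshold is the one tunable parameter.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
img = rng.normal(10, 2, (128, 128))             # background fluorescence
yy, xx = np.mgrid[0:128, 0:128]
centers = [(30, 40), (70, 90), (100, 20)]       # three synthetic gammaH2AX foci
for cy, cx in centers:
    img += 60 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

labels, n_foci = ndimage.label(img > 30)        # single preset threshold
intensities = ndimage.sum(img, labels, np.arange(1, n_foci + 1))
print(f"{n_foci} foci, total focus intensity {intensities.sum():.0f}")
```

Summed focus intensity, rather than a bare focus count, is what lets this style of analysis resolve smaller between-condition differences in DSB response.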

  1. High-content live cell imaging with RNA probes: advancements in high-throughput antimalarial drug discovery

    Directory of Open Access Journals (Sweden)

    Cervantes Serena

    2009-06-01

    Full Text Available Abstract Background Malaria, a major public health issue in developing nations, is responsible for more than one million deaths a year. The most lethal species, Plasmodium falciparum, causes up to 90% of fatalities. Strains resistant to common therapies have emerged worldwide, and recent artemisinin-based combination therapy failures heighten the need for new antimalarial drugs. Discovering novel compounds to be used as antimalarials is expedited by the use of a high-throughput screen (HTS) to detect parasite growth and proliferation. Fluorescent dyes that bind to DNA have replaced expensive traditional radioisotope incorporation for HTS growth assays, but do not give additional information regarding the parasite stage affected by the drug or a better indication of the drug's mode of action. Live cell imaging with RNA dyes, which correlates with cell growth and proliferation, has been limited by the availability of successful commercial dyes. Results After screening a library of newly synthesized styryl dyes, we discovered three RNA binding dyes that provide morphological details of live parasites. Utilizing an inverted confocal imaging platform, live cell imaging of parasites increases parasite detection, improves the spatial and temporal resolution of the parasite under drug treatments, and can resolve morphological changes in individual cells. Conclusion This simple one-step technique is suitable for automation in a microplate format for novel antimalarial compound HTS. We have developed a new P. falciparum RNA high-content imaging growth inhibition assay that is robust as well as time- and energy-efficient.

  2. Visual analytics for semantic queries of TerraSAR-X image content

    Science.gov (United States)

    Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai

    2015-10-01

    With the continuous image product acquisition of satellite missions, the size of the image archives is considerably increasing every day, as is the variety and complexity of their content, surpassing the end-user capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of the images from huge archives using different parameters like metadata, keywords, and basic image descriptors. Even though we count on more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. Thus, a systematic computational analysis of these results is required in order to provide the end-user with a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis with interactive visualization techniques for effective understanding, reasoning and decision making on the basis of very large and complex datasets. Moreover, several current research efforts are focused on associating the content of the images with semantic definitions for describing the data in a format to be easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is mainly composed of four steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product. The model is formed by primitive descriptors and metadata entries, 2) the storage of this model in a database system, 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback, and 4) querying the image archive using semantic descriptors as query parameters and computing the statistical analysis of the query results. The experimental results show that with the help of visual analytics and semantic definitions we are able to explain

  3. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    International Nuclear Information System (INIS)

    Vega, Sebastián L.; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L.; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V.

    2017-01-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet undistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised to either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates the nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. - Highlights: • High-content analysis of nuclear shape and organization classify stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image

  4. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Sebastián L. [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Liu, Er; Arvind, Varun [Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States); Bushman, Jared [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); School of Pharmacy, University of Wyoming, Laramie, WY (United States); Sung, Hak-Joon [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Department of Biomedical Engineering, Vanderbilt University, Nashville, TN (United States); Becker, Matthew L. [Department of Polymer Science and Engineering, University of Akron, Akron, OH (United States); Lelièvre, Sophie [Department of Basic Medical Sciences, Purdue University, West Lafayette, IN (United States); Kohn, Joachim [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Vidi, Pierre-Alexandre, E-mail: pvidi@wakehealth.edu [Department of Cancer Biology, Wake Forest School of Medicine, Winston-Salem, NC (United States); Moghe, Prabhas V., E-mail: moghe@rutgers.edu [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States)

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single-cell analysis of nuclear shape and organization to examine stem and progenitor cells destined for distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised for either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined for distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. - Highlights: • High-content analysis of nuclear shape and organization classifies stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image
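
    The nuclear descriptors described above combine shape metrics with texture metrics of NuMA staining. The flavor of such features can be sketched with plain numpy; here a toy disk stands in for a segmented nucleus, and the circularity and GLCM-contrast formulas are standard descriptors, not the authors' exact feature set:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the gray-level co-occurrence matrix, horizontal offset 1 (texture metric)."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize intensities
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

def shape_descriptors(mask):
    """Area and circularity (4*pi*area/perimeter^2) of a binary nuclear mask."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # interior = pixels whose four neighbours are all foreground
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())  # crude boundary-pixel count
    return area, 4 * np.pi * area / perimeter ** 2

# toy "nucleus": a uniform disk (smooth texture, nearly circular)
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
texture = np.where(mask, 0.5, 0.0)
area, circularity = shape_descriptors(mask)
features = [area, circularity, glcm_contrast(texture)]  # one row of a descriptor matrix
```

    In the paper's setting, a high-dimensional matrix of such per-nucleus features is what the parsing index summarizes.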

  5. Content-based histopathology image retrieval using CometCloud.

    Science.gov (United States)

    Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin

    2014-08-26

    The development of digital imaging technology is creating extraordinary levels of accuracy that support improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together, these facts make querying and sharing non-trivial and render centralized solutions infeasible. Moreover, in many cases these data are distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data/information-driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information/data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high performance computing (HPC) and cloud resources, we demonstrated that the method can be successfully executed in minutes on the Cloud compared to weeks using standard computers. In this paper, we present a set of newly developed CBIR algorithms and validate them using two
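
    At its core, CBIR ranks database patches by similarity of their feature vectors to a query. A minimal numpy sketch of that retrieval step (cosine similarity over made-up 16-dimensional features; the paper's actual descriptors, relevance feedback, and CometCloud federation are far richer):

```python
import numpy as np

def cosine_rank(query_vec, database):
    """Rank database feature vectors by cosine similarity to the query (descending)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = d @ q
    order = np.argsort(-sims)
    return order, sims[order]

# toy feature vectors (e.g. intensity histogram + morphology stats per image patch)
rng = np.random.default_rng(0)
database = rng.random((100, 16))
query = database[42] + 0.01 * rng.random(16)   # near-duplicate of patch 42
order, sims = cosine_rank(query, database)
# the top-ranked patch should be the near-duplicate
```

    Distributing this ranking over partitioned feature databases is what a federated CBIR deployment adds on top.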

  6. Anti-cancer agents in Saudi Arabian herbals revealed by automated high-content imaging

    KAUST Repository

    Hajjar, Dina

    2017-06-13

    Natural products have been used for medical applications since ancient times. Commonly, natural products are structurally complex chemical compounds that efficiently interact with their biological targets, making them useful drug candidates in cancer therapy. Here, we used cell-based phenotypic profiling and image-based high-content screening to study the mode of action and potential cellular targets of plants historically used in Saudi Arabia's traditional medicine. We compared the cytological profiles of fractions taken from Juniperus phoenicea (Arar), Anastatica hierochuntica (Kaff Maryam), and Citrullus colocynthis (Hanzal) with a set of reference compounds with established modes of action. Cluster analyses of the cytological profiles of the tested compounds suggested that these plants contain possible topoisomerase inhibitors that could be effective in cancer treatment. Using histone H2AX phosphorylation as a marker for DNA damage, we discovered that some of the compounds induced double-strand DNA breaks. Furthermore, chemical analysis of the active fraction isolated from Juniperus phoenicea revealed possible anti-cancer compounds. Our results demonstrate the usefulness of cell-based phenotypic screening of natural products to reveal their biological activities.
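
    Cytological profiling of this kind compares a fraction's multiparametric profile against reference compounds of known mode of action. A toy numpy sketch of that matching step (the profile values, feature set, and reference classes are invented for illustration; the paper used cluster analysis over much richer profiles):

```python
import numpy as np

def nearest_reference(profile, references):
    """Assign a putative mode of action by Pearson correlation of cytological profiles."""
    def corr(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return (a * b).mean()
    return max(references, key=lambda name: corr(profile, references[name]))

# toy multiparametric profiles (e.g. nuclear area, H2AX intensity, cell count, ...)
references = {
    "topoisomerase inhibitor": np.array([1.8, 2.5, 0.4, 1.1]),
    "microtubule disruptor":   np.array([0.6, 0.9, 0.3, 2.4]),
}
fraction_profile = np.array([1.7, 2.3, 0.5, 1.0])   # hypothetical plant fraction
```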

  7. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

    With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors, including the local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. Extensive experiments show that our proposed IRMFRCAMF can significantly outperform the state-of-the-art approaches.
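
    Of the traditional descriptors mentioned, the local binary pattern is the simplest to sketch. A minimal numpy version using the common 8-neighbour, 256-code formulation (not necessarily the exact configuration used in IRMFRCAMF):

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern codes, as a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]  # interior pixels (centres)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit  # one bit per neighbour comparison
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

flat = np.full((32, 32), 0.5)            # textureless patch -> single LBP code (255)
stripes = np.tile([0.0, 1.0], (32, 16))  # strongly textured patch -> mixed codes
h_flat, h_stripes = lbp_histogram(flat), lbp_histogram(stripes)
```

    The histogram itself is the retrieval feature; affinity metric fusion then combines it with the other descriptors.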

  8. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
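
    The digital traits extracted from plant silhouettes are typically simple geometric measurements. A hypothetical numpy sketch (the trait names and the binary test mask are invented; IH's real trait set is larger and computed from calibrated images):

```python
import numpy as np

def plant_traits(mask):
    """Architecture-related digital traits from a binary plant silhouette."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    # compactness proxy: shoot area over bounding-box area
    compactness = area / (height * width)
    return {"area": area, "height": height, "width": width,
            "compactness": float(compactness)}

# toy silhouette: a stem plus a canopy block
mask = np.zeros((100, 60), dtype=bool)
mask[20:90, 28:32] = True        # stem
mask[20:40, 10:50] = True        # canopy
traits = plant_traits(mask)
```

    Per-plant trait tables like this are what downstream genome-wide association mapping consumes.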

  9. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Wei; Shabbir, Faizan; Gong, Chao; Gulec, Cagatay; Pigeon, Jeremy; Shaw, Jessica; Greenbaum, Alon; Tochitsky, Sergei; Joshi, Chandrashekhar [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Ozcan, Aydogan, E-mail: ozcan@ucla.edu [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Bioengineering Department, University of California, Los Angeles, California 90095 (United States); California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095 (United States)

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
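
    The source-shifting pixel super-resolution idea can be illustrated with a shift-and-add toy model: several low-resolution frames, each offset by a sub-pixel amount, are placed onto a finer grid. This is a deliberately simplified numpy sketch, not the partially coherent holographic reconstruction pipeline of the paper:

```python
import numpy as np

def shift_and_add(lowres_stack, shifts, factor):
    """Shift-and-add pixel super-resolution: place sub-pixel-shifted low-res frames
    onto a `factor`-times finer grid and average overlapping contributions."""
    h, w = lowres_stack[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lowres_stack, shifts):
        ys = (np.arange(h) * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w) * factor + round(dx * factor)) % (w * factor)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return np.divide(acc, cnt, out=acc, where=cnt > 0)

# 4 low-res frames of one scene, each shifted by half a low-res pixel
rng = np.random.default_rng(0)
truth = rng.random((32, 32))
frames = [truth[::2, ::2], truth[::2, 1::2], truth[1::2, ::2], truth[1::2, 1::2]]
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
sr = shift_and_add(frames, shifts, factor=2)  # recovers the full-resolution scene
```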

  10. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...
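
    The separation step behind such an analysis can be sketched with a compact symmetric FastICA (tanh nonlinearity) on a toy two-source mixture; real text/image ICA operates on much higher-dimensional term/feature matrices:

```python
import numpy as np

def fastica_2d(X, iters=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for a 2-source mixture.
    X: (2, n) mixed signals. Returns the full (2, 2) unmixing matrix."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    white = (E / np.sqrt(d)) @ E.T          # whitening matrix
    Xw = white @ Xc
    W = np.random.default_rng(seed).standard_normal((2, 2))
    for _ in range(iters):
        WX = W @ Xw
        g, g_prime = np.tanh(WX), 1 - np.tanh(WX) ** 2
        W = (g @ Xw.T) / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)          # symmetric decorrelation
        W = u @ vt
    return W @ white                         # fold whitening into the unmixing

# two independent non-Gaussian sources, linearly mixed
rng = np.random.default_rng(1)
s1 = np.sign(np.sin(np.linspace(0, 40, 2000)))   # square wave
s2 = rng.uniform(-1, 1, 2000)                    # uniform noise
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # mixing matrix
recovered = fastica_2d(A @ S) @ (A @ S)          # sources, up to sign/permutation
```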

  11. Pancreatic size and fat content in diabetes: A systematic review and meta-analysis of imaging studies.

    Directory of Open Access Journals (Sweden)

    Tiago Severo Garcia

    Imaging studies are expected to produce reliable information regarding the size and fat content of the pancreas. However, the available studies have produced inconclusive results. The aim of this study was to perform a systematic review and meta-analysis of imaging studies assessing pancreas size and fat content in patients with type 1 diabetes (T1DM) and type 2 diabetes (T2DM). The Medline and Embase databases were searched. Studies evaluating pancreatic size (diameter, area, or volume) and/or fat content by ultrasound, computed tomography, or magnetic resonance imaging in patients with T1DM and/or T2DM as compared to healthy controls were selected. Seventeen studies including 3,403 subjects (284 T1DM patients, 1,139 T2DM patients, and 1,980 control subjects) were selected for meta-analyses. Pancreas diameter, area, volume, density, and fat percentage were evaluated. Pancreatic volume was reduced in T1DM and T2DM vs. controls (T1DM vs. controls: -38.72 cm3, 95% CI: -52.25 to -25.19, I2 = 70.2%, p for heterogeneity = 0.018; T2DM vs. controls: -12.18 cm3, 95% CI: -19.1 to -5.25, I2 = 79.3%, p for heterogeneity = 0.001). Fat content was higher in T2DM vs. controls (+2.73%, 95% CI: 0.55 to 4.91, I2 = 82.0%, p for heterogeneity < 0.001). Individuals with T1DM and T2DM have reduced pancreas size in comparison with control subjects. Patients with T2DM have increased pancreatic fat content.
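
    Pooled estimates with I2 values like those above come from a random-effects meta-analysis. A numpy sketch of the DerSimonian-Laird procedure with invented per-study numbers (not the actual study data):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with 95% CI and I^2."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1 / variances                       # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()  # Cochran's Q
    df = len(effects) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    w_star = 1 / (variances + tau2)         # random-effects weights
    pooled = (w_star * effects).sum() / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# hypothetical per-study mean differences in pancreatic volume (cm3) and variances
effects = [-45.0, -30.2, -52.1, -28.4]
variances = [16.0, 25.0, 36.0, 20.0]
pooled, ci, i2 = dersimonian_laird(effects, variances)
```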

  12. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  13. High-speed image analysis reveals chaotic vibratory behaviors of pathological vocal folds

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yu, E-mail: yuzhang@xmu.edu.c [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Shao Jun [Shanghai EENT Hospital of Fudan University, Shanghai (China); Krausert, Christopher R. [Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States); Zhang Sai [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Jiang, Jack J. [Shanghai EENT Hospital of Fudan University, Shanghai (China); Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States)

    2011-01-15

    Research highlights: → Low-dimensional human glottal area data. → Evidence of chaos in human laryngeal activity from high-speed digital imaging. → Traditional perturbation analysis should be cautiously applied to aperiodic high-speed image signals. → Nonlinear dynamic analysis may be helpful for understanding disordered behaviors in pathological laryngeal systems. - Abstract: Laryngeal pathology is usually associated with irregular dynamics of laryngeal activity. High-speed imaging facilitates direct observation and measurement of vocal fold vibrations. However, chaotic dynamic characteristics of aperiodic high-speed image data have not yet been investigated in previous studies. In this paper, we apply nonlinear dynamic analysis and traditional perturbation methods to quantify high-speed image data from normal subjects and patients with various laryngeal pathologies including vocal fold nodules, polyps, bleeding, and polypoid degeneration. The results reveal the low-dimensional dynamic characteristics of human glottal area data. In comparison to periodic glottal area series from a normal subject, aperiodic glottal area series from pathological subjects show complex reconstructed phase space, fractal dimension, and positive Lyapunov exponents. The estimated positive Lyapunov exponents provide direct evidence of chaos in pathological human vocal folds from high-speed digital imaging. Furthermore, significant differences between the normal and pathological groups are investigated for nonlinear dynamic and perturbation analyses. Jitter in the pathological group is significantly higher than in the normal group, but shimmer does not show such a difference. This finding suggests that traditional perturbation analysis should be cautiously applied to high-speed image signals. However, the correlation dimension and the maximal Lyapunov exponent reveal a statistically significant difference between normal and pathological groups. Nonlinear dynamic analysis is capable of

  14. High-speed image analysis reveals chaotic vibratory behaviors of pathological vocal folds

    International Nuclear Information System (INIS)

    Zhang Yu; Shao Jun; Krausert, Christopher R.; Zhang Sai; Jiang, Jack J.

    2011-01-01

    Research highlights: → Low-dimensional human glottal area data. → Evidence of chaos in human laryngeal activity from high-speed digital imaging. → Traditional perturbation analysis should be cautiously applied to aperiodic high-speed image signals. → Nonlinear dynamic analysis may be helpful for understanding disordered behaviors in pathological laryngeal systems. - Abstract: Laryngeal pathology is usually associated with irregular dynamics of laryngeal activity. High-speed imaging facilitates direct observation and measurement of vocal fold vibrations. However, chaotic dynamic characteristics of aperiodic high-speed image data have not yet been investigated in previous studies. In this paper, we apply nonlinear dynamic analysis and traditional perturbation methods to quantify high-speed image data from normal subjects and patients with various laryngeal pathologies including vocal fold nodules, polyps, bleeding, and polypoid degeneration. The results reveal the low-dimensional dynamic characteristics of human glottal area data. In comparison to periodic glottal area series from a normal subject, aperiodic glottal area series from pathological subjects show complex reconstructed phase space, fractal dimension, and positive Lyapunov exponents. The estimated positive Lyapunov exponents provide direct evidence of chaos in pathological human vocal folds from high-speed digital imaging. Furthermore, significant differences between the normal and pathological groups are investigated for nonlinear dynamic and perturbation analyses. Jitter in the pathological group is significantly higher than in the normal group, but shimmer does not show such a difference. This finding suggests that traditional perturbation analysis should be cautiously applied to high-speed image signals. However, the correlation dimension and the maximal Lyapunov exponent reveal a statistically significant difference between normal and pathological groups. Nonlinear dynamic
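
    The jitter and shimmer measures discussed in both records are simple cycle-to-cycle perturbation statistics. A minimal numpy sketch on synthetic period/amplitude series (illustrative only; clinical perturbation analysis uses standardized extraction from the glottal area waveform):

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute cycle-to-cycle period difference / mean period, %."""
    p = np.asarray(periods, dtype=float)
    return 100 * np.abs(np.diff(p)).mean() / p.mean()

def shimmer_percent(amplitudes):
    """Local shimmer: mean absolute cycle-to-cycle amplitude difference / mean amplitude, %."""
    a = np.asarray(amplitudes, dtype=float)
    return 100 * np.abs(np.diff(a)).mean() / a.mean()

rng = np.random.default_rng(0)
periodic = 8.0 + rng.normal(0, 0.02, 50)    # near-periodic glottal cycle durations (ms)
aperiodic = 8.0 + rng.normal(0, 0.8, 50)    # irregular cycle durations (ms)
amplitudes = 1.0 + rng.normal(0, 0.01, 50)  # stable cycle amplitudes
```

    For strongly aperiodic series, jitter inflates quickly, which is precisely why the authors caution against relying on perturbation measures alone.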

  15. Automated Slide Scanning and Segmentation in Fluorescently-labeled Tissues Using a Widefield High-content Analysis System.

    Science.gov (United States)

    Poon, Candice C; Ebacher, Vincent; Liu, Katherine; Yong, Voon Wee; Kelly, John James Patrick

    2018-05-03

    Automated slide scanning and segmentation of fluorescently-labeled tissues is the most efficient way to analyze whole slides or large tissue sections. Unfortunately, many researchers spend large amounts of time and resources developing and optimizing workflows that are only relevant to their own experiments. In this article, we describe a protocol that can be used by those with access to a widefield high-content analysis system (WHCAS) to image any slide-mounted tissue, with options for customization within pre-built modules found in the associated software. Although the WHCAS was not originally intended for slide scanning, the steps detailed in this article make it possible to acquire slide-scanning images in the WHCAS and import them into the associated software. In this example, the automated segmentation of brain tumor slides is demonstrated, but the automated segmentation of any fluorescently-labeled nuclear or cytoplasmic marker is possible. Furthermore, a variety of other quantitative software modules, including assays for protein localization/translocation, cellular proliferation/viability/apoptosis, and angiogenesis, can be run. This technique will save researchers time and effort and create an automated protocol for slide analysis.
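
    The nuclear segmentation step that such systems automate reduces, in its simplest form, to thresholding followed by connected-component labelling. A self-contained numpy sketch (Otsu threshold plus iterative flood fill; commercial HCAS modules are considerably more sophisticated):

```python
import numpy as np

def otsu_threshold(img, bins=64):
    """Otsu's threshold: maximize between-class variance over the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def label_components(mask):
    """4-connected component labelling of a binary mask (iterative flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        stack = [(y, x)]
        while stack:
            i, j = stack.pop()
            if not (0 <= i < mask.shape[0] and 0 <= j < mask.shape[1]):
                continue
            if not mask[i, j] or labels[i, j]:
                continue
            labels[i, j] = current
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return labels, current

# toy field of view: two bright "nuclei" on a dark background
img = np.full((64, 64), 0.1)
yy, xx = np.mgrid[:64, :64]
img[(yy - 16) ** 2 + (xx - 16) ** 2 <= 64] = 0.9
img[(yy - 45) ** 2 + (xx - 45) ** 2 <= 100] = 0.9
mask = img > otsu_threshold(img)
labels, n_nuclei = label_components(mask)
```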

  16. A parallel solution for high resolution histological image analysis.

    Science.gov (United States)

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several Gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides are subject to is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested by using the following parallel computing architectures: distributed memory with massively parallel processors and two networks, InfiniBand and Myrinet, composed of 17 and 1024 nodes respectively. The proposed parallel framework is a flexible, high-performance solution, and it shows that the efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
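
    The tile-decompose/process/stitch pattern underlying such pipelines can be sketched in miniature. This toy version uses Python threads on a small in-memory array rather than message passing across cluster nodes, but the data decomposition is the same idea:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    """Per-tile work: a simple contrast stretch (stands in for heavier analysis)."""
    lo, hi = tile.min(), tile.max()
    return (tile - lo) / (hi - lo) if hi > lo else tile

def parallel_map_tiles(img, tile_size, workers=4):
    """Split a large image into tiles, process them concurrently, and stitch the result."""
    h, w = img.shape
    coords = [(y, x) for y in range(0, h, tile_size) for x in range(0, w, tile_size)]
    tiles = [img[y:y + tile_size, x:x + tile_size] for y, x in coords]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_tile, tiles))
    out = np.empty_like(img, dtype=float)
    for (y, x), t in zip(coords, results):
        out[y:y + tile_size, x:x + tile_size] = t
    return out

img = np.random.default_rng(0).random((256, 256))   # stand-in for a gigapixel slide
out = parallel_map_tiles(img, tile_size=64)
```

    On a real cluster, the tile list would be scattered across nodes (e.g. with MPI) instead of mapped over a thread pool, and tiles would overlap by a halo for neighbourhood operations.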

  17. G protein-coupled receptor internalization assays in the high-content screening format.

    Science.gov (United States)

    Haasen, Dorothea; Schnapp, Andreas; Valler, Martin J; Heilker, Ralf

    2006-01-01

    High-content screening (HCS), a combination of fluorescence microscopic imaging and automated image analysis, has become a frequently applied tool to study test compound effects in cellular disease-modeling systems. This chapter describes the measurement of G protein-coupled receptor (GPCR) internalization in the HCS format using a high-throughput, confocal cellular imaging device. GPCRs are the most successful group of therapeutic targets on the pharmaceutical market. Accordingly, the search for compounds that interfere with GPCR function in a specific and selective way is a major focus of the pharmaceutical industry today. This chapter describes methods for ligand-induced internalization of GPCRs labeled with either a fluorophore-conjugated ligand or an antibody directed against an N-terminal tag of the GPCR. Both labeling techniques produce robust assay formats. Complementary to other functional GPCR drug discovery assays, internalization assays enable a pharmacological analysis of test compounds. We conclude that GPCR internalization assays represent a valuable medium/high-throughput screening format for determining the cellular activity of GPCR ligands.
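
    Internalization readouts of this kind are typically quantified by comparing labelled-receptor signal at the membrane versus in cytoplasmic vesicles. A toy numpy sketch of such an index (masks and intensities are synthetic; real HCS algorithms segment these compartments per cell):

```python
import numpy as np

def internalization_index(cell_img, membrane_mask, cytoplasm_mask):
    """Ratio of mean labelled-receptor intensity in the cytoplasm vs. at the membrane.
    Rises as a tagged GPCR moves from the cell surface into intracellular vesicles."""
    return cell_img[cytoplasm_mask].mean() / cell_img[membrane_mask].mean()

# synthetic single cell: a membrane ring and a cytoplasmic disk
yy, xx = np.mgrid[:32, :32]
r = np.hypot(yy - 16, xx - 16)
membrane = (r > 12) & (r <= 14)
cytoplasm = r <= 10
img_rest = np.zeros((32, 32))
img_stim = np.zeros((32, 32))
img_rest[membrane], img_rest[cytoplasm] = 1.0, 0.1   # receptor at the surface
img_stim[membrane], img_stim[cytoplasm] = 0.3, 0.8   # receptor internalized after agonist
```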

  18. Ultra-high performance, solid-state, autoradiographic image digitization and analysis system

    International Nuclear Information System (INIS)

    Lear, J.L.; Pratt, J.P.; Ackermann, R.F.; Plotnick, J.; Rumley, S.

    1990-01-01

    We developed a Macintosh II-based, charge-coupled device (CCD), image digitization and analysis system for high-speed, high-resolution quantification of autoradiographic image data. A linear CCD array with 3,500 elements was attached to a precision drive assembly and mounted behind a high-uniformity lens. The drive assembly was used to sweep the array perpendicularly to its axis so that an entire 20 × 25 cm autoradiographic image-containing film could be digitized into 256 gray levels at 50-micron resolution in less than 30 sec. The scanner was interfaced to a Macintosh II computer through a specially constructed NuBus circuit board, and software was developed for autoradiographic data analysis. The system was evaluated by scanning individual films multiple times, then measuring the variability of the digital data between the different scans. Image data were found to be virtually noise free. The coefficient of variation averaged less than 1%, a value significantly exceeding the accuracy of both high-speed, low-resolution video camera (VC) systems and low-speed, high-resolution rotating drum densitometers (RDD). Thus, the CCD scanner-Macintosh computer analysis system offers the advantage over VC systems of the ability to digitize entire films containing many autoradiograms, but with much greater speed and accuracy than achievable with RDD scanners
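
    The scan-to-scan repeatability figure quoted (coefficient of variation under 1%) can be computed as follows; this numpy sketch uses synthetic repeated scans with an invented noise level:

```python
import numpy as np

def scan_cv_percent(scans):
    """Per-pixel coefficient of variation (std/mean, %) across repeated scans,
    averaged over the image -- a simple repeatability figure for a digitizer."""
    stack = np.stack(scans).astype(float)
    mean = stack.mean(axis=0)
    cv = stack.std(axis=0) / np.where(mean > 0, mean, 1)
    return 100 * cv.mean()

rng = np.random.default_rng(0)
truth = rng.uniform(50, 200, (64, 64))                      # "film" gray levels
scans = [truth + rng.normal(0, 0.5, truth.shape) for _ in range(5)]
repeatability = scan_cv_percent(scans)
```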

  19. High content analysis platform for optimization of lipid mediated CRISPR-Cas9 delivery strategies in human cells.

    Science.gov (United States)

    Steyer, Benjamin; Carlson-Stevermer, Jared; Angenent-Mari, Nicolas; Khalil, Andrew; Harkness, Ty; Saha, Krishanu

    2016-04-01

    Non-viral gene-editing of human cells using the CRISPR-Cas9 system requires optimized delivery of multiple components. Both the Cas9 endonuclease and a single guide RNA, which defines the genomic target, need to be present and co-localized within the nucleus for efficient gene-editing to occur. This work describes a new high-throughput screening platform for the optimization of CRISPR-Cas9 delivery strategies. By exploiting high-content image analysis and microcontact-printed plates, multi-parametric gene-editing outcome data from hundreds to thousands of isolated cell populations can be screened simultaneously. Employing this platform, we systematically screened four commercially available cationic lipid transfection materials with a range of RNAs encoding the CRISPR-Cas9 system. Analysis of Cas9 expression and editing of a fluorescent mCherry reporter transgene within human embryonic kidney cells was monitored over several days after transfection. Design-of-experiments analysis enabled rigorous evaluation of delivery materials and RNA concentration conditions. The results of this analysis indicated that the concentration and identity of the transfection material have a significantly greater effect on gene-editing than the ratio or total amount of RNA. Cell subpopulation analysis on microcontact-printed plates further revealed that low cell number and high Cas9 expression, 24 h after CRISPR-Cas9 delivery, were strong predictors of gene-editing outcomes. These results suggest design principles for the development of materials and transfection strategies with lipid-based materials. This platform could be applied to rapidly optimize materials for gene-editing in a variety of cell/tissue types in order to advance genomic medicine, regenerative biology and drug discovery. CRISPR-Cas9 is a new gene-editing technology for "genome surgery" that is anticipated to treat genetic diseases. This technology uses multiple components of the Cas9 system to cut out disease-causing mutations
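
    The design-of-experiments conclusion (material identity and concentration mattering more than RNA ratio) rests on effect estimation from factorial screens. A minimal sketch with a two-level 2^2 design and invented editing percentages:

```python
import numpy as np

def main_effects(design, response):
    """Main effects from a two-level full-factorial screen:
    mean response at the high level minus mean at the low level, per factor."""
    design = np.asarray(design)
    response = np.asarray(response, float)
    return {f"factor_{i}": response[design[:, i] == 1].mean()
                         - response[design[:, i] == -1].mean()
            for i in range(design.shape[1])}

# hypothetical 2^2 screen: factor 0 = lipid concentration, factor 1 = RNA ratio
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
editing = np.array([5.0, 21.0, 6.0, 24.0])   # % edited cells (toy numbers)
effects = main_effects(design, editing)       # factor 0 dominates in this toy data
```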

  20. Recent advances in quantitative high throughput and high content data analysis.

    Science.gov (United States)

    Moutsatsos, Ioannis K; Parker, Christian N

    2016-01-01

    High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from imaging or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available, which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality-control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations, and the best features in massive data sets will be required. Ease of use will be important, as these tools will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.

  1. A Sensitive Measurement for Estimating Impressions of Image-Contents

    Science.gov (United States)

    Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao

    We have investigated Kansei Content, which conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a good way to evaluate the subjective impression of image content. However, because the SD method is applied after subjects view the content, it is difficult to examine impressions of detailed scenes in real time. To measure the viewer's impression of image content in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate the relations among the image content, grip strength and body temperature. We also explore the sensor's interface so that it can be used easily. In our experiment, we used a horror movie that strongly affects the subjects' emotions. Our results show that grip strength may increase when the subjects view a tense scene, and that the Taikan sensor is easy to use without the circular base originally installed.

  2. An Analysis of Images of Contention and Violence in Dagara and Akan Proverbial Expressions

    Directory of Open Access Journals (Sweden)

    Martin Kyiileyang

    2017-04-01

    Proverbial expressions have typical linguistic and figurative features that are normally captivating to the listener. The expressive culture of the Dagara and Akan societies is embellished by these proverbial expressions. Most African proverbs express various images depicting both pleasant and unpleasant situations in life. Unpleasant language normally evokes terrifying images, particularly when threats, insults and other forms of abuse are traded vehemently. Dagara and Akan proverbs are no exception to this phenomenon. This paper examines images of contention and violence depicted in Akan and Dagara proverbial expressions. To achieve this, a variety of Akan and Dagara proverbs were analysed for their meanings using Yankah's and Honeck's theories. The results revealed that, structurally, as with many proverbs, the Akan and Dagara proverbial expressions are pithy and terse. The most dominant images of contention and violence in these expressions expose negative values and perceptions about the people who speak these languages.

  3. High resolution, high sensitivity imaging and analysis of minerals and inclusions (fluid and melt) using the new CSIRO-GEMOC nuclear microprobe

    International Nuclear Information System (INIS)

    Ryan, C.G.; McInnes, B.M.; Van Achterbergh, E.; Williams, P.J.; Dong, G.; Zaw, K.

    1999-01-01

    (e.g. Yankee Lode, Mole Granite, NSW [Heinrich et al., 1993] and Batu Hijau, Indonesia [McInnes et al., 1999]), and the high concentrations of some elements in many ore-related fluid inclusions [e.g. Pb ∼4 wt% at Hellyer, Tasmania (Khin Zaw et al., 1996) and Ba ∼9 wt% at Starra, Cloncurry district, Queensland (Williams et al., 2000)]. Now, using the NMP, the internal contents of individual fluid inclusions can be imaged to show clearly that these elements reside within the fluid inclusions, and to discriminate against solid phases outside the inclusion volume. Melt Inclusion Analysis and Imaging: Samples of the melts and fluids responsible for metasomatic change and evolution of the Earth's upper mantle are often preserved as inclusions in xenoliths. However, their quench textures can often conceal rare minor phases that concentrate important trace elements (e.g. HFSE and REE). The penetration of MeV protons enables the detection of these contributions to ∼40 μm depth, thus providing a tool to determine reliable melt compositions, with detection sensitivities down to 0.2 ppm, and to image the spatial variation of component elements at 1-2 μm resolution. Copyright (1999) Geological Society of Australia

  4. Uses of software in digital image analysis: a forensic report

    Science.gov (United States)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. Its applications in forensic science range from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of analysis performed. In this paper the authors explain these tasks, grouped into three categories: image compression, image enhancement and restoration, and measurement extraction. The tasks are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and Corel Draw.

  5. Cell Painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes

    Science.gov (United States)

    Bray, Mark-Anthony; Singh, Shantanu; Han, Han; Davis, Chadwick T.; Borgeson, Blake; Hartland, Cathy; Kost-Alimova, Maria; Gustafsdottir, Sigrun M.; Gibson, Christopher C.; Carpenter, Anne E.

    2016-01-01

    In morphological profiling, quantitative data are extracted from microscopy images of cells to identify biologically relevant similarities and differences among samples based on these profiles. This protocol describes the design and execution of experiments using Cell Painting, a morphological profiling assay that multiplexes six fluorescent dyes, imaged in five channels, to reveal eight broadly relevant cellular components or organelles. Cells are plated in multi-well plates, perturbed with the treatments to be tested, stained, fixed, and imaged on a high-throughput microscope. Automated image analysis software then identifies individual cells and measures ~1,500 morphological features (various measures of size, shape, texture, intensity, etc.) to produce a rich profile suitable for detecting subtle phenotypes. Profiles of cell populations treated with different experimental perturbations can be compared to suit many goals, such as identifying the phenotypic impact of chemical or genetic perturbations, grouping compounds and/or genes into functional pathways, and identifying signatures of disease. Cell culture and image acquisition take two weeks; feature extraction and data analysis take an additional 1-2 weeks. PMID:27560178
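
    The protocol's automated software measures ~1,500 features per cell; as a minimal, hypothetical illustration of the idea (not the actual pipeline), per-cell measurement can be reduced to thresholding, connected-component labeling, and a couple of simple features on a synthetic image:

```python
import numpy as np
from scipy import ndimage as ndi

def cell_features(image, threshold):
    """Segment bright objects by a global threshold and measure simple
    per-object features -- a toy stand-in for the ~1,500 morphological
    features a real profiling pipeline extracts."""
    mask = image > threshold
    labels, n = ndi.label(mask)  # connected-component labeling
    feats = []
    for i in range(1, n + 1):
        obj = labels == i
        feats.append({"label": i,
                      "area": int(obj.sum()),
                      "mean_intensity": float(image[obj].mean())})
    return feats

# Synthetic image: two square "cells" on a dark background.
img = np.zeros((32, 32))
img[4:10, 4:10] = 1.0      # 36-pixel bright cell
img[20:28, 18:26] = 0.5    # 64-pixel dimmer cell
for f in cell_features(img, 0.25):
    print(f)
```

    Stacking such per-cell measurements across wells yields the profile matrix that downstream analysis (clustering, hit picking) operates on; the threshold value here is an assumption for the toy image.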

  6. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context-based method for content-progressive coding of limited-bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  7. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
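
    The functional templates above are described as generalized matched filters; a bare-bones sketch of plain matched filtering (not TIM's knowledge-based version, and with a made-up cross-shaped template) locates a pattern by correlating a zero-mean template at every image position:

```python
import numpy as np

def matched_filter_map(image, template):
    """Correlate a zero-mean template with every image position and return
    the response map; the peak marks the best match.  A bare-bones stand-in
    for a generalized matched filter."""
    t = template - template.mean()
    th, tw = t.shape
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + th, x:x + tw] * t).sum()
    return out

# Synthetic image containing one cross-shaped pattern.
tmpl = np.array([[0., 1., 0.],
                 [1., 1., 1.],
                 [0., 1., 0.]])
img = np.zeros((10, 10))
img[4:7, 5:8] = tmpl
resp = matched_filter_map(img, tmpl)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # (row, col) of the best match
```

    TIM's templates add knowledge-based processing on top of this basic correlation; the sketch shows only the matched-filter core.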

  8. Ontology of Gaps in Content-Based Image Retrieval

    OpenAIRE

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2008-01-01

    Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potential for making a strong impact in diagnostics, research, and education. Research as reported in the scientific literature, however, has not made significant inroads as medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the ina...

  9. Image Quality in High-resolution and High-cadence Solar Imaging

    Science.gov (United States)

    Denker, C.; Dineva, E.; Balthasar, H.; Verma, M.; Kuckein, C.; Diercke, A.; González Manrique, S. J.

    2018-03-01

    Broad-band imaging and even imaging with a moderate bandpass (about 1 nm) provides a photon-rich environment, where frame selection (lucky imaging) becomes a helpful tool in image restoration, allowing us to perform a cost-benefit analysis on how to design observing sequences for imaging with high spatial resolution in combination with real-time correction provided by an adaptive optics (AO) system. This study presents high-cadence (160 Hz) G-band and blue continuum image sequences obtained with the High-resolution Fast Imager (HiFI) at the 1.5-meter GREGOR solar telescope, where the speckle-masking technique is used to restore images with nearly diffraction-limited resolution. The HiFI employs two synchronized large-format and high-cadence sCMOS detectors. The median filter gradient similarity (MFGS) image-quality metric is applied, among others, to AO-corrected image sequences of a pore and a small sunspot observed on 2017 June 4 and 5. A small region of interest, which was selected for fast-imaging performance, covered these contrast-rich features and their neighborhood, which were part of Active Region NOAA 12661. Modifications of the MFGS algorithm uncover the field- and structure-dependency of this image-quality metric. However, MFGS still remains a good choice for determining image quality without a priori knowledge, which is an important characteristic when classifying the huge number of high-resolution images contained in data archives. In addition, this investigation demonstrates that a fast cadence and millisecond exposure times are still insufficient to reach the coherence time of daytime seeing. Nonetheless, the analysis shows that data acquisition rates exceeding 50 Hz are required to capture a substantial fraction of the best seeing moments, significantly boosting the performance of post-facto image restoration.

  10. Computer-aided detection of basal cell carcinoma through blood content analysis in dermoscopy images

    Science.gov (United States)

    Kharazmi, Pegah; Kalia, Sunil; Lui, Harvey; Wang, Z. Jane; Lee, Tim K.

    2018-02-01

    Basal cell carcinoma (BCC) is the most common type of skin cancer, which is highly damaging to the skin at its advanced stages and imposes substantial costs on the healthcare system. However, most types of BCC are easily curable if detected at an early stage. Due to limited access to dermatologists and expert physicians, non-invasive computer-aided diagnosis is a viable option for skin cancer screening. A clinical biomarker of cancerous tumors is increased vascularization and excess blood flow. In this paper, we present a computer-aided technique to differentiate cancerous skin tumors from benign lesions based on the vascular characteristics of the lesions. A dermoscopy image of the lesion is first decomposed using independent component analysis of the RGB channels to derive melanin and hemoglobin maps. A novel set of clinically inspired features and ratiometric measurements is then extracted from each map to characterize the vascular properties and blood content of the lesion. The feature set is fed into a random forest classifier. Over a dataset of 664 skin lesions, the proposed method achieved an area under the ROC curve of 0.832 in a 10-fold cross-validation for differentiating basal cell carcinomas from benign lesions.
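
    A hedged sketch of this idea, not the paper's exact pipeline: the paper derives melanin and hemoglobin maps with independent component analysis, while the self-contained toy below uses a fixed, made-up unmixing of per-channel optical densities, extracts simple blood-content features, and classifies synthetic "lesions" with a random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Assumed unmixing weights (illustrative only; the paper learns maps via ICA).
UNMIX = np.array([[1.0, -0.5, -0.5],    # hemoglobin-like component
                  [-0.5, 1.0, -0.5]])   # melanin-like component

def blood_features(rgb):
    """Simple blood-content statistics of a hemoglobin-like map."""
    od = -np.log(np.clip(rgb, 1e-3, 1.0))   # optical density per channel
    maps = od.reshape(-1, 3) @ UNMIX.T      # (pixels, 2) component maps
    hemo = maps[:, 0]
    return np.array([hemo.mean(), hemo.std(), (hemo > hemo.mean()).mean()])

def toy_lesion(vascular):
    """Synthetic 16x16 RGB patch; 'vascular' patches absorb more green/blue."""
    img = 0.5 + 0.2 * rng.random((16, 16, 3))
    if vascular:
        img[..., 1:] *= 0.6
    return img

X = np.array([blood_features(toy_lesion(v)) for v in [0] * 20 + [1] * 20])
y = np.array([0] * 20 + [1] * 20)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

    On real dermoscopy images one would evaluate with cross-validation, as the paper does with its 10-fold AUC of 0.832; the toy classes here are deliberately easy to separate.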

  11. Detecting content adaptive scaling of images for forensic applications

    Science.gov (United States)

    Fillion, Claude; Sharma, Gaurav

    2010-01-01

    Content-aware resizing methods have recently been developed, among which, seam-carving has achieved the most widespread use. Seam-carving's versatility enables deliberate object removal and benign image resizing, in which perceptually important content is preserved. Both types of modifications compromise the utility and validity of the modified images as evidence in legal and journalistic applications. It is therefore desirable that image forensic techniques detect the presence of seam-carving. In this paper we address detection of seam-carving for forensic purposes. As in other forensic applications, we pose the problem of seam-carving detection as the problem of classifying a test image in either of two classes: a) seam-carved or b) non-seam-carved. We adopt a pattern recognition approach in which a set of features is extracted from the test image and then a Support Vector Machine based classifier, trained over a set of images, is utilized to estimate which of the two classes the test image lies in. Based on our study of the seam-carving algorithm, we propose a set of intuitively motivated features for the detection of seam-carving. Our methodology for detection of seam-carving is then evaluated over a test database of images. We demonstrate that the proposed method provides the capability for detecting seam-carving with high accuracy. For images which have been reduced 30% by benign seam-carving, our method provides a classification accuracy of 91%.
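
    As an illustration of the feature-extraction step, the sketch below computes a simplified, assumed feature set in the spirit of the paper's "intuitively motivated features": statistics of the L1 gradient-energy map, whose low-energy regions seam carving preferentially removes (the specific features are this sketch's assumption, not the paper's):

```python
import numpy as np

def seam_energy_features(gray):
    """Statistics of the gradient-energy map of a grayscale image.

    Seam carving removes low-energy seams, so the energy distribution of a
    carved image is shifted relative to an untouched one; a classifier can
    be trained on such feature vectors.
    """
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)
    return np.array([energy.mean(), energy.std(),
                     np.percentile(energy, 10), np.percentile(energy, 90)])

ramp = np.outer(np.arange(8.0), np.ones(8))   # smooth vertical ramp image
print(seam_energy_features(ramp))
```

    In the paper, vectors like these (extracted from seam-carved and untouched training images) feed a Support Vector Machine; only the feature side is sketched here.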

  12. Innovation management and marketing in the high-tech sector: A content analysis of advertisements

    DEFF Research Database (Denmark)

    Gerhard, D.; Brem, Alexander; Baccarella, Ch.

    2011-01-01

    Advertising high-technology products is a tricky and critical task for every company, since it means operating in an environment with high market uncertainty. The work presents results of a content analysis of 110 adverts for consumer electronics products which examines how these products and the

  13. Insight into dynamic genome imaging: Canonical framework identification and high-throughput analysis.

    Science.gov (United States)

    Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John

    2017-07-01

    The human genome is dynamic in structure, complicating researchers' attempts at fully understanding it. Time-series fluorescence in situ hybridization (FISH) imaging has increased our ability to observe genome structure, but due to cell-type and experimental variability these data are often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection in the case of time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined, with extension to 3D image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Automated microscopy for high-content RNAi screening

    Science.gov (United States)

    2010-01-01

    Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920

  15. Teachable, high-content analytics for live-cell, phase contrast movies.

    Science.gov (United States)

    Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J

    2010-09-01

    CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Like application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes against ground truth created either manually or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for that movie), verified by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth, with low variation. These data confirm that the CL-Quant standard recipes can provide robust results without customization for live-cell assays across broad cell types and laboratory settings.

  16. Content-based image retrieval applied to bone age assessment

    Science.gov (United States)

    Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.

    2010-03-01

    Radiological bone age assessment is based on local image regions of interest (ROIs), such as the epiphyses or the area of the carpal bones. These are compared to a standardized reference, and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis have so far been done mainly by heuristic approaches. Due to high variation in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interaction. Alternatively, epiphyseal regions (eROIs) can be compared to previous cases of known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliably positioned eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left-hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods, and the number of considered CBIR references are analyzed. The best results yield an error rate of 1.16 years and a standard deviation of 0.85 years. As the appearance of the hand naturally varies by up to two years, these results clearly demonstrate the applicability of the CBIR approach to bone age estimation.
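
    The similarity measure named above, cross-correlation of 16x16 scaled eROIs, can be sketched directly; the toy retrieval below (with synthetic eROIs and made-up reference ages) returns the age of the most similar reference:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized eROIs
    (assumed already scaled to 16x16); a similarity score in [-1, 1]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def retrieve_age(query, references):
    """references: list of (eROI, age).  Return the age of the most
    similar reference (the paper also studies averaging over several
    nearest references)."""
    scores = [ncc(query, roi) for roi, _ in references]
    return references[int(np.argmax(scores))][1]

rng = np.random.default_rng(1)
roi_young = rng.random((16, 16))
roi_old = rng.random((16, 16))
refs = [(roi_young, 8.0), (roi_old, 14.0)]
query = roi_old + 0.05 * rng.random((16, 16))   # noisy copy of the older eROI
print(retrieve_age(query, refs))  # -> 14.0
```

    Leave-one-out evaluation, as in the paper, would repeat this retrieval with each case in turn removed from the reference set.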

  17. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means of image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are encoded with convolutional codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Such data perturbations cannot be eliminated, but their effect can be minimized by using Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. In the proposed algorithm, the message of each pixel is convolutionally encoded; after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can then be detected and restored without accessing the original image.
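
    A drastically simplified stand-in for this scheme (a single parity bit per pixel instead of a real convolutional code, so this is an illustration of the embed-in-low-bits idea only, not the paper's algorithm): store the parity of each pixel's high bits in its least-significant bit, then flag pixels whose parity no longer checks:

```python
import numpy as np

def embed_parity(img):
    """Store the parity of each pixel's 7 high bits in its LSB.
    One parity bit cannot correct errors and misses an even number of bit
    flips in a pixel -- which is why real schemes use stronger FEC codes."""
    high = img >> 1
    parity = np.bitwise_xor.reduce([(high >> k) & 1 for k in range(7)])
    return (high << 1) | parity

def tampered_pixels(img):
    """Boolean mask of pixels whose stored parity no longer matches."""
    high = img >> 1
    parity = np.bitwise_xor.reduce([(high >> k) & 1 for k in range(7)])
    return parity != (img & 1)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
marked = embed_parity(img)
forged = marked.copy()
forged[3, 3] ^= 0b01000000               # flip one high bit of one pixel
print(tampered_pixels(marked).sum())     # 0: untouched image passes the check
print(tampered_pixels(forged).sum())     # 1: the altered pixel is localized
```

    The paper's convolutional code plays the role of this parity bit but with enough redundancy to localize and even restore tampered content.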

  18. HC StratoMineR: A Web-Based Tool for the Rapid Analysis of High-Content Datasets.

    Science.gov (United States)

    Omta, Wienand A; van Heesbeen, Roy G; Pagliero, Romina J; van der Velden, Lieke M; Lelieveld, Daphne; Nellen, Mehdi; Kramer, Maik; Yeong, Marley; Saeidi, Amir M; Medema, Rene H; Spruit, Marco; Brinkkemper, Sjaak; Klumperman, Judith; Egan, David A

    2016-10-01

    High-content screening (HCS) can generate large multidimensional datasets and, when aligned with the appropriate data mining tools, can yield valuable insights into the mechanism of action of bioactive molecules. However, easy-to-use data mining tools are not widely available, with the result that these datasets are frequently underutilized. Here, we present HC StratoMineR, a web-based tool for high-content data analysis. It is a decision-supportive platform that guides even non-expert users through a high-content data analysis workflow. HC StratoMineR is built using MySQL for storage and querying, PHP as the main programming language, and jQuery for additional user-interface functionality. R is used for statistical calculations, logic and data visualizations. Furthermore, C++ and graphics-processing-unit power are embedded throughout R via the rcpp and rpud libraries for computationally highly intensive operations. We show that HC StratoMineR can be used for the analysis of multivariate data from a high-content siRNA knock-down screen and a small-molecule screen. It can be used to rapidly filter out undesirable data; to select relevant data; and to perform quality control, data reduction, data exploration, morphological hit picking, and data clustering. Our results demonstrate that HC StratoMineR can be used to functionally categorize HCS hits and, thus, provide valuable information for hit prioritization.

  19. Utilizing a Photo-Analysis Software for Content Identifying Method (CIM

    Directory of Open Access Journals (Sweden)

    Nejad Nasim Sahraei

    2015-01-01

    Content Identifying Methodology (CIM) was developed to measure public preferences in order to reveal the common characteristics of landscapes and aspects of underlying perceptions, including the individual's reactions to content and spatial configuration; it can therefore assist with the identification of factors that influence preference. For the analysis of landscape photographs through CIM, several studies have utilized image analysis software, such as Adobe Photoshop, to identify the physical contents of the scenes. This study evaluates the public's preferences for the aesthetic qualities of pedestrian bridges in urban areas through a photo-questionnaire survey, in which respondents evaluated images of pedestrian bridges. Two groups of images, the most and least preferred scenes (those with the highest and lowest mean scores, respectively), were analyzed by CIM and also evaluated on the basis of the respondents' descriptions of each group, to reveal the pattern of preferences and the factors that may affect them. Digimizer software was employed to triangulate the two approaches and to determine the role of these factors in people's preferences. The study introduces useful software for image analysis that can measure the physical contents of scenes as well as their spatial organization. The findings reveal that Digimizer could be a useful tool in CIM approaches to preference studies that utilize photographs in place of the actual landscape in order to determine the most important factors in public preferences for pedestrian bridges in urban areas.

  20. Information content of Poisson images

    International Nuclear Information System (INIS)

    Cederlund, J.

    1979-04-01

    One major problem when producing images with the aid of Poisson-distributed quanta is how best to compromise between spatial and contrast resolution. Increasing the number of image elements improves spatial resolution, but at the cost of fewer quanta per image element, which reduces contrast resolution. Information-theory arguments are used to analyse this problem. It is argued that information capacity is a useful concept for describing an important property of the imaging device, but that in order to compute the information content of an image produced by this device, some statistical properties of the object to be depicted (such as the a priori probability of the densities) must be taken into account. If these statistical properties are not known, one cannot make a correct choice between spatial and contrast resolution. (author)
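
    The compromise can be made concrete with a back-of-envelope model (an illustrative assumption of this note, not the paper's analysis): with a fixed budget of Q detected quanta spread over N image elements, each element counts lambda = Q/N quanta with shot noise ~ sqrt(lambda), so roughly sqrt(lambda) gray levels are distinguishable, i.e. about 0.5*log2(lambda) bits per element:

```python
import numpy as np

def total_bits(Q, N):
    """Approximate total image information (bits) for Q quanta over N
    elements under the sqrt-shot-noise model; zero once an element holds
    too few quanta to distinguish any gray levels."""
    lam = Q / N
    return 0.5 * N * np.log2(lam) if lam > 1 else 0.0

Q = 1_000_000  # fixed quantum budget
for N in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"{N:>7} elements: {total_bits(Q, N):12.0f} bits")
```

    Total information first grows with N and then collapses once each element holds too few quanta: exactly the spatial-versus-contrast compromise, and the paper's point is that the right N also depends on the statistics of the depicted object, which this crude model ignores.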

  1. Qualitative Content Analysis

    OpenAIRE

    Philipp Mayring

    2000-01-01

    The article describes an approach of systematic, rule-guided qualitative text analysis, which tries to preserve some methodological strengths of quantitative content analysis and widen them into a concept of qualitative procedure. First, the development of content analysis is delineated and the basic principles are explained (units of analysis, step models, working with categories, validity and reliability). Then the central procedures of qualitative content analysis, inductive development of ca...

  2. Quantitative image analysis of vertebral body architecture - improved diagnosis in osteoporosis based on high-resolution computed tomography

    International Nuclear Information System (INIS)

    Mundinger, A.; Wiesmeier, B.; Dinkel, E.; Helwig, A.; Beck, A.; Schulte Moenting, J.

    1993-01-01

    71 women, 64 of them post-menopausal, were examined by single-energy quantitative computed tomography (SEQCT) and by high-resolution computed tomography (HRCT) scans through the middle of the lumbar vertebral bodies. Computer-assisted image analysis of the high-resolution images assessed the trabecular morphometry of the vertebral spongiosa texture. Texture parameters differed between women with and without age-reduced bone density, and within the former group also between patients with and without vertebral fractures. Discriminating parameters were the total number, diameter and variance of trabecular and intertrabecular spaces, as well as the trabecular surface (p < 0.05). A texture index based on these statistically selected morphometric parameters identified a subgroup of patients suffering from fractures due to abnormal spongiosal architecture but with a bone mineral content not indicative of increased fracture risk. Combining osteodensitometry with trabecular morphometry improves the diagnosis of osteoporosis and may contribute to the prediction of individual fracture risk. (author)

  3. An IBM PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  4. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used for color-based retrieval of histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta-features. For texture-based retrieval, the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiments, the histogram with meta-features in color-based retrieval of histopathological images shows a precision of 60% and a recall of 30%, whereas GLCM in texture-based retrieval of MRI images shows a precision of 70% and a recall of 20%. Shape-based retrieval of MRI images shows a precision of 50% and a recall of 25%. The retrieval results show that this simple approach is successful.
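
    The GLCM used for the texture-based retrieval can be sketched in a few lines; the minimal version below builds the co-occurrence table for one pixel offset and derives three classic texture features (contrast, energy, homogeneity) on synthetic textures:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to
    a joint probability table over (reference, neighbor) gray levels."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(img, levels):
    """Classic GLCM texture descriptors used in retrieval systems."""
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    return {"contrast": float(((i - j) ** 2 * p).sum()),
            "energy": float((p ** 2).sum()),
            "homogeneity": float((p / (1.0 + np.abs(i - j))).sum())}

flat = np.zeros((8, 8), dtype=int)        # uniform texture
stripes = np.tile([0, 3], (8, 4))         # high-contrast vertical stripes
print(glcm_features(flat, 4))
print(glcm_features(stripes, 4))
```

    In a retrieval system, such feature vectors computed for the query and for every database image are compared by a distance measure to produce the ranked results whose precision and recall are reported above.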

  5. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  6. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text; as the adage says, "one picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and little background knowledge are given. The advance of Internet and web technology in the past decade has changed the way humans gain knowledge. People can hence exchange knowledge with others by discussing and contributing information on the web. As a result, web pages on the Internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the Internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords, and a semantic ontology constituting general human knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts, so that machines can understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. 
The novelty of the system is twofold: first, images are retrieved based not only on text cues but on their actual contents as well; second, the grouping

  7. Segmental Quantitative MR Imaging analysis of diurnal variation of water content in the lumbar intervertebral discs

    International Nuclear Information System (INIS)

    Zhu, Ting Ting; Ai, Tao; Zhang, Wei; Li, Tao; Li, Xiao Ming

    2015-01-01

    To investigate the changes in water content of the lumbar intervertebral discs by quantitative T2 MR imaging in the morning after bed rest and in the evening after a diurnal load. Twenty healthy volunteers were separately examined in the morning after bed rest and in the evening after finishing daily work. T2-mapping images were obtained and analyzed. An equally-sized rectangular region of interest (ROI) was manually placed in both the anterior and the posterior annulus fibrosus (AF), in the outermost 20% of the disc. Three ROIs were placed in the space defined as the nucleus pulposus (NP). Repeated-measures analysis of variance and paired 2-tailed t tests were used for statistical analysis, with p < 0.05 considered significant. T2 values significantly decreased from morning to evening in the NP (anterior NP = -13.9 ms; central NP = -17.0 ms; posterior NP = -13.3 ms; all p < 0.001). Meanwhile, T2 values significantly increased in the anterior AF (+2.9 ms; p = 0.025) and the posterior AF (+5.9 ms; p < 0.001). T2 values in the posterior AF showed the largest degree of variation among the 5 ROIs, but this was not statistically significant (p = 0.414). Discs with initially low T2 values in the central NP showed a smaller degree of variation in the anterior NP and in the central NP than discs with initially high T2 values in the central NP (10.0% vs. 16.1%, p = 0.037; 6.4% vs. 16.1%, p = 0.006, respectively). Segmental quantitative T2 MRI provides valuable insights into physiological aspects of normal discs.
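
    The paired 2-tailed t test used above reduces to a simple statistic on the per-disc morning-evening differences: the mean difference divided by its standard error. A minimal NumPy sketch with hypothetical T2 values (illustrative numbers, not the study's data):

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic: mean of the differences over its standard error."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)  # sample SD of differences / sqrt(n)
    return d.mean() / se

# Hypothetical morning vs. evening T2 values (ms) for five discs.
morning = [110.0, 105.0, 118.0, 102.0, 96.0]
evening = [95.0, 90.0, 101.0, 88.0, 84.0]
t = paired_t(morning, evening)
print(round(t, 2))  # positive t: T2 decreased from morning to evening
```

    The two-tailed p value is then looked up against a t distribution with n - 1 degrees of freedom.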

  8. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation.

    Science.gov (United States)

    Vega, Sebastián L; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence, parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single-cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised for either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative "imaging-derived" parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Qualitative Content Analysis

    Directory of Open Access Journals (Sweden)

    Satu Elo

    2014-02-01

    Full Text Available Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies, our own experiences, and methodological textbooks. Trustworthiness is described for the main qualitative content analysis phases, from data collection to reporting of the results. We conclude that it is important to scrutinize the trustworthiness of every phase of the analysis process, including the preparation, organization, and reporting of results. Together, these phases should give a reader a clear indication of the overall trustworthiness of the study. Based on our findings, we compiled a checklist for researchers attempting to improve the trustworthiness of a content analysis study. The discussion in this article helps to clarify how content analysis should be reported in a valid and understandable manner, which would be of particular benefit to reviewers of scientific articles. Furthermore, we note that it is often difficult to evaluate the trustworthiness of qualitative content analysis studies because of defective descriptions of the data collection method and/or the analysis.

  10. High-content, high-throughput screening for the identification of cytotoxic compounds based on cell morphology and cell proliferation markers.

    Directory of Open Access Journals (Sweden)

    Heather L Martin

    Full Text Available Toxicity is a major cause of failure in drug discovery and development, and whilst robust toxicological testing occurs, efficiency could be improved if compounds with cytotoxic characteristics were identified during primary compound screening. The use of high-content imaging in primary screening is becoming more widespread, and by utilising phenotypic approaches it should be possible to incorporate cytotoxicity counter-screens into primary screens. Here we present a novel phenotypic assay that can be used as a counter-screen to identify compounds with adverse cellular effects. This assay has been developed using U2OS cells, the PerkinElmer Operetta high-content/high-throughput imaging system and Columbus image analysis software. In Columbus, algorithms were devised to identify changes in nuclear morphology, cell shape and proliferation using DAPI, TOTO-3 and phosphohistone H3 staining, respectively. The algorithms were developed and tested on cells treated with doxorubicin, taxol and nocodazole. The assay was then used to screen a novel chemical library of over 300 compounds, rich in natural product-like molecules; 13.6% of these were identified as having adverse cellular effects. This assay provides a relatively cheap and rapid approach for identifying compounds with adverse cellular effects during screening assays, potentially reducing compound rejection due to toxicity in subsequent in vitro and in vivo assays.

  11. Content dependent selection of image enhancement parameters for mobile displays

    Science.gov (United States)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) contents have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preferences. The relationships between objective content measures and the optimal values of the image control parameters are modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are then determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
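
    The lookup-table step described above amounts to interpolating a measured content statistic against experimentally preferred parameter values. A hedged sketch follows; the table entries and names are invented placeholders, not values derived from the paper's visual experiments:

```python
import numpy as np

# Hypothetical lookup table from human-visual-experiment results:
# objective sharpness measure (x) -> preferred sharpening gain (y).
SHARPNESS_MEASURE = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
PREFERRED_GAIN    = np.array([1.8, 1.5, 1.2, 1.05, 1.0])

def select_gain(measure):
    """Content-dependent parameter selection: linear interpolation of the LUT."""
    return float(np.interp(measure, SHARPNESS_MEASURE, PREFERRED_GAIN))

print(select_gain(0.35))  # blurrier content receives a stronger sharpening gain
```

    Separate tables of the same form would cover colorfulness and noise-reduction strength.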

  12. High Throughput In vivo Analysis of Plant Leaf Chemical Properties Using Hyperspectral Imaging

    Directory of Open Access Journals (Sweden)

    Piyush Pandey

    2017-08-01

    Full Text Available Image-based high-throughput plant phenotyping in greenhouse has the potential to relieve the bottleneck currently presented by phenotypic scoring which limits the throughput of gene discovery and crop improvement efforts. Numerous studies have employed automated RGB imaging to characterize biomass and growth of agronomically important crops. The objective of this study was to investigate the utility of hyperspectral imaging for quantifying chemical properties of maize and soybean plants in vivo. These properties included leaf water content, as well as concentrations of macronutrients nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg), calcium (Ca), and sulfur (S), and micronutrients sodium (Na), iron (Fe), manganese (Mn), boron (B), copper (Cu), and zinc (Zn). Hyperspectral images were collected from 60 maize and 60 soybean plants, each subjected to varying levels of either water deficit or nutrient limitation stress with the goal of creating a wide range of variation in the chemical properties of plant leaves. Plants were imaged on an automated conveyor belt system using a hyperspectral imager with a spectral range from 550 to 1,700 nm. Images were processed to extract reflectance spectrum from each plant and partial least squares regression models were developed to correlate spectral data with chemical data. Among all the chemical properties investigated, water content was predicted with the highest accuracy [R2 = 0.93 and RPD (Ratio of Performance to Deviation) = 3.8]. All macronutrients were also quantified satisfactorily (R2 from 0.69 to 0.92, RPD from 1.62 to 3.62), with N predicted best followed by P, K, and S. The micronutrients group showed lower prediction accuracy (R2 from 0.19 to 0.86, RPD from 1.09 to 2.69) than the macronutrient group. Cu and Zn were best predicted, followed by Fe and Mn. Na and B were the only two properties that hyperspectral imaging was not able to quantify satisfactorily (R2 < 0.3 and RPD < 1.2). 
This study suggested
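
    The RPD metric reported above is the standard deviation of the reference (wet-chemistry) values divided by the RMSE of the model predictions, so higher values indicate a model that resolves far more variation than it leaves as error. A minimal sketch with hypothetical values (not the study's data):

```python
import numpy as np

def rpd(y_true, y_pred):
    """Ratio of Performance to Deviation: SD of reference values over RMSE."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return y_true.std(ddof=1) / rmse

# Hypothetical leaf water contents (%) and model predictions.
measured  = [78.0, 81.0, 85.0, 90.0, 93.0]
predicted = [79.0, 80.0, 86.0, 89.0, 94.0]
print(round(rpd(measured, predicted), 2))
```

    By common chemometrics convention, RPD above roughly 3 is considered reliable for quantification, which matches how the water-content result (RPD = 3.8) is read here.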

  13. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    Science.gov (United States)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
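
    The general idea of content-based fusion, weighting each view by a Gaussian-smoothed measure of local detail so sharp regions dominate blurred ones, can be sketched as follows. This is a generic illustration of the principle under our own assumptions (squared high-pass response as the detail measure), not the authors' implementation:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian filtering via 1D convolutions along each axis."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()  # normalized kernel: blurring a constant leaves it unchanged
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, "valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, "valid"), 1, tmp)

def fuse(views, sigma=2.0, eps=1e-8):
    """Weight each view by smoothed local detail energy, then average."""
    weights = []
    for v in views:
        contrast = (v - gaussian_blur(v, sigma)) ** 2   # local detail energy
        weights.append(gaussian_blur(contrast, sigma) + eps)
    weights = np.array(weights)
    weights /= weights.sum(axis=0)                      # normalize per pixel
    return (np.array(views) * weights).sum(axis=0)
```

    Because the weights form a per-pixel convex combination, featureless regions fall back to a plain average while detailed regions are taken from whichever view is sharpest there; replacing local entropy with Gaussian filtering is what makes this fast.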

  14. CellProfiler and KNIME: open source tools for high content screening.

    Science.gov (United States)

    Stöter, Martin; Niederlein, Antje; Barsacchi, Rico; Meyenhofer, Felix; Brandl, Holger; Bickle, Marc

    2013-01-01

    High content screening (HCS) has established itself in the pharmaceutical industry as an essential tool for drug discovery and drug development. HCS is currently starting to enter the academic world and might become a widely used technology. Given the diversity of problems tackled in academic research, HCS could experience some profound changes in the future, mainly with more imaging modalities and smart microscopes being developed. Two of the obstacles to the establishment of HCS in academia are flexibility and cost. Flexibility is important for adapting the HCS setup to accommodate the multiple different assays typical of academia. Many cost factors cannot be avoided, but the cost of the software packages necessary to analyze large datasets can be reduced by using Open Source software. We present and discuss the Open Source software CellProfiler for image analysis and KNIME for data analysis and data mining, which provide software solutions that increase flexibility and keep costs low.

  15. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  16. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available The study of image content is an important topic through which reasonable and accurate analysis of images can be achieved. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life, and these crowded media have made image analysis a highlighted research area. In this paper, the implemented system passes through several steps to compute the statistical measures of standard deviation and mean for both color and grey images; the last step of the proposed method compares the results obtained in the different cases of the test phase. The statistical parameters are used to characterize the content of an image and its texture: standard deviation, mean, and correlation values describe the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean values with the implemented system. The major issue addressed in this work is brightness distribution, studied via statistical measures under different types of lighting.
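
    The statistical measures named above (mean, standard deviation, correlation) are direct array operations. A minimal NumPy sketch on a synthetic image, with our own helper names:

```python
import numpy as np

def image_stats(img):
    """Per-channel mean and standard deviation over the spatial axes."""
    img = np.asarray(img, dtype=float)
    return img.mean(axis=(0, 1)), img.std(axis=(0, 1))

def channel_correlation(a, b):
    """Pearson correlation between two equally shaped channels/images."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])

# Synthetic 4x4 grayscale ramp: brightness increases left to right.
ramp = np.tile(np.arange(4, dtype=float), (4, 1))
mean, std = image_stats(ramp)
corr = channel_correlation(ramp, ramp)  # a channel correlates perfectly with itself
print(mean, std, corr)
```

    For a color image of shape (H, W, 3), the same `image_stats` call returns one mean and one standard deviation per channel.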

  17. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    Directory of Open Access Journals (Sweden)

    Cemal Cagatay Bilgin

    Full Text Available BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation.

  18. Quantitative analysis of receptor-mediated uptake and pro-apoptotic activity of mistletoe lectin-1 by high content imaging.

    Science.gov (United States)

    Beztsinna, N; de Matos, M B C; Walther, J; Heyder, C; Hildebrandt, E; Leneweit, G; Mastrobattista, E; Kok, R J

    2018-02-09

    Ribosome inactivating proteins (RIPs) are highly potent cytotoxins that have potential as anticancer therapeutics. Mistletoe lectin 1 (ML1) is a heterodimeric cytotoxic protein isolated from European Mistletoe and belongs to RIP class II. The aim of this project was to systematically study ML1 cell binding, endocytosis pathway(s), subcellular processing and apoptosis activation. For this purpose, state-of-the-art cell imaging equipment and automated image analysis algorithms were used. ML1 displayed very fast binding to sugar residues on the membrane and energy-dependent uptake in CT26 cells. Co-staining with specific antibodies and uptake blocking experiments revealed involvement of both clathrin-dependent and -independent pathways in ML1 endocytosis. Co-localization studies demonstrated transport of the toxin from early endocytic vesicles to the Golgi network, a retrograde route to the endoplasmic reticulum. The pro-apoptotic and antiproliferative activities of ML1 were shown in time-lapse movies and subsequently quantified. ML1 cytotoxicity was less affected in the multidrug-resistant tumor cell line 4T1, in contrast to a commonly used chemotherapeutic drug (ML1 resistance index 6.9 vs. 13.4 for doxorubicin; IC50: ML1 1.4 ng/ml vs. doxorubicin 24,000 ng/ml). This opens new opportunities for the use of ML1 as an alternative treatment in multidrug-resistant cancers.

  19. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. The important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the type of need may change during the retrieval process. (2) CBIR is suitable for the example-type query, text retrieval is suitable for the scenario-type query, and image browsing is suitable for the symbolic query. (3) Unlike in text retrieval, a detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World Wide Web. [Article content in Chinese]

  20. Retinal image quality assessment based on image clarity and content

    Science.gov (United States)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
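
    A wavelet-based sharpness feature of the general kind described above can be sketched with a one-level Haar transform: a sharp image concentrates more energy in the detail subbands than a blurred one. This illustrates the idea only; the subband-energy ratio below is our own assumption, not the paper's feature set:

```python
import numpy as np

def haar_level1(img):
    """One-level 2D Haar transform: approximation plus 3 detail subbands.
    Expects even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh

def sharpness_score(img, eps=1e-12):
    """Share of signal energy in the detail subbands; higher means sharper."""
    ll, lh, hl, hh = haar_level1(np.asarray(img, float))
    detail = (lh**2).sum() + (hl**2).sum() + (hh**2).sum()
    total = (ll**2).sum() + detail + eps
    return detail / total

flat = np.ones((8, 8))                                      # no edges at all
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # maximal edges
print(sharpness_score(flat), sharpness_score(checker))
```

    In a full quality-assessment pipeline, such a score would be computed per region and fed, together with illumination and homogeneity features, into the classifier.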

  1. FMAj: a tool for high content analysis of muscle dynamics in Drosophila metamorphosis.

    Science.gov (United States)

    Kuleesha, Yadav; Puah, Wee Choo; Lin, Feng; Wasser, Martin

    2014-01-01

    During metamorphosis in Drosophila melanogaster, larval muscles undergo two different developmental fates; one population is removed by cell death, while the other persistent subset undergoes morphological remodeling and survives to adulthood. Thanks to the ability to perform live imaging of muscle development in transparent pupae and the power of genetics, metamorphosis in Drosophila can be used as a model to study the regulation of skeletal muscle mass. However, time-lapse microscopy generates sizeable image data that require new tools for high throughput image analysis. We performed targeted gene perturbation in muscles and acquired 3D time-series images of muscles in metamorphosis using laser scanning confocal microscopy. To quantify the phenotypic effects of gene perturbations, we designed the Fly Muscle Analysis tool (FMAj) which is based on the ImageJ and MySQL frameworks for image processing and data storage, respectively. The image analysis pipeline of FMAj contains three modules. The first module assists in adding annotations to time-lapse datasets, such as genotypes, experimental parameters and temporal reference points, which are used to compare different datasets. The second module performs segmentation and feature extraction of muscle cells and nuclei. Users can provide annotations to the detected objects, such as muscle identities and anatomical information. The third module performs comparative quantitative analysis of muscle phenotypes. We applied our tool to the phenotypic characterization of two atrophy related genes that were silenced by RNA interference. Reduction of Drosophila Tor (Target of Rapamycin) expression resulted in enhanced atrophy compared to control, while inhibition of the autophagy factor Atg9 caused suppression of atrophy and enlarged muscle fibers of abnormal morphology. FMAj enabled us to monitor the progression of atrophic and hypertrophic phenotypes of individual muscles throughout metamorphosis. We designed a new tool to

  3. Determining ice water content from 2D crystal images in convective cloud systems

    Science.gov (United States)

    Leroy, Delphine; Coutris, Pierre; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    Cloud microphysical in-situ instrumentation measures bulk parameters like total water content (TWC) and/or derives particle size distributions (PSD) (utilizing optical spectrometers and optical array probes (OAP)). The goal of this work is to introduce a comprehensive methodology to compute TWC from OAP measurements, based on the dataset collected during the recent HAIC (High Altitude Ice Crystals)/HIWC (High Ice Water Content) field campaigns. Indeed, the HAIC/HIWC field campaigns in Darwin (2014) and Cayenne (2015) provide a unique opportunity to explore the complex relationship between cloud particle mass and size in ice crystal environments. Numerous mesoscale convective systems (MCSs) were sampled with the French Falcon 20 research aircraft at different temperature levels from -10°C up to -50°C. The aircraft instrumentation included an IKP-2 (isokinetic probe) to get reliable measurements of TWC and the optical array probes 2D-S and PIP recording images over the entire ice crystal size range. Based on the known principle relating crystal mass and size with a power law (m = α·D^β), Fontaine et al. (2014) performed extended 3D crystal simulations and thereby demonstrated that it is possible to estimate the value of the exponent β from OAP data, by analyzing the surface-size relationship of the 2D images as a function of time. Leroy et al. (2015) proposed an extended version of this method that produces estimates of β from the analysis of both the surface-size and perimeter-size relationships. Knowing the value of β, α is then deduced from the simultaneous IKP-2 TWC measurements for the entire HAIC/HIWC dataset. The statistical analysis of α and β values for the HAIC/HIWC dataset firstly shows that α is closely linked to β and that this link changes with temperature. From these trends, a generalized parameterization for α is proposed. Finally, the comparison with the initial IKP-2 measurements demonstrates that the method is able to predict TWC values.
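
    Once β is known, deducing the prefactor α from a simultaneous bulk TWC measurement is a one-line inversion of the power law summed over the measured size distribution. A sketch with synthetic data (illustrative units and values, not HAIC/HIWC data):

```python
import numpy as np

def estimate_alpha(twc, diameters, concentrations, beta):
    """Solve TWC = sum(alpha * D^beta * N(D)) for the prefactor alpha."""
    d = np.asarray(diameters, float)
    n = np.asarray(concentrations, float)
    return twc / np.sum(d**beta * n)

# Synthetic round trip: build a PSD, compute TWC from a known (alpha, beta),
# then recover alpha from the bulk TWC alone.
alpha_true, beta = 0.005, 2.1
d = np.linspace(0.1, 5.0, 50)           # crystal sizes (arbitrary units)
n = 100.0 * np.exp(-1.5 * d)            # exponential size distribution N(D)
twc = np.sum(alpha_true * d**beta * n)  # the bulk quantity the IKP-2 measures
print(estimate_alpha(twc, d, n, beta))  # recovers alpha_true
```

    In the campaign analysis, this step is repeated per time interval so that the statistics of α versus β and temperature can be compiled.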

  4. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    Science.gov (United States)

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  5. Multimedia content analysis

    CERN Document Server

    Ohm, Jens

    2016-01-01

    This textbook covers the theoretical background and practical aspects of image, video and audio feature expression, e.g., color, texture, edge, shape, salient points and areas, motion, 3D structure, audio/sound in the time, frequency and cepstral domains, structure and melody. Up-to-date algorithms for the estimation, search, classification and compact expression of feature data are described in detail. Concepts of signal decomposition (such as segmentation, source tracking and separation), as well as composition, mixing, effects, and rendering, are discussed. Numerous figures and examples help to illustrate the aspects covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by problem-solving exercises. The book also serves as a self-contained introduction for researchers and developers of multimedia content analysis systems in industry.

  6. Determination of fat and total protein content in milk using conventional digital imaging.

    Science.gov (United States)

    Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey

    2014-04-01

    The applicability of conventional digital imaging to the quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been demonstrated. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R(2)=0.974 for protein and R(2)=0.973 for fat content). However, when the sets were analysed jointly, the results were significantly worse (best models with cross-validated R(2)=0.890 for fat content and R(2)=0.720 for protein content). The results have been compared with a previously published Vis/SW-NIR spectroscopic study of similar samples. Copyright © 2013 Elsevier B.V. All rights reserved.
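
    The cross-validated regression step can be sketched as below, with ordinary least squares standing in for the paper's PLS regression and synthetic features in place of the milk-image features (both are assumptions for illustration):

```python
import numpy as np

def cv_r2(X, y, k=5):
    """k-fold cross-validated R^2 for a least-squares calibration model.

    Ordinary least squares stands in here for PLS; the workflow
    (hold out folds, predict, pool, score) is the same.
    """
    n = len(y)
    idx = np.arange(n)
    pred = np.empty(n)
    for f in range(k):
        test = idx[f::k]                          # every k-th sample held out
        train = np.setdiff1d(idx, test)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        coef, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        pred[test] = Xte @ coef
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    A cross-validated R² near the values quoted above (0.97) indicates the held-out predictions track the reference chemistry closely.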

  7. Hyperspectral imaging detection of decayed honey peaches based on their chlorophyll content.

    Science.gov (United States)

    Sun, Ye; Wang, Yihang; Xiao, Hui; Gu, Xinzhe; Pan, Leiqing; Tu, Kang

    2017-11-15

    Honey peach is a very common but highly perishable market fruit. When pathogens infect the fruit, chlorophyll, one of the important components related to fruit quality, decreases significantly. Here, the feasibility of hyperspectral imaging for determining chlorophyll content, and thus distinguishing diseased peaches, was investigated. Three optimal wavelengths (617 nm, 675 nm, and 818 nm) were selected according to chlorophyll content via the successive projections algorithm. Partial least squares regression models were established to determine chlorophyll content. Three band ratios were obtained using these optimal wavelengths, which not only improved spatial detail but also integrated the chemical composition information from the spectral characteristics. The band ratio values were suitable for classifying the diseased peaches with 98.75% accuracy and clearly showed the spatial distribution of diseased parts. This study provides a new perspective on the selection of optimal wavelengths for hyperspectral imaging via chlorophyll content, thus enabling the detection of fungal diseases in peaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
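
    A band-ratio classification of this kind might look like the following sketch. The band order, the choice of the 818/675 ratio, and the threshold value are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def band_ratio_map(cube):
    """Ratio of the NIR band (818 nm) to the chlorophyll absorption band (675 nm).

    cube: reflectance array (rows, cols, 3) at the selected wavelengths,
    assumed ordered 617, 675, 818 nm. Healthy tissue keeps strong
    chlorophyll absorption at 675 nm, so the ratio is high; decayed
    tissue loses chlorophyll and the ratio drops.
    """
    r675 = cube[..., 1].astype(float)
    r818 = cube[..., 2].astype(float)
    return r818 / np.maximum(r675, 1e-6)   # guard against division by zero

def classify_decay(cube, threshold=2.0):
    """Binary decay mask: pixels whose 818/675 ratio falls below `threshold`.
    The threshold is illustrative, not the value used in the study."""
    return band_ratio_map(cube) < threshold
```

    Applying the mask per pixel yields the spatial distribution of diseased parts described above.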

  8. A prototype method for diagnosing high ice water content probability using satellite imager data

    Science.gov (United States)

    Yost, Christopher R.; Bedka, Kristopher M.; Minnis, Patrick; Nguyen, Louis; Strapp, J. Walter; Palikonda, Rabindra; Khlopenkov, Konstantin; Spangenberg, Douglas; Smith, William L., Jr.; Protat, Alain; Delanoe, Julien

    2018-03-01

    Recent studies have found that ingestion of high mass concentrations of ice particles in regions of deep convective storms, with radar reflectivity considered safe for aircraft penetration, can adversely impact aircraft engine performance. Previous aviation industry studies have used the term high ice water content (HIWC) to define such conditions. Three airborne field campaigns were conducted in 2014 and 2015 to better understand how HIWC is distributed in deep convection, both as a function of altitude and proximity to convective updraft regions, and to facilitate development of new methods for detecting HIWC conditions, in addition to many other research and regulatory goals. This paper describes a prototype method for detecting HIWC conditions using geostationary (GEO) satellite imager data coupled with in situ total water content (TWC) observations collected during the flight campaigns. Three satellite-derived parameters were determined to be most useful for determining HIWC probability: (1) the horizontal proximity of the aircraft to the nearest overshooting convective updraft or textured anvil cloud, (2) tropopause-relative infrared brightness temperature, and (3) daytime-only cloud optical depth. Statistical fits between collocated TWC and GEO satellite parameters were used to determine the membership functions for the fuzzy logic derivation of HIWC probability. The products were demonstrated using data from several campaign flights and validated using a subset of the satellite-aircraft collocation database. The daytime HIWC probability was found to agree quite well with TWC time trends and identified extreme TWC events with high probability. Discrimination of HIWC was more challenging at night with IR-only information. The products show the greatest capability for discriminating TWC ≥ 0.5 g m⁻³. Product validation remains challenging due to vertical TWC uncertainties and the typically coarse spatio-temporal resolution of the GEO data.
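
    A fuzzy-logic combination of the three predictors can be sketched as below. The trapezoidal membership breakpoints and the equal weighting are illustrative assumptions, not the membership functions fitted to the campaign data:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises over [a, b], is 1 on [b, c],
    falls over [c, d]; a generic fuzzy-logic building block."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

def hiwc_probability(dist_to_updraft_km, trop_rel_bt_k, cloud_od):
    """Fuzzy combination of the three satellite predictors named above.

    The breakpoints below are hypothetical: close to an updraft, colder
    than the tropopause, and optically thick all push membership to 1.
    """
    m1 = trapezoid(dist_to_updraft_km, -1.0, 0.0, 0.0, 50.0)   # near updraft -> high
    m2 = trapezoid(trop_rel_bt_k, -1e3, -999.0, 0.0, 15.0)     # colder than tropopause -> high
    m3 = trapezoid(cloud_od, 20.0, 60.0, 1e6, 1e6 + 1.0)       # optically thick -> high
    return float((m1 + m2 + m3) / 3.0)
```

    In the real product, the shape of each membership function comes from the statistical fits to the collocated TWC data rather than hand-picked breakpoints.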

  9. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method.

    Science.gov (United States)

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-02-01

    Objective: To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods: TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results: Both assays provided good linearity, accuracy, reproducibility and selectivity for the determination of γ-oryzanol. The TLC-densitometric and TLC-image analysis methods provided similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between the TLC-densitometric and TLC-image analysis methods. Conclusions: As both methods were found to be equivalent, either can be used for the determination of γ-oryzanol in cold pressed rice bran oil.

  10. Feasibility analysis of high resolution tissue image registration using 3-D synthetic data

    Directory of Open Access Journals (Sweden)

    Yachna Sharma

    2011-01-01

    Background: Registration of high-resolution tissue images is a critical step in the 3D analysis of protein expression. Because the distance between images (~4-5 μm, the thickness of a tissue section) is nearly the size of the objects of interest (~10-20 μm, a cancer cell nucleus), a given object is often not present in both of two adjacent images. Without consistent correspondence of objects between images, registration becomes a difficult task. This work assesses the feasibility of current registration techniques for such images. Methods: We generated high-resolution synthetic 3-D image data sets emulating the constraints in real data. We applied multiple registration methods to the synthetic image data sets and assessed the registration performance of three techniques (mutual information (MI), the kernel density estimate (KDE) method [1], and principal component analysis (PCA)) at various slice thicknesses (in increments of 1 μm) in order to quantify the limitations of each method. Results: Our analysis shows that PCA, when combined with the KDE method based on nuclei centers, aligns images corresponding to 5 μm thick sections with acceptable accuracy. We also note that registration error increases rapidly with increasing distance between images, and that the choice of feature points which are conserved between slices improves performance. Conclusions: We used simulation to help select appropriate features and methods for image registration by estimating best-case-scenario errors for given data constraints in histological images. The results of this study suggest that much of the difficulty of stained tissue registration can be reduced to the problem of accurately identifying feature points, such as the centers of nuclei.

  11. Commentary: Roles for Pathologists in a High-throughput Image Analysis Team.

    Science.gov (United States)

    Aeffner, Famke; Wilson, Kristin; Bolon, Brad; Kanaly, Suzanne; Mahrt, Charles R; Rudmann, Dan; Charles, Elaine; Young, G David

    2016-08-01

    Historically, pathologists perform manual evaluation of H&E- or immunohistochemically-stained slides, which can be subjective, inconsistent, and, at best, semiquantitative. As the complexity of staining and the demand for increased precision of manual evaluation grow, the pathologist's assessment will increasingly include automated analyses (i.e., "digital pathology") to increase the accuracy, efficiency, and speed of diagnosis and hypothesis testing, and to serve as an important biomedical research and diagnostic tool. This commentary introduces the many roles for pathologists in designing and conducting high-throughput digital image analysis. Pathology review is central to the entire course of a digital pathology study, including experimental design, sample quality verification, specimen annotation, analytical algorithm development, and report preparation. The pathologist performs these roles by reviewing work undertaken by technicians and scientists with training and expertise in image analysis instruments and software. These roles require regular, face-to-face interactions between team members and the lead pathologist. Traditional pathology training is suitable preparation for entry-level participation on image analysis teams. The future of pathology is very exciting, with digital image analysis set to expand pathology roles in research and drug development, bringing new career opportunities for pathologists. © 2016 by The Author(s).

  12. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the three images to spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
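
    The whole-image PCA idea can be sketched with an SVD: stack one image per inversion time, centre each pixel's recovery curve, and decompose, so spatially correlated pixels are analysed jointly instead of being fit one at a time. The stack shape and naming below are assumptions for illustration:

```python
import numpy as np

def pca_images(stack):
    """PCA of an inversion-recovery image series.

    stack: array (n_TI, ny, nx), one image per inversion time.
    Returns (eigenimages, recovery_curves): spatial component images and
    the corresponding temporal loadings across inversion times.
    """
    n_ti = stack.shape[0]
    X = stack.reshape(n_ti, -1)          # rows = inversion times, columns = pixels
    X = X - X.mean(axis=0)               # centre each pixel's recovery curve
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigenimages = Vt.reshape(n_ti, *stack.shape[1:])
    recovery_curves = U * s              # scaled temporal loadings
    return eigenimages, recovery_curves
```

    For data generated by a small number of tissue compartments (here GM, WM, CSF), a few components capture nearly all of the variance.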

  13. CEST ANALYSIS: AUTOMATED CHANGE DETECTION FROM VERY-HIGH-RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    M. Ehlers

    2012-08-01

    A fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellites and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye), new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential to be a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain, and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location.
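
    The FFT band-pass filtering and differencing steps of such a pipeline might be sketched as follows. The cutoff radii and the robust threshold are illustrative, not CEST's tuned values:

```python
import numpy as np

def bandpass_fft(img, low, high):
    """Keep spatial frequencies with radius in (low, high) cycles per image;
    a simplified stand-in for the band-pass filtering step."""
    F = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2, x - nx / 2)   # radial frequency of each bin
    mask = (r > low) & (r < high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def change_mask(img_t1, img_t2, low=2, high=30, k=3.0):
    """Difference the band-passed images and threshold at k robust sigmas
    (median absolute deviation) above the median difference."""
    d = np.abs(bandpass_fft(img_t1, low, high) - bandpass_fft(img_t2, low, high))
    sigma = np.median(np.abs(d - np.median(d))) / 0.6745 + 1e-12
    return d > np.median(d) + k * sigma
```

    A full CEST-style system would combine such a frequency-domain result with texture and segmentation evidence in a rule-based decision tree.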

  14. Estimation of water content in the leaves of fruit trees using infra-red images

    International Nuclear Information System (INIS)

    Muramatsu, N.; Hiraoka, K.

    2006-01-01

    A method was developed to evaluate the water content of fruit trees using infra-red photography. Irrigation of potted satsuma mandarin trees and grapevines was withheld to induce water stress. During the drought treatment, the leaf edges on the basal parts of the grapevine shoots became necrotic, and the area of necrosis extended as the duration of stress increased. Necrosis was clearly distinguished from viable areas on infra-red images. In satsuma mandarin, an abscission layer formed at the basal part of the petiole, and the leaves then fell. Thus, detailed analysis was indispensable for detecting the leaf water content. After obtaining infra-red images of satsuma mandarin leaves with or without water stress, a background treatment (subtraction of the background image) was performed on the images, and the average brightness of the leaf was determined using image analysis software (Image-Pro Plus). The correlation coefficient between the water status index from the infra-red camera and the water content determined from the dry and fresh weights of the leaves was significant (r = 0.917 for adaxial surface data and r = 0.880 for abaxial surface data). These data indicate that infra-red photography is useful for detecting the degree of plant water stress.

  15. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Recently, many researchers in the field of automatic content-based image retrieval have devoted considerable effort to methods for retrieving the images most relevant to a query image. This paper presents a novel algorithm for increasing precision in content-based image retrieval based on an electromagnetism optimization technique. Electromagnetism optimization is a nature-inspired technique that follows a collective attraction-repulsion mechanism, treating each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and the electromagnetism optimization technique. It is implemented on a database of 8,000 images spread across 80 classes, with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach have shown significant improvement in retrieval performance with regard to precision.

  16. Quantitative image of bone mineral content

    International Nuclear Information System (INIS)

    Katoh, Tsuguhisa

    1990-01-01

    A dual energy subtraction system was constructed on an experimental basis for the quantitative imaging of bone mineral content. The system consists of a radiographing system and an image processor. Two radiograms were taken with dual x-ray energy in a single exposure, using an x-ray beam dichromized by a tin filter. In this system, a film cassette was used in which a low speed film-screen system, a copper filter and a high speed film-screen system were layered on top of each other. The images were read by a microdensitometer and processed by a personal computer. The image processing included corrections for the film characteristics and for heterogeneity in the x-ray field, and the dual energy subtraction, in which the effect of the high energy component of the dichromized beam on the tube-side image was corrected. In order to determine the accuracy of the system, experiments using wedge phantoms made of mixtures of epoxy resin and bone mineral-equivalent materials in various fractions were performed for various tube potentials and film processing conditions. The results indicated that the relative precision of the system was within ±4% and that the propagation of the film noise was within ±11 mg/cm² for the 0.2 mm pixels. The results also indicated that the system response was independent of the tube potential and the film processing condition. The bone mineral weight in each phalanx of the freshly dissected hand of a rhesus monkey was measured by this system and compared with the ash weight. The results showed an error of ±10%, slightly larger than that of the phantom experiments, which is probably due to the effect of fat and the variation of the focus-object distance. The air kerma in free air at the object was approximately 0.5 mGy for one exposure. The results indicate that this system is applicable to clinical use and provides useful information for evaluating the time-course of localized bone disease. (author)

  17. Bread Water Content Measurement Based on Hyperspectral Imaging

    DEFF Research Database (Denmark)

    Liu, Zhi; Møller, Flemming

    2011-01-01

    Water content is one of the most important properties of bread for taste assessment or storage monitoring. Traditional bread water content measurement methods are mostly manual, which is destructive and time consuming. This paper proposes an automated water content measurement… for bread quality based on near-infrared hyperspectral imaging, against the conventional manual loss-in-weight method. For this purpose, hyperspectral components unmixing technology is used for measuring the water content quantitatively. A definition of a bread water content index is also presented…

  18. Collagen Content Limits Optical Coherence Tomography Image Depth in Porcine Vocal Fold Tissue.

    Science.gov (United States)

    Garcia, Jordan A; Benboujja, Fouzi; Beaudette, Kathy; Rogers, Derek; Maurer, Rie; Boudoux, Caroline; Hartnick, Christopher J

    2016-11-01

    Vocal fold scarring, a condition defined by increased collagen content, is challenging to treat without a method of noninvasively assessing vocal fold structure in vivo. The goal of this study was to observe the effects of vocal fold collagen content on optical coherence tomography imaging to develop a quantifiable marker of disease. Study design: excised specimen study. Setting: Massachusetts Eye and Ear Infirmary. Porcine vocal folds were injected with collagenase to remove collagen from the lamina propria. Optical coherence tomography imaging was performed preinjection and at 0, 45, 90, and 180 minutes postinjection. Mean pixel intensity (or image brightness) was extracted from images of collagenase- and control-treated hemilarynges. Texture analysis of the lamina propria at each injection site was performed to extract image contrast. Two-factor repeated-measures analysis of variance and t tests were used to determine statistical significance. Picrosirius red staining was performed to confirm collagenase activity. Mean pixel intensity was significantly higher at injection sites of collagenase-treated vocal folds than of control vocal folds. Fold change in image contrast was significantly increased in collagenase-treated vocal folds compared with control vocal folds (P = .002). Picrosirius red staining in control specimens revealed collagen fibrils most prominent in the subepithelium and above the thyroarytenoid muscle. Specimens treated with collagenase exhibited a loss of these structures. Collagen removal from vocal fold tissue increases the image brightness of underlying structures. This inverse relationship may be useful in treating vocal fold scarring in patients. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
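
    The two image features used above, mean pixel intensity and texture contrast, can be sketched with a minimal grey-level co-occurrence computation. The quantisation level and horizontal-neighbour offset are assumptions, not the study's settings:

```python
import numpy as np

def features(img, levels=8):
    """Return (mean pixel intensity, GLCM contrast) for an 8-bit image.

    Contrast is computed from a grey-level co-occurrence matrix of
    horizontal neighbour pairs after quantising to `levels` grey levels.
    """
    mpi = img.mean()
    q = (img.astype(float) / 256.0 * levels).astype(int)  # quantise to 0..levels-1
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()            # horizontal pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)                            # accumulate co-occurrences
    glcm /= glcm.sum()                                    # normalise to probabilities
    i, j = np.indices(glcm.shape)
    contrast = float(((i - j) ** 2 * glcm).sum())         # Haralick contrast
    return float(mpi), contrast
```

    A flat region yields zero contrast, while rapidly alternating grey levels yield high contrast, which is the behaviour the fold-change comparison above relies on.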

  19. High Throughput In vivo Analysis of Plant Leaf Chemical Properties Using Hyperspectral Imaging

    Science.gov (United States)

    Pandey, Piyush; Ge, Yufeng; Stoerger, Vincent; Schnable, James C.

    2017-01-01

    Image-based high-throughput plant phenotyping in greenhouse has the potential to relieve the bottleneck currently presented by phenotypic scoring which limits the throughput of gene discovery and crop improvement efforts. Numerous studies have employed automated RGB imaging to characterize biomass and growth of agronomically important crops. The objective of this study was to investigate the utility of hyperspectral imaging for quantifying chemical properties of maize and soybean plants in vivo. These properties included leaf water content, as well as concentrations of macronutrients nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg), calcium (Ca), and sulfur (S), and micronutrients sodium (Na), iron (Fe), manganese (Mn), boron (B), copper (Cu), and zinc (Zn). Hyperspectral images were collected from 60 maize and 60 soybean plants, each subjected to varying levels of either water deficit or nutrient limitation stress with the goal of creating a wide range of variation in the chemical properties of plant leaves. Plants were imaged on an automated conveyor belt system using a hyperspectral imager with a spectral range from 550 to 1,700 nm. Images were processed to extract reflectance spectrum from each plant and partial least squares regression models were developed to correlate spectral data with chemical data. Among all the chemical properties investigated, water content was predicted with the highest accuracy [R2 = 0.93 and RPD (Ratio of Performance to Deviation) = 3.8]. All macronutrients were also quantified satisfactorily (R2 from 0.69 to 0.92, RPD from 1.62 to 3.62), with N predicted best followed by P, K, and S. The micronutrients group showed lower prediction accuracy (R2 from 0.19 to 0.86, RPD from 1.09 to 2.69) than the macronutrient groups. Cu and Zn were best predicted, followed by Fe and Mn. 
    Na and B were the only two properties that hyperspectral imaging was not able to quantify satisfactorily.
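
    The RPD figure of merit quoted throughout this abstract is simply the standard deviation of the reference values divided by the prediction RMSE; a minimal sketch:

```python
import numpy as np

def rpd(y_true, y_pred):
    """Ratio of Performance to Deviation: SD of the reference values
    divided by the RMSE of the predictions. Higher is better; values
    around 3 or more indicate a usable calibration."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return y_true.std(ddof=1) / rmse
```

    For example, the water-content model above (RPD = 3.8) predicts with an error well under a third of the natural spread of the reference measurements.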

  20. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    OpenAIRE

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective: To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods: TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results: Both assays provided good linearity, accuracy, reproducibility and selectivity for dete...

  1. Automated analysis of heterogeneous carbon nanostructures by high-resolution electron microscopy and on-line image processing

    International Nuclear Information System (INIS)

    Toth, P.; Farrer, J.K.; Palotas, A.B.; Lighty, J.S.; Eddings, E.G.

    2013-01-01

    High-resolution electron microscopy is an efficient tool for characterizing heterogeneous nanostructures; however, currently the analysis is a laborious and time-consuming manual process. In order to be able to accurately and robustly quantify heterostructures, one must obtain a statistically high number of micrographs showing images of the appropriate sub-structures. The second step of analysis is usually the application of digital image processing techniques in order to extract meaningful structural descriptors from the acquired images. In this paper it will be shown that by applying on-line image processing and basic machine vision algorithms, it is possible to fully automate the image acquisition step; therefore, the number of acquired images in a given time can be increased drastically without the need for additional human labor. The proposed automation technique works by computing fields of structural descriptors in situ and thus outputs sets of the desired structural descriptors in real-time. The merits of the method are demonstrated by using combustion-generated black carbon samples. - Highlights: ► The HRTEM analysis of heterogeneous nanostructures is a tedious manual process. ► Automatic HRTEM image acquisition and analysis can improve data quantity and quality. ► We propose a method based on on-line image analysis for the automation of HRTEM image acquisition. ► The proposed method is demonstrated using HRTEM images of soot particles

  2. Thermogravimetric Analysis of Effects of High-Content Limestone Addition on Combustion Characteristics of Taixi Anthracite

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hong; LI Mei; SUN Min; WEI Xian-yong

    2004-01-01

    The combustion characteristics of Taixi anthracite admixed with a high content of limestone were investigated by thermogravimetric analysis. The results show that, on the whole, limestone addition has only a slight promoting effect on the ignition of the raw coal. The addition of limestone is found to significantly accelerate the combustion and burnout of the raw coal; the higher the sample mass, the more significant the effect. The results also show that varying the limestone proportion between 45% and 80% has little effect on the ignition temperatures of the coal in the blended samples. Increasing the limestone content lowers the temperature corresponding to the maximum weight loss. Although higher maximum mass loss rates are observed at higher limestone contents, this effect is ascribed not to the change in limestone addition itself but to the decrease of the absolute coal mass in the sample. The change in limestone proportion has little effect on its burnout temperature. Mechanism analysis indicates that these phenomena result mainly from improved heat conduction due to the limestone addition.

  3. Backscattering analysis of high frequency ultrasonic imaging for ultrasound-guided breast biopsy

    Science.gov (United States)

    Cummins, Thomas; Akiyama, Takahiro; Lee, Changyang; Martin, Sue E.; Shung, K. Kirk

    2017-03-01

    A new ultrasound-guided breast biopsy technique is proposed. The technique utilizes conventional ultrasound guidance coupled with a high frequency embedded ultrasound array located within the biopsy needle to improve the accuracy of breast cancer diagnosis [1]. The array within the needle is intended to detect microcalcifications indicative of early breast cancers such as ductal carcinoma in situ (DCIS). Backscattering analysis has the potential to characterize tissues and thereby improve the localization of lesions. This paper describes initial results of the application of backscattering analysis to breast biopsy tissue specimens and shows the usefulness of high frequency ultrasound for the new biopsy-related technique. Ultrasound echoes of ex-vivo breast biopsy tissue specimens were acquired using a single-element transducer with a bandwidth from 41 MHz to 88 MHz utilizing a UBM methodology, and the backscattering coefficients were calculated. These values, as well as B-mode image data, were mapped in 2D and matched with the pathology image corresponding to each plane for the identification of tissue type. Microcalcifications were clearly distinguished from normal tissue. Adenocarcinoma was also successfully differentiated from adipose tissue. These results indicate that backscattering analysis can quantitatively classify tissues as normal or abnormal, which should help radiologists locate abnormal areas during the proposed ultrasound-guided breast biopsy with high frequency ultrasound.

  4. Assessing cellular toxicities in fibroblasts upon exposure to lipid-based nanoparticles: a high content analysis approach

    International Nuclear Information System (INIS)

    Solmesky, Leonardo J; Weil, Miguel; Shuman, Michal; Goldsmith, Meir; Peer, Dan

    2011-01-01

    Lipid-based nanoparticles (LNPs) are widely used for the delivery of drugs and nucleic acids. Although most of them are considered safe, there is confusing evidence in the literature regarding their potential cellular toxicities. Moreover, little is known about the recovery process cells undergo after a cytotoxic insult. We have previously studied the systemic effects of common LNPs with different surface charge (cationic, anionic, neutral) and revealed that positively charged LNPs ((+)LNPs) activate pro-inflammatory cytokines and induce interferon response by acting as an agonist of Toll-like receptor 4 on immune cells. In this study, we focused on the response of human fibroblasts exposed to LNPs and their cellular recovery process. To this end, we used image-based high content analysis (HCA). Using this strategy, we were able to show simultaneously, in several intracellular parameters, that fibroblasts can recover from the cytotoxic effects of (+)LNPs. The use of HCA opens new avenues in understanding cellular response and nanotoxicity and may become a valuable tool for screening safe materials for drug delivery and tissue engineering.

  5. Assessing cellular toxicities in fibroblasts upon exposure to lipid-based nanoparticles: a high content analysis approach

    Science.gov (United States)

    Solmesky, Leonardo J.; Shuman, Michal; Goldsmith, Meir; Weil, Miguel; Peer, Dan

    2011-12-01

    Lipid-based nanoparticles (LNPs) are widely used for the delivery of drugs and nucleic acids. Although most of them are considered safe, there is confusing evidence in the literature regarding their potential cellular toxicities. Moreover, little is known about the recovery process cells undergo after a cytotoxic insult. We have previously studied the systemic effects of common LNPs with different surface charge (cationic, anionic, neutral) and revealed that positively charged LNPs ((+)LNPs) activate pro-inflammatory cytokines and induce interferon response by acting as an agonist of Toll-like receptor 4 on immune cells. In this study, we focused on the response of human fibroblasts exposed to LNPs and their cellular recovery process. To this end, we used image-based high content analysis (HCA). Using this strategy, we were able to show simultaneously, in several intracellular parameters, that fibroblasts can recover from the cytotoxic effects of (+)LNPs. The use of HCA opens new avenues in understanding cellular response and nanotoxicity and may become a valuable tool for screening safe materials for drug delivery and tissue engineering.

  6. A Categorical Content Analysis of Highly Cited Literature Related to Trends and Issues in Special Education.

    Science.gov (United States)

    Arden, Sarah V; Pentimonti, Jill M; Cooray, Rochana; Jackson, Stephanie

    2017-07-01

    This investigation employs categorical content analysis processes as a mechanism to examine trends and issues in a sampling of highly cited (100+) literature in special education journals. The authors had two goals: (a) broadly identifying trends across publication type, content area, and methodology and (b) specifically identifying articles with disaggregated outcomes for students with learning disabilities (LD). Content analyses were conducted across highly cited (100+) articles published during a 20-year period (1992-2013) in a sample ( n = 3) of journals focused primarily on LD, and in one broad, cross-categorical journal recognized for its impact in the field. Results indicated trends in the article type (i.e., commentary and position papers), content (i.e., reading and behavior), and methodology (i.e., small proportions of experimental and quasi-experimental designs). Results also revealed stability in the proportion of intervention research studies when compared to previous analyses and a decline in the proportion of those that disaggregated data specifically for students with LD.

  7. High Throughput Neuro-Imaging Informatics

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2013-12-01

    Full Text Available This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into their high dimensional neuroinformatic representations, an index containing O(10^3-10^4) discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science including genomics as well as social networks. Such high throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.

  8. Study of fish response using particle image velocimetry and high-speed, high-resolution imaging

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Richmond, M. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mueller, R. P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gruensch, G. R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-10-01

    Fish swimming has fascinated both engineers and fish biologists for decades. Digital particle image velocimetry (DPIV) and high-speed, high-resolution digital imaging are recently developed analysis tools that can help engineers and biologists better understand how fish respond to turbulent environments. This report details studies to evaluate DPIV. The studies included a review of existing literature on DPIV, preliminary studies to test the feasibility of using DPIV conducted at our Flow Biology Laboratory in Richland, Washington September through December 2003, and applications of high-speed, high-resolution digital imaging with advanced motion analysis to investigations of fish injury mechanisms in turbulent shear flows and bead trajectories in laboratory physical models. Several conclusions were drawn based on these studies, which are summarized as recommendations for proposed research at the end of this report.

  9. Content-based quality evaluation of color images: overview and proposals

    Science.gov (United States)

    Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine

    2003-12-01

    The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that sets up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of the main analysis criteria and performance measures currently used. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, firstly to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and secondly to make these results publicly available. This paper focuses on two important unsolved problems. Why is it so difficult to select a color space that gives better results than another one? Why is it so difficult to select an image quality metric that agrees better with the judgment of the Human Visual System than another one? Several methods used either in color imaging or in image quality are thus discussed. Proposals for content-based image measures and means of developing a standard test suite are then presented. These considerations advocate for an evaluation protocol based on an automated procedure. This is the ultimate goal of our proposal.

  10. Nanoparticulate NaA zeolite composites for MRI: Effect of iron oxide content on image contrast

    Science.gov (United States)

    Gharehaghaji, Nahideh; Divband, Baharak; Zareei, Loghman

    2018-06-01

    In the current study, Fe3O4/NaA nanocomposites with various amounts of Fe3O4 (3.4, 6.8 & 10.2 wt%) were synthesized and characterized to study the effect of nano iron oxide content on the magnetic resonance (MR) image contrast. The cell viability of the nanocomposites was investigated by the MTT assay method. T2 values as well as r2 relaxivities were determined with a 1.5 T MRI scanner. The results of the MTT assay confirmed the nanocomposites' cytocompatibility up to 6.8% iron oxide content. Although the magnetization saturations and susceptibility values of the nanocomposites increased as a function of the iron oxide content, their relaxivity decreased from 921.78 mM-1 s-1 for the nanocomposite with the lowest iron oxide content to 380.16 mM-1 s-1 for the highest one. Therefore, the Fe3O4/NaA nanocomposite with 3.4% iron oxide content led to the best MR image contrast. Nano iron oxide content and its dispersion in the nanocomposite structure play an important role in the nanocomposite r2 relaxivity and the MR image contrast. Aggregation of the iron oxide nanoparticles is a limiting factor in the use of nanocomposites with high iron oxide content.
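    The r2 relaxivities quoted above are, by definition, the slope of the transverse relaxation rate R2 = 1/T2 plotted against contrast-agent concentration. A minimal sketch of that fit, using entirely synthetic numbers (not the paper's measurements):

```python
import numpy as np

# Hypothetical phantom series: Fe3O4/NaA concentration in mM and measured T2 in s
conc = np.array([0.01, 0.02, 0.04, 0.08])      # mM
T2 = 1.0 / (0.5 + 380.0 * conc)                # synthetic T2 values for the demo

# r2 relaxivity is the slope of the relaxation rate R2 = 1/T2 vs. concentration
R2_rate = 1.0 / T2                             # s^-1
r2_relaxivity, R2_intercept = np.polyfit(conc, R2_rate, 1)
# For this synthetic series the fit recovers the slope used to generate it
print(round(r2_relaxivity, 1))                 # 380.0 (mM^-1 s^-1)
```

    A real determination would fit T2 maps acquired at each concentration; the linear-fit step itself is exactly this.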

  11. Qualitative Content Analysis

    OpenAIRE

    Satu Elo; Maria Kääriäinen; Outi Kanste; Tarja Pölkki; Kati Utriainen; Helvi Kyngäs

    2014-01-01

    Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented by using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies.

  12. Determination of fat and total protein content in milk using conventional digital imaging

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey

    2014-01-01

    The applicability of conventional digital imaging to quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been proved. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards, with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R2=0.974 for protein and R2=0.973 for fat content). However, when the sets were analysed jointly, the obtained results were significantly worse (best models …).
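    The PLS calibration step can be sketched with a minimal NIPALS implementation of single-response PLS. The feature matrix below is random synthetic data standing in for the extracted image features; it is not the published milk model, just the algorithm in miniature.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS): fit on (X, y) and return a prediction
    function built from the regression vector in centered feature space."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    Xr, yr = Xc.copy(), yc.copy()
    for _ in range(n_components):
        w = Xr.T @ yr                       # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                          # scores
        tt = t @ t
        p = Xr.T @ t / tt                   # X loadings
        qk = yr @ t / tt                    # y loading
        Xr -= np.outer(t, p)                # deflate
        yr -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)     # regression vector
    return lambda Xnew: (Xnew - X.mean(axis=0)) @ B + y.mean()

# Demo on synthetic "image features": y depends on two latent directions
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.05 * rng.normal(size=40)
predict = pls1_fit(X, y, n_components=3)
resid = y - predict(X)
r2 = 1 - resid.var() / y.var()              # training R^2, close to 1 here
```

    In practice one would use cross-validation (as the abstract's R2 values are) and an established implementation such as scikit-learn's PLSRegression; the sketch shows only the core iteration.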

  13. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis.

    Science.gov (United States)

    Skytte, Jacob L; Ghita, Ovidiu; Whelan, Paul F; Andersen, Ulf; Møller, Flemming; Dahl, Anders B; Larsen, Rasmus

    2015-06-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and hereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented dairy products. When studying such networks, hundreds of images can be obtained, and here image analysis methods are essential for using the images in statistical analysis. Previously, methods including gray level co-occurrence matrix analysis and fractal analysis have been used with success. However, a range of other image texture characterization methods exists. These methods describe an image by a frequency distribution of predefined image features (denoted textons). Our contribution is an investigation of the choice of image analysis methods by performing a comparative study of 7 major approaches to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis, and cluster analysis. Our investigation suggests that the texton-based descriptors provide a fuller description of the images compared to gray-level co-occurrence matrix descriptors and fractal analysis, while still being as applicable and in some cases as easy to tune. © 2015 Institute of Food Technologists®
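    The gray-level co-occurrence matrix (GLCM) analysis mentioned above reduces an image to a joint histogram of neighboring pixel values, from which texture statistics such as contrast are computed. A minimal sketch for one horizontal offset (real analyses average several offsets and distances, and the example images are toy inputs, not CSLM data):

```python
import numpy as np

def glcm_contrast(img, levels):
    """Contrast from a gray-level co-occurrence matrix (GLCM) for the
    horizontal neighbor offset (0, 1); img holds integer gray levels."""
    a = img[:, :-1].ravel()                 # left pixel of each horizontal pair
    b = img[:, 1:].ravel()                  # right pixel of each pair
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)                 # count co-occurrences
    P /= P.sum()                            # normalize to joint probabilities
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()         # contrast weights distant gray pairs

flat = np.zeros((4, 4), dtype=int)          # uniform image -> zero contrast
stripes = np.tile([0, 1], (4, 2))           # alternating 0/1 columns
print(glcm_contrast(flat, 2), glcm_contrast(stripes, 2))   # 0.0 1.0
```

    Texton-based descriptors, which the study found richer, instead build a frequency histogram over a learned dictionary of local patches; the GLCM above is the simpler baseline they are compared against.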

  14. Image seedling analysis to evaluate tomato seed physiological potential

    Directory of Open Access Journals (Sweden)

    Vanessa Neumann Silva

    Full Text Available Computerized seedling image analysis is one of the most recent techniques to detect differences in vigor between seed lots. The aim of this study was to verify the ability of computerized seedling image analysis by SVIS® to detect differences in vigor between tomato seed lots, compared with the information provided by traditional vigor tests. Ten lots of tomato seeds, cultivar Santa Clara, were stored for 12 months in a controlled environment at 20 ± 1 ºC and 45-50% relative air humidity. The moisture content of the seeds was monitored and the physiological potential tested at 0, 6 and 12 months of storage, with the germination test, first count of germination, accelerated ageing (traditional and with saturated salt solution), electrical conductivity, seedling emergence, and the seed vigor imaging system (SVIS®). A completely randomized experimental design was used with four replications. The parameters obtained by computerized seedling analysis (seedling length and indexes of vigor and seedling growth) with the SVIS® software are efficient to detect differences between tomato seed lots of high and low vigor.

  15. Vaccine Images on Twitter: Analysis of What Images are Shared

    Science.gov (United States)

    Dredze, Mark

    2018-01-01

    Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  16. Vaccine Images on Twitter: Analysis of What Images are Shared.

    Science.gov (United States)

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.
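    The modeling step, fitting a logistic regression to predict whether an image tweet is retweeted, can be sketched as follows. The two features and the tiny data set are invented for illustration; they are not the study's actual predictors or data.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression: returns weights w and
    bias b so that sigmoid(X @ w + b) estimates P(retweeted = 1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical features per image tweet: [positive_sentiment, contains_person]
X = np.array([[1.0, 1.0], [1.0, 0.0], [0.9, 1.0],
              [0.1, 0.0], [0.0, 1.0], [0.2, 0.0]])
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])     # 1 = retweeted
w, b = fit_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

    The study's model used many more image-derived features (objects, embedded text, sentiment); the coefficients of such a fit are what identify which characteristics correlate with sharing.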

  17. Factor analysis in optimization of formulation of high content uniformity tablets containing low dose active substance.

    Science.gov (United States)

    Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David

    2017-11-15

    Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and the process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The obtained results prove the suitability of factor analysis to optimize the composition with respect to previously manufactured batches, and thus the use of meta-analysis under industrial conditions is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
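    The process capability index mentioned above has a simple closed form: Cpk = min(USL - mean, mean - LSL) / (3 * sigma), i.e. the distance from the process mean to the nearer specification limit in units of three standard deviations. A sketch with hypothetical assay values (not the study's data) scored against an 85-115% content range:

```python
import numpy as np

def cpk(values, lsl, usl):
    """Process capability index: distance from the mean to the nearer
    specification limit, in units of three sample standard deviations."""
    mu = values.mean()
    sigma = values.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical assay results for low-dose tablets (% of label claim)
assay = np.array([98.0, 101.0, 99.5, 100.5, 100.0, 99.0, 101.5, 100.5])
print(round(cpk(assay, 85.0, 115.0), 2))    # 4.41 -> a highly capable process
```

    Values of Cpk above roughly 1.33 are conventionally read as a capable process; a well-centered, low-variance blend scores far higher, which is exactly what high content uniformity requires.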

  18. Near Infrared Microspectroscopy, Fluorescence Microspectroscopy, Infrared Chemical Imaging and High Resolution Nuclear Magnetic Resonance Analysis of Soybean Seeds, Somatic Embryos and Single Cells

    CERN Document Server

    Baianu, I C; Hofmann, N E; Korban, S S; Lozano, P; You, T; AOCS 94th Meeting, Kansas

    2002-01-01

    Novel methodologies are currently being developed and established for the chemical analysis of soybean seeds, embryos and single cells by Fourier Transform Infrared (FT-IR), Fourier Transform Near Infrared (FT-NIR) Microspectroscopy, Fluorescence and High-Resolution NMR (HR-NMR). The first FT-NIR chemical images of biological systems approaching one micron resolution are presented here. Chemical images obtained by FT-NIR and FT-IR Microspectroscopy are presented for oil in soybean seeds and somatic embryos under physiological conditions. FT-NIR spectra of oil and proteins were obtained for volumes as small as two cubic microns. Related, HR-NMR analyses of oil contents in somatic embryos are also presented here with nanoliter precision. Such 400 MHz 1H NMR analyses allowed the selection of mutagenized embryos with higher oil content (e.g. ~20%) compared to non-mutagenized control embryos. Moreover, developmental changes in single soybean seeds and/or somatic embryos may be monitored by FT-NIR with a precision ...

  19. Changes in content and synthesis of collagen types and proteoglycans in osteoarthritis of the knee joint and comparison of quantitative analysis with Photoshop-based image analysis.

    Science.gov (United States)

    Lahm, Andreas; Mrosek, Eike; Spank, Heiko; Erggelet, Christoph; Kasch, Richard; Esser, Jan; Merk, Harry

    2010-04-01

    The different cartilage layers vary in their synthesis of proteoglycan and of the distinct collagen types, with the predominant collagen type II and its associated collagens, e.g. types IX and XI, produced by normal chondrocytes. It has been demonstrated that proteoglycan decreases in degenerative tissue and a switch from collagen type II to type I occurs. The aim of this study was to evaluate the correlation of real-time (RT)-PCR and Photoshop-based image analysis in detecting such lesions and to find new aspects of their distribution. We performed immunohistochemistry and histology on cartilage tissue samples from 20 patients suffering from osteoarthritis, compared with 20 healthy biopsies. Furthermore, we quantified the gene expression of collagen type I and II and aggrecan with the help of real-time (RT)-PCR. Proteoglycan content was measured colorimetrically. Using Adobe Photoshop, the digitized images of histology and immunohistochemistry stains of collagen type I and II were stored on an external data storage device. The area occupied by any specific colour range can be specified and compared in a relative manner directly from the histogram using the "magic wand tool" in the select similar menu. The image grow menu depicts gray levels or luminosity (colour) of all pixels within the selected area, including mean, median, standard deviation, etc. Statistical analysis was performed using the t test. With the help of immunohistochemistry, RT-PCR and quantitative RT-PCR we found that not only collagen type II but also collagen type I is synthesized by the cells of the diseased cartilage tissue, shown by increasing amounts of collagen type I mRNA especially in the later stages of osteoarthritis. A decrease of collagen type II is visible especially in the upper fibrillated area of the advanced osteoarthritic samples, which leads to an overall decrease. Analysis of proteoglycan showed a loss of the overall content and a quite uniform staining in …
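    The Photoshop histogram procedure described above amounts to counting the pixels whose colour falls inside a chosen range and expressing that count as a relative area. A minimal numpy analogue, where the target colour and tolerance are arbitrary stand-ins for a stain, not values from the study:

```python
import numpy as np

def colour_range_fraction(rgb, target, tol):
    """Fraction of image area whose colour lies within +/- tol of target
    on every channel -- a rough analogue of selecting a colour range in
    Photoshop and reading the pixel count off the histogram."""
    diff = np.abs(rgb.astype(int) - np.asarray(target))
    mask = np.all(diff <= tol, axis=-1)     # pixels inside the colour range
    return mask.mean()                      # relative stained area

# Synthetic "stain": brown-ish pixels on a white background, 25% coverage
img = np.full((20, 20, 3), 255, dtype=np.uint8)
img[:10, :10] = (150, 80, 40)               # stained quadrant
print(colour_range_fraction(img, (150, 80, 40), tol=30))   # 0.25
```

    Because the measure is a fraction of total area, it can be compared across sections of different sizes, which is what makes the relative histogram comparison in the paper meaningful.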

  20. Non-destructive, high-content analysis of wheat grain traits using X-ray micro computed tomography

    Directory of Open Access Journals (Sweden)

    Nathan Hughes

    2017-11-01

    Full Text Available Abstract Background Wheat is one of the most widely grown crops in temperate climates for food and animal feed. In order to meet the demands of the predicted population increase in an ever-changing climate, wheat production needs to increase dramatically. Spike and grain traits are critical determinants of final yield, and grain uniformity is a commercially desired trait, but their analysis is laborious and often requires destructive harvest. One of the current challenges is to develop an accurate, non-destructive method for spike and grain trait analysis capable of handling large populations. Results In this study we describe the development of a robust method for the accurate extraction and measurement of spike and grain morphometric parameters from images acquired by X-ray micro-computed tomography (μCT). The image analysis pipeline developed automatically identifies plant material of interest in μCT images, performs image analysis, and extracts morphometric data. As a proof of principle, this integrated methodology was used to analyse the spikes from a population of wheat plants subjected to high temperatures under two different water regimes. Temperature has a negative effect on spike height and grain number, with the middle of the spike being the most affected region. The data also confirmed that increased grain volume was correlated with the decrease in grain number under mild stress. Conclusions Being able to quickly measure plant phenotypes in a non-destructive manner is crucial to advance our understanding of gene function and the effects of the environment. We report on the development of an image analysis pipeline capable of accurately and reliably extracting spike and grain traits from crops without the loss of positional information. This methodology was applied to the analysis of wheat spikes and can be readily applied to other economically important crop species.
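    A core step in such a μCT pipeline is separating individual grains as connected components of the thresholded volume, so that per-grain counts and volumes can be measured while positional information is retained. A toy 6-connected labelling sketch on a synthetic binary volume; the published pipeline is more elaborate (noise handling, touching grains, spike architecture):

```python
import numpy as np
from collections import deque

def label_grains(volume):
    """Label 6-connected components in a binary volume; returns the
    number of components ("grains") and their voxel volumes."""
    labels = np.zeros(volume.shape, dtype=int)
    sizes = []
    for seed in zip(*np.nonzero(volume)):
        if labels[seed]:
            continue                        # already part of a labelled grain
        sizes.append(0)
        labels[seed] = len(sizes)
        queue = deque([seed])
        while queue:                        # breadth-first flood fill
            z, y, x = queue.popleft()
            sizes[-1] += 1
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                        and volume[n] and not labels[n]:
                    labels[n] = len(sizes)
                    queue.append(n)
    return len(sizes), sizes

# Two synthetic "grains" in an otherwise empty volume
vol = np.zeros((10, 10, 10), dtype=bool)
vol[1:3, 1:3, 1:3] = True                   # 2x2x2 grain, 8 voxels
vol[6:9, 6:9, 6:9] = True                   # 3x3x3 grain, 27 voxels
count, volumes = label_grains(vol)
print(count, sorted(volumes))               # 2 [8, 27]
```

    Converting voxel counts to physical volumes only requires multiplying by the scanner's voxel size, and keeping the label image preserves each grain's position along the spike.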

  1. A content analysis of thinspiration, fitspiration, and bonespiration imagery on social media.

    Science.gov (United States)

    Talbot, Catherine Victoria; Gavin, Jeffrey; van Steen, Tommy; Morey, Yvette

    2017-01-01

    On social media, images such as thinspiration, fitspiration, and bonespiration, are shared to inspire certain body ideals. Previous research has demonstrated that exposure to these groups of content is associated with increased body dissatisfaction and decreased self-esteem. It is therefore important that the bodies featured within these groups of content are more fully understood so that effective interventions and preventative measures can be informed, developed, and implemented. A content analysis was conducted on a sample of body-focussed images with the hashtags thinspiration, fitspiration, and bonespiration from three social media platforms. The analyses showed that thinspiration and bonespiration content contained more thin and objectified bodies, compared to fitspiration which featured a greater prevalence of muscles and muscular bodies. In addition, bonespiration content contained more bone protrusions and fewer muscles than thinspiration content. The findings suggest fitspiration may be a less unhealthy type of content; however, a subgroup of imagery was identified which idealised the extremely thin body type and as such this content should also be approached with caution. Future research should utilise qualitative methods to further develop understandings of the body ideals that are constructed within these groups of content and the motivations behind posting this content.

  2. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

    With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still much room to improve RF performance, because the popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and never meets the undersampled problem. To consider unlabelled samples, a manifold regularization-based term is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.

  3. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    International Nuclear Information System (INIS)

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-01-01

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to the three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
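    The inverse Monte-Carlo idea, adjusting assumed optical properties until a forward simulation reproduces the measurement, can be illustrated on a drastically simplified, absorption-only slab. The geometry, coefficients, and bisection search below are toy assumptions, not the authors' hair model, which also separates scattering:

```python
import numpy as np

def mc_transmission(mu_a, depth, n_photons, rng):
    """Forward Monte-Carlo for a purely absorbing slab: each photon travels
    an exponentially distributed free path and is transmitted if that path
    exceeds the slab depth (scattering is ignored in this toy model)."""
    free_paths = rng.exponential(1.0 / mu_a, size=n_photons)
    return (free_paths > depth).mean()

def inverse_mc(T_measured, depth, rng, lo=0.1, hi=50.0, iters=30):
    """Inverse step: bisect the absorption coefficient until the simulated
    transmission matches the measured one."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mc_transmission(mid, depth, 200_000, rng) > T_measured:
            lo = mid          # simulation too transparent -> raise absorption
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(42)
depth = 0.01                               # hypothetical 100 um shaft, in cm
mu_true = 25.0                             # hypothetical mu_a, cm^-1
T = np.exp(-mu_true * depth)               # noise-free "measurement"
mu_est = inverse_mc(T, depth, rng)         # lands near 25 cm^-1
```

    In the real method the forward model includes multiple scattering, so the inversion recovers scattering and absorption jointly; the melanin content is then read off the recovered absorption coefficient.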

  4. High Throughput In vivo Analysis of Plant Leaf Chemical Properties Using Hyperspectral Imaging.

    Science.gov (United States)

    Pandey, Piyush; Ge, Yufeng; Stoerger, Vincent; Schnable, James C

    2017-01-01

    Image-based high-throughput plant phenotyping in the greenhouse has the potential to relieve the bottleneck currently presented by phenotypic scoring, which limits the throughput of gene discovery and crop improvement efforts. Numerous studies have employed automated RGB imaging to characterize biomass and growth of agronomically important crops. The objective of this study was to investigate the utility of hyperspectral imaging for quantifying chemical properties of maize and soybean plants in vivo. These properties included leaf water content, as well as concentrations of macronutrients nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg), calcium (Ca), and sulfur (S), and micronutrients sodium (Na), iron (Fe), manganese (Mn), boron (B), copper (Cu), and zinc (Zn). Hyperspectral images were collected from 60 maize and 60 soybean plants, each subjected to varying levels of either water deficit or nutrient limitation stress, with the goal of creating a wide range of variation in the chemical properties of plant leaves. Plants were imaged on an automated conveyor belt system using a hyperspectral imager with a spectral range from 550 to 1,700 nm. Images were processed to extract a reflectance spectrum from each plant, and partial least squares regression models were developed to correlate spectral data with chemical data. Among all the chemical properties investigated, water content was predicted with the highest accuracy (R2 = 0.93 and RPD (Ratio of Performance to Deviation) = 3.8). All macronutrients were also quantified satisfactorily (R2 from 0.69 to 0.92, RPD from 1.62 to 3.62), with N predicted best, followed by P, K, and S. The micronutrient group showed lower prediction accuracy (R2 from 0.19 to 0.86, RPD from 1.09 to 2.69) than the macronutrient group. Cu and Zn were best predicted, followed by Fe and Mn. Na and B were the only two properties that hyperspectral imaging was not able to quantify satisfactorily (R2 … plant chemical traits. Future …
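    Extracting one reflectance spectrum per plant from a hyperspectral cube is typically a segment-then-average operation. A toy sketch using an NDVI-style vegetation index for segmentation; the band indices, threshold, and cube values are all invented, not the instrument's actual channels:

```python
import numpy as np

def mean_plant_spectrum(cube, red_band, nir_band, threshold=0.3):
    """Segment plant pixels with an NDVI-style index computed from one red
    and one near-infrared band, then average the full reflectance spectrum
    over those pixels. cube has shape (rows, cols, bands)."""
    red = cube[:, :, red_band].astype(float)
    nir = cube[:, :, nir_band].astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    mask = ndvi > threshold                 # vegetation reflects strongly in NIR
    return cube[mask].mean(axis=0)          # one mean spectrum, length = n_bands

# Tiny synthetic cube: 4x4 pixels, 5 bands; "plant" pixels in the top row
cube = np.full((4, 4, 5), 0.2)
cube[0, :, :] = [0.1, 0.1, 0.05, 0.6, 0.7]  # low red (band 2), high NIR (band 3)
spectrum = mean_plant_spectrum(cube, red_band=2, nir_band=3)
print(np.round(spectrum, 2))
```

    The resulting per-plant spectra are the rows of the predictor matrix that the study's partial least squares models regress against the measured chemical concentrations.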

  5. High-Content Electrophysiological Analysis of Human Pluripotent Stem Cell-Derived Cardiomyocytes (hPSC-CMs).

    Science.gov (United States)

    Kong, Chi-Wing; Geng, Lin; Li, Ronald A

    2018-01-01

    Considerable interest has been raised to develop human pluripotent stem cell-derived cardiomyocytes (hPSC-CMs) as a model for drug discovery and cardiotoxicity screening. High-content electrophysiological analysis of currents generated by transmembrane cell surface ion channels has been pursued to complement such emerging applications. Here we describe practical procedures and considerations for accomplishing successful assays of hPSC-CMs using an automated planar patch-clamp system.

  6. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis

    DEFF Research Database (Denmark)

    Skytte, Jacob Lercke; Ghita, Ovidiu; Whelan, Paul F.

    2015-01-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and hereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented...... to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis...... scanning microscopy images can be used to provide information on the protein microstructure in yogurt products. For large numbers of microscopy images, subjective evaluation becomes a difficult or even impossible approach, if the images should be incorporated in any form of statistical analysis alongside...

  7. Detection of Crater Rims by Image Analysis in Very High Resolution Images of Mars, Mercury and the Moon

    Science.gov (United States)

    Pina, P.; Marques, J. S.; Bandeira, L.

    2013-12-01

    The adaptive nature of automated crater detection algorithms permits achieving a high level of autonomous detection on different surfaces, making them an important tool in updating crater catalogues. Nevertheless, the available approaches assume all craters to be circular and only provide the radius and location of each crater as output. However, delineating impact craters following the local variability of their rims is also important to, among other things, evaluate their degree of degradation or preservation, namely in studies related to ancient climate analysis. This contour determination is normally done manually, but it can advantageously be done by image analysis methods, eliminating subjectivity and allowing large-scale delineations. We have recently proposed a pair of independent approaches to tackle this problem, one based on processing the crater image in polar coordinates [1], the other using morphological operators [2]. These achieved a good degree of success on very high resolution images from Mars [3-4], but still left room for improvement. Thus, integrating both approaches into a single one, suppressing the individual drawbacks of each, strengthened the detection procedure. We now describe the novel processing sequence that we have built and test it intensively on a wider variety of planetary surfaces, namely those of Mars, Mercury and the Moon, using the very high resolution images provided by the HiRISE, MDIS and LROC cameras. The automated delineations of the craters are compared to a ground-truth reference (manually delineated contours) so that a quantitative evaluation can be performed; on a dataset of more than one thousand impact craters we obtained a high overall delineation rate. The breakdown by crater size on each surface is also presented. The whole processing procedure works on raster images and delivers the output in the same image format.
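The polar-coordinate idea behind rim delineation can be illustrated with a toy version: sample the image along radial rays from the crater center and, per angle, take the radius of the strongest intensity change as the rim. This is a simplified stand-in for the combined method described above, not the authors' algorithm:

```python
import numpy as np

def rim_radii(image, center, n_angles=360, r_max=None):
    """For each angle, walk outward from the crater center and return the
    radius of the largest intensity step along that ray (a rough rim
    proxy). Illustrative only; real rims need gradient smoothing and
    continuity constraints between neighboring angles."""
    cy, cx = center
    if r_max is None:
        r_max = min(cy, cx, image.shape[0] - 1 - cy, image.shape[1] - 1 - cx)
    radii = np.arange(1, r_max)
    out = np.empty(n_angles)
    for i, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        ys = np.clip((cy + radii * np.sin(theta)).round().astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(theta)).round().astype(int), 0, image.shape[1] - 1)
        profile = image[ys, xs]                      # intensity along the ray
        out[i] = radii[np.argmax(np.abs(np.diff(profile)))]
    return out
```

On a synthetic circular crater the recovered radii are constant; on real imagery the per-angle radii trace the local rim variability that a fitted circle cannot capture.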

  8. High-content profiling of cell responsiveness to graded substrates based on combinatorially variant polymers.

    Science.gov (United States)

    Liu, Er; Treiser, Matthew D; Patel, Hiral; Sung, Hak-Joon; Roskov, Kristen E; Kohn, Joachim; Becker, Matthew L; Moghe, Prabhas V

    2009-08-01

    We have developed a novel approach combining high-information and high-throughput analysis to characterize cell adhesive responses to biomaterial substrates possessing gradients in surface topography. These gradients were fabricated by subjecting thin film blends of tyrosine-derived polycarbonates, i.e., poly(DTE carbonate) and poly(DTO carbonate), to a gradient temperature annealing protocol. Saos-2 cells engineered with a green fluorescent protein (GFP) reporter for farnesylation (GFP-f) were cultured on the gradient substrates to assess the effects of nanoscale surface topology and roughness that arise during the phase separation process on cell attachment and adhesion strength. The high-throughput imaging approach allowed us to rapidly identify "global" and "high content" structure-property relationships between cell adhesion and biomaterial properties such as polymer chemistry and topography. This study found that cell attachment and spreading increased monotonically with DTE content and were significantly elevated in intermediate regions corresponding to the highest "gradient" of surface roughness, while GFP-f farnesylation intensity descriptors were sensitively altered by surface roughness, even in cells with comparable levels of spreading.

  9. Cytometric analysis of mammalian sperm for induced morphologic and DNA content errors

    International Nuclear Information System (INIS)

    Pinkel, D.

    1983-01-01

    Some flow-cytometric and image analysis procedures under development for quantitative analysis of sperm morphology are reviewed. The results of flow-cytometric DNA-content measurements on sperm from radiation-exposed mice are also summarized, the results are related to the available cytological information, and their potential dosimetric sensitivity is discussed.

  10. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Science.gov (United States)

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.
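The mAP figures reported above follow the standard retrieval definition: average precision per query over the ranks at which relevant items appear, then the mean over queries. A small self-contained sketch (class labels below are placeholders, not the dataset's):

```python
def average_precision(ranked_labels, query_label):
    """AP for one query: ranked_labels holds the pathology class of each
    retrieved image, best match first."""
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            precisions.append(hits / rank)   # precision at this relevant hit
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(rankings, query_labels):
    """Mean of per-query APs, the mAP used to compare retrieval systems."""
    return sum(average_precision(r, q)
               for r, q in zip(rankings, query_labels)) / len(query_labels)
```

For example, a ranking ["glioma", "meningioma", "glioma"] for a glioma query has relevant hits at ranks 1 and 3, giving AP = (1/1 + 2/3) / 2.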

  11. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    Full Text Available This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  12. High-speed Vibrational Imaging and Spectral Analysis of Lipid Bodies by Compound Raman Microscopy

    OpenAIRE

    Slipchenko, Mikhail N.; Le, Thuc T.; Chen, Hongtao; Cheng, Ji-Xin

    2009-01-01

    Cells store excess energy in the form of cytoplasmic lipid droplets. At present, it is unclear how different types of fatty acids contribute to the formation of lipid droplets. We describe a compound Raman microscope capable of both high-speed chemical imaging and quantitative spectral analysis on the same platform. We use a picosecond laser source to perform coherent Raman scattering imaging of a biological sample and confocal Raman spectral analysis at points of interest. The potential of t...

  13. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in images. Traditional text-based image retrieval is not sufficient: it is time consuming, since it requires manual image annotation, and annotations differ from person to person. An alternative is the Content Based Image Retrieval (CBIR) system, which retrieves/searches for images using their contents rather than text, keywords, etc. A great deal of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature, as it reflects human perception. Moreover, shape is quite simple for the user to employ when defining an object in an image, compared with other features such as color and texture. Over and above, no descriptor applied alone will give fruitful results. Further, by combining a descriptor with an improved classifier, one can exploit the positive features of both. So, an attempt will be made to establish an algorithm for accurate shape feature extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  14. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner so that it can also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
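The backward-compatible idea, a viewable tone-mapped base layer plus a residual that legacy decoders simply ignore, can be sketched in a few lines. The Reinhard global operator below is a stand-in for whichever tone mapper the subjective tests favor, and a real codec would quantize and compress both layers rather than keep them lossless:

```python
import numpy as np

def encode(hdr):
    """Split HDR luminance into an LDR base layer (simple Reinhard global
    tone mapping) plus a log-ratio residual. A legacy JPEG decoder would
    show only the LDR layer; an HDR-aware one recombines both."""
    ldr = hdr / (1.0 + hdr)                     # Reinhard: [0, inf) -> [0, 1)
    residual = np.log(hdr / np.clip(ldr, 1e-12, None))
    return ldr, residual

def decode_hdr(ldr, residual):
    """Invert the split: ldr * exp(residual) recovers the HDR values."""
    return ldr * np.exp(residual)
```

With lossless layers the round trip is exact; in practice the compression loss in each layer bounds the reconstruction error.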

  15. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated by the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the reporting of many data analysis routines, combined with inadequate training and a lack of accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    Science.gov (United States)

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are very important for determining these characteristics accurately. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-image-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed with a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, testing three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning-data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth easily via large-scale plant image data.
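As a rough illustration of the segmentation stage, the sketch below replaces SLIC superpixels and the trained Random Forest with fixed image blocks and a hand-set excess-green threshold. It shows the shape of the pipeline (group pixels, classify each group, assemble a mask, measure area), not the paper's classifier:

```python
import numpy as np

def segment_plant(rgb, block=8, thresh=0.05):
    """Toy stand-in for superpixel + Random Forest segmentation: tile the
    image into square blocks ('superpixels') and label a block as plant
    when its mean excess-green index 2G - R - B exceeds a threshold.
    Expects float RGB in [0, 1]; block size and threshold are illustrative."""
    h, w, _ = rgb.shape
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]   # excess-green index
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            if exg[y:y + block, x:x + block].mean() > thresh:
                mask[y:y + block, x:x + block] = True
    return mask

def plant_area(mask):
    """Plant area in pixels, the kind of per-timepoint parameter the
    pipeline tracks over time."""
    return int(mask.sum())
```

Running this per imaging timepoint and plotting `plant_area` against time gives the growth-trend curves the system visualizes.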

  17. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    Full Text Available This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique in CBIR systems, used to improve retrieval performance effectively. Reducing the semantic gap between low-level features and high-level concepts remains an open research area. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and issues in relevance feedback are discussed in detail.

  18. Representations of Codeine Misuse on Instagram: Content Analysis

    Science.gov (United States)

    Cherian, Roy; Westbrook, Marisa; Ramo, Danielle

    2018-01-01

    Background Prescription opioid misuse has doubled over the past 10 years and is now a public health epidemic. Analysis of social media data may provide additional insights into opioid misuse to supplement the traditional approaches of data collection (eg, self-report on surveys). Objective The aim of this study was to characterize representations of codeine misuse through analysis of public posts on Instagram to understand text phrases related to misuse. Methods We identified hashtags and searchable text phrases associated with codeine misuse by analyzing 1156 sequential Instagram posts over the course of 2 weeks from May 2016 to July 2016. Content analysis of posts associated with these hashtags identified the most common themes arising in images, as well as culture around misuse, including how misuse is happening and being perpetuated through social media. Results A majority of images (50/100; 50.0%) depicted codeine in its commonly misused form, combined with soda (lean). Codeine misuse was commonly represented with the ingestion of alcohol, cannabis, and benzodiazepines. Some images highlighted the previously noted affinity between codeine misuse and hip-hop culture or mainstream popular culture images. Conclusions The prevalence of codeine misuse images, glamorizing of ingestion with soda and alcohol, and their integration with mainstream, popular culture imagery holds the potential to normalize and increase codeine misuse and overdose. To reduce harm and prevent misuse, immediate public health efforts are needed to better understand the relationship between the potential normalization, ritualization, and commercialization of codeine misuse. PMID:29559422

  19. Content-adaptive Image Enhancement, Based on Sky and Grass Segmentation

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.

    2009-01-01

    Current TV image enhancement functions employ globally controlled settings. A more flexible system can be achieved if the global control is extended to incorporate semantic-level image content information. In this paper, we present a system that extends existing TV image enhancement functions with

  20. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method, using important wavelengths selected from hyperspectral images by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods, for modeling and predicting the protein content of peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108 and a residual predictive deviation (RPD) of 2.32. Based on the best model obtained and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid, online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible for determining the protein content of peanut kernels.
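Once a PLSR model is fitted, distribution maps like those described above are generated by projecting every pixel spectrum through the regression coefficients. A minimal sketch (the coefficients and intercept below are placeholders, not the fitted RC-PLSR model):

```python
import numpy as np

def protein_map(cube, coef, intercept):
    """Per-pixel protein estimate from a hyperspectral cube of shape
    (rows, cols, bands): each pixel spectrum is projected through the
    PLSR regression coefficients, yielding a 2-D distribution map."""
    cube = np.asarray(cube, dtype=float)
    coef = np.asarray(coef, dtype=float)
    return cube @ coef + intercept
```

For a cube restricted to the eight key wavelengths, `coef` would have length eight; masking out background pixels before mapping keeps the display limited to the kernel.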

  1. Adaptive platform for fluorescence microscopy-based high-content screening

    Science.gov (United States)

    Geisbauer, Matthias; Röder, Thorsten; Chen, Yang; Knoll, Alois; Uhl, Rainer

    2010-04-01

    Fluorescence microscopy has become a widely used tool for the study of medically relevant intra- and intercellular processes. Extracting meaningful information out of a bulk of acquired images is usually performed in a separate post-processing task. Capturing raw data thus yields an unnecessarily huge number of images, whereas usually only a few images really show the particular information that is searched for. Here we propose a novel automated high-content microscope system, which enables experiments to be carried out with only a minimum of human interaction. It provides a large speed increase for cell biology research and its applications compared with widely used workflows. Our fluorescence microscopy system can automatically execute application-dependent data processing algorithms during the actual experiment. They are used for image contrast enhancement, cell segmentation and/or cell property evaluation. On-the-fly retrieved information is used to reduce data and concomitantly control the experiment process in real time. Operating as a closed loop of perception and action, the system can greatly decrease the amount of stored data on the one hand and increase the relative content of valuable data on the other. We demonstrate our approach by addressing the problem of automatically finding cells with a particular combination of labeled receptors and then selectively stimulating them with antagonists or agonists. The results are then compared against those of traditional, static systems.

  2. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI and FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between an attribute X of an entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication largely agrees with the surgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may uncover partially invisible correlations between the contents of different data modalities and the outcome of the surgery.

  3. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    M.J. Huiskes (Mark); E.J. Pauwels (Eric)

    2005-01-01

    textabstractThis chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current

  4. Assessment of the variations in fat content in normal liver using a fast MR imaging method in comparison with results obtained by spectroscopic imaging

    International Nuclear Information System (INIS)

    Irwan, Roy; Edens, Mireille A.; Sijens, Paul E.

    2008-01-01

    A recently published Dixon-based MRI method for quantifying liver fat content using dual-echo breath-hold gradient echo imaging was validated by phantom experiments and compared with results of biopsy in two patients (Radiology 2005;237:1048-1055). We applied this method in ten healthy volunteers and compared the outcomes with the results of MR spectroscopy (MRS), the gold standard for quantifying liver fat content. Novel was the use of spectroscopic imaging, yielding the variations in fat content across the liver rather than a single value obtained by single-voxel MRS. Compared with the results of MRS, liver fat content according to MRI was too high in nine subjects (range 3.3-10.7% vs. 0.9-7.7%) and correct in one (21.1 vs. 21.3%). Furthermore, in one of the ten subjects the fat content according to the Dixon-based MRI method was incorrect due to a (100-x) versus x percent lipid content mix-up. This second problem was fixed by a minor adjustment of the MRI algorithm. Despite systematic overestimation of liver fat content by MRI, Spearman's correlation of the adjusted MRI liver fat contents with MRS was high (r = 0.927, P < 0.001). Even after correction of the algorithm, the problem remaining with the Dixon-based MRI method for the assessment of liver fat content is that, at the lower end of the range, liver fat content is systematically overestimated by 4%. (orig.)
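The two-point Dixon arithmetic, and the (100-x) versus x ambiguity mentioned above, can be made concrete. A hedged sketch assuming magnitude in-phase/out-of-phase images and a liver-appropriate fat fraction below 50% (the actual published algorithm and its correction may differ in detail):

```python
import numpy as np

def dixon_fat_fraction(in_phase, out_phase):
    """Two-point Dixon estimate: water + fat give the in-phase signal
    IP = W + F and out-of-phase OP = |W - F|, so F = (IP - OP) / 2 and
    the fat fraction is F / IP. Magnitude images cannot distinguish a
    fat-dominant from a water-dominant voxel (the x vs. 100-x mix-up);
    here we resolve it by keeping the solution below 50%, which is the
    physiologically expected branch for liver."""
    ip = np.asarray(in_phase, dtype=float)
    op = np.asarray(out_phase, dtype=float)
    fat = (ip - op) / 2.0
    ff = np.abs(fat) / np.clip(ip, 1e-12, None)
    return np.minimum(ff, 1.0 - ff)    # pick the < 50% branch
```

Note that a voxel with 10% fat and one with 90% fat produce identical magnitude IP/OP pairs, which is exactly why the branch choice (or phase information) is needed.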

  5. An optimized content-aware image retargeting method: toward expanding the perceived visual field of the high-density retinal prosthesis recipients

    Science.gov (United States)

    Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu

    2018-04-01

    Objective. Retinal prosthesis devices have shown great value in restoring some sight to individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients’ visual experience. In this paper, we employ computer vision approaches to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method, introducing salient object detection based on color and intensity-difference contrast, aiming to remap important information of a scene into a small visual field while preserving its original scale as much as possible. It may improve prosthetic recipients’ perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting object number and recognizing objects, were conducted under simulated prosthetic vision. As controls, we used three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method outperforms the others in preserving key features and yields significantly higher recognition accuracy than the three other image retargeting methods under conditions of small visual field and low resolution. Significance. The proposed method is beneficial for expanding the perceived visual field of prosthesis recipients and improving their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.
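Of the comparison baselines, Cropping is the simplest to make concrete: choose the output-sized window that captures the most saliency. The sketch below uses an integral image for the window sums; it illustrates the baseline only, not the authors' optimized retargeting method or their saliency model:

```python
import numpy as np

def saliency_crop(image, saliency, out_h, out_w):
    """Return the out_h x out_w window of `image` whose summed saliency
    is maximal. An integral image makes each window sum O(1)."""
    ii = saliency.cumsum(axis=0).cumsum(axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for the formula
    best, by, bx = -1.0, 0, 0
    for y in range(image.shape[0] - out_h + 1):
        for x in range(image.shape[1] - out_w + 1):
            s = (ii[y + out_h, x + out_w] - ii[y, x + out_w]
                 - ii[y + out_h, x] + ii[y, x])
            if s > best:
                best, by, bx = s, y, x
    return image[by:by + out_h, bx:bx + out_w]
```

Cropping preserves object scale perfectly but discards everything outside the window, which is why content-aware remapping can outperform it when several salient objects are spread across the scene.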

  6. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface addresses these usability problems. It is based on 11

  7. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface addresses these usability problems. It is based on 11

  8. Analysis of the High-Frequency Content in Human QRS Complexes by the Continuous Wavelet Transform: An Automatized Analysis for the Prediction of Sudden Cardiac Death.

    Science.gov (United States)

    García Iglesias, Daniel; Roqueñi Gutiérrez, Nieves; De Cos, Francisco Javier; Calvo, David

    2018-02-12

    Fragmentation and delayed potentials in the QRS signal of patients have been postulated as risk markers for Sudden Cardiac Death (SCD). Analysis of the high-frequency spectral content may be useful for quantification. Forty-two consecutive patients with a prior history of SCD or malignant arrhythmias (patients) were compared with 120 healthy individuals (controls). The QRS complexes were extracted with a modified Pan-Tompkins algorithm and processed with the Continuous Wavelet Transform to analyze the high-frequency content (85-130 Hz). Overall, the power of the high-frequency content was higher in patients compared with controls (170.9 vs. 47.3 10³ nV² Hz⁻¹; p = 0.007), with a prolonged time to reach the maximal power (68.9 vs. 64.8 ms; p = 0.002). An analysis of the signal intensity (instantaneous average of cumulative power) revealed a distinct function between patients and controls. The total intensity was higher in patients compared with controls (137.1 vs. 39 10³ nV² Hz⁻¹ s⁻¹; p = 0.001) and the time to reach the maximal intensity was also prolonged (88.7 vs. 82.1 ms; p ...). The high-frequency content of the QRS complexes was distinct between patients at risk of SCD and healthy controls. The wavelet transform is an efficient tool for spectral analysis of the QRS complexes that may contribute to stratification of risk.
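A plain-NumPy stand-in for the band-limited wavelet analysis: convolve the QRS trace with a small bank of complex Morlet wavelets spanning 85-130 Hz, sum the squared magnitudes, and read off the time of maximal power. The parameters (e.g. about five cycles per wavelet, ten bank frequencies) are illustrative, not the study's settings:

```python
import numpy as np

def band_power_profile(signal, fs, f_lo=85.0, f_hi=130.0, n_freq=10):
    """Instantaneous power of a trace in the f_lo..f_hi band via a bank
    of complex Morlet wavelets. Returns (power envelope, time in seconds
    at which the envelope peaks)."""
    t = np.arange(len(signal)) / fs
    power = np.zeros(len(signal))
    for f in np.linspace(f_lo, f_hi, n_freq):
        sigma = 5.0 / (2 * np.pi * f)                 # ~5 cycles per wavelet
        tw = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        power += np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power, t[np.argmax(power)]
```

Applied to an averaged QRS complex, the peak time of this envelope corresponds to the "time to reach the maximal power" statistic compared between groups above.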

  9. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Fast and accurate determination of the effective bentonite content in used clay-bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed in used clay-bonded sand, usually a manual operation with several disadvantages, including a complicated process, long testing time, and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay-bonded sand was developed based on image recognition technology. The instrument consists of auto-stirring, auto liquid-removal, auto-titration, step-rotation, and image-acquisition components, and a processor. The image recognition method works as follows: first, the color images are decomposed into three-channel gray images, exploiting the difference in how light blue and dark blue register in the red, green, and blue channels; next, gray-value subtraction and gray-level transformation are applied to the gray images; finally, the outer light-blue halo and the inner blue spot are extracted and their area ratio is calculated. The titration is judged to have reached its end-point when the area ratio exceeds the set value.
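The halo/spot area-ratio criterion lends itself to a compact sketch. Everything below is hypothetical (the colors, thresholds, and the `endpoint_reached` helper are invented); it only illustrates the idea of separating the light-blue halo from the dark-blue spot via channel differences and comparing their areas.

```python
import numpy as np

def endpoint_reached(rgb, ratio_threshold=1.5):
    """Judge the methylene-blue halo test end-point from a spot image.

    Hypothetical sketch: split into R/G/B channels, use channel
    differences to separate the light-blue outer halo from the dark-blue
    inner spot, and compare their area ratio against a preset value.
    """
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    blueness = b - r                      # both regions are blue: b >> r
    blue = blueness > 40
    dark = blue & (b < 128)               # inner dark-blue spot
    light = blue & ~dark                  # outer light-blue halo
    if dark.sum() == 0:
        return False
    return bool(light.sum() / dark.sum() > ratio_threshold)

# Toy image: dark-blue disc (radius 10) inside a light-blue annulus (radius 25)
yy, xx = np.mgrid[-32:32, -32:32]
rr = np.hypot(yy, xx)
img = np.full((64, 64, 3), 255, dtype=np.uint8)  # white background
img[rr <= 25] = (180, 200, 230)                  # light-blue halo (invented color)
img[rr <= 10] = (20, 30, 100)                    # dark-blue spot (invented color)

# Same spot, but no halo yet: end-point not reached
img2 = np.full((64, 64, 3), 255, dtype=np.uint8)
img2[rr <= 10] = (20, 30, 100)
```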

  10. High-speed vibrational imaging and spectral analysis of lipid bodies by compound Raman microscopy.

    Science.gov (United States)

    Slipchenko, Mikhail N; Le, Thuc T; Chen, Hongtao; Cheng, Ji-Xin

    2009-05-28

    Cells store excess energy in the form of cytoplasmic lipid droplets. At present, it is unclear how different types of fatty acids contribute to the formation of lipid droplets. We describe a compound Raman microscope capable of both high-speed chemical imaging and quantitative spectral analysis on the same platform. We used a picosecond laser source to perform coherent Raman scattering imaging of a biological sample and confocal Raman spectral analysis at points of interest. The potential of the compound Raman microscope was evaluated on lipid bodies of cultured cells and live animals. Our data indicate that the in vivo fat contains much more unsaturated fatty acids (FAs) than the fat formed via de novo synthesis in 3T3-L1 cells. Furthermore, in vivo analysis of subcutaneous adipocytes and glands revealed a dramatic difference not only in the unsaturation level but also in the thermodynamic state of FAs inside their lipid bodies. Additionally, the compound Raman microscope allows tracking of the cellular uptake of a specific fatty acid and its abundance in nascent cytoplasmic lipid droplets. The high-speed vibrational imaging and spectral analysis capability renders compound Raman microscopy an indispensable analytical tool for the study of lipid-droplet biology.

  11. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.
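As a hedged stand-in for the blur-extent feature mentioned above (the paper's exact definition is not reproduced here), the variance of a Laplacian is a standard focus measure that drops for blurry frames:

```python
import numpy as np

def blur_score(gray):
    """Variance of a 4-neighbour Laplacian: low values = blurry frame.

    A standard focus measure, shown as a stand-in for the paper's
    blur-extent feature used to flag events in sports video.
    """
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, (64, 64))     # high-detail synthetic frame
# Crude 3x3 box blur of the same frame (wrap-around edges, fine for a sketch)
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```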

  12. Label-free cell-cycle analysis by high-throughput quantitative phase time-stretch imaging flow cytometry

    Science.gov (United States)

    Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2018-02-01

    Biophysical properties of cells could complement and correlate biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, limited camera frame rate and thus imaging throughput make QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assay. Here we present a high-throughput approach for label-free analysis of cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of > 10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes (from both amplitude and phase images). Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy at >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell-cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
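One standard way such biophysical phenotypes are derived from a quantitative-phase image is the dry-mass relation m = (lambda / (2*pi*alpha)) * sum(phase) * pixel_area, with alpha ≈ 0.2 µm³/pg the specific refractive increment of protein. The sketch below assumes illustrative wavelength and pixel-size values; it is not the paper's pipeline.

```python
import numpy as np

WAVELENGTH_UM = 0.532     # assumed illumination wavelength (um)
ALPHA_UM3_PER_PG = 0.2    # specific refractive increment (typical value)
PIXEL_AREA_UM2 = 0.25     # assumed 0.5 um x 0.5 um pixels

def dry_mass_pg(phase_rad):
    """Total dry mass (pg) of a cell from its phase map (radians)."""
    opd_sum = WAVELENGTH_UM / (2 * np.pi) * phase_rad.sum()    # um
    return float(opd_sum * PIXEL_AREA_UM2 / ALPHA_UM3_PER_PG)  # pg

# Toy phase map: a cell occupying 400 pixels with 1 rad phase shift
phase = np.zeros((50, 50))
phase[10:30, 10:30] = 1.0
mass = dry_mass_pg(phase)
```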

  13. Automated processing of zebrafish imaging data: a survey.

    Science.gov (United States)

    Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-09-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.

  14. Automated Processing of Zebrafish Imaging Data: A Survey

    Science.gov (United States)

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  15. Evaluation of moisture content distribution in wood by soft X-ray imaging

    International Nuclear Information System (INIS)

    Tanaka, T.; Avramidis, S.; Shida, S.

    2009-01-01

    A technique for nondestructive evaluation of moisture content distribution of Japanese cedar (sugi) during drying using a newly developed soft X-ray digital microscope was investigated. Radial, tangential, and cross-sectional samples measuring 100 × 100 × 10 mm were cut from green sugi wood. Each sample was dried in several steps in an oven and upon completion of each step, the mass was recorded and a soft X-ray image was taken. The relationship between moisture content and the average grayscale value of the soft X-ray image at each step was linear. In addition, the linear regressions overlapped each other regardless of the sample sections. These results showed that soft X-ray images could accurately estimate the moisture content. Applying this relationship to a small section of each sample, the moisture content distribution was estimated from the image differential between the soft X-ray pictures obtained from the sample in question and the same sample in the oven-dried condition. Moisture content profiles for 10-mm-wide parts at the centers of the samples were also obtained. The shapes of the profiles supported the evaluation method used in this study.
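The calibration-plus-differential procedure can be sketched as a linear fit followed by a per-pixel map. All numbers below are invented for illustration; only the structure (linear grayscale-to-moisture-content calibration, then a wet-minus-dry image differential) follows the record.

```python
import numpy as np

# Invented calibration data: mean grayscale falls linearly as moisture rises
mc_percent = np.array([0.0, 30.0, 60.0, 90.0, 120.0])    # drying steps
mean_gray = np.array([200.0, 178.0, 161.0, 139.0, 121.0])

# Straight-line fit: moisture content as a function of grayscale
slope, intercept = np.polyfit(mean_gray, mc_percent, 1)

def mc_map(gray_img, gray_dry_img):
    """Per-pixel moisture content from the wet / oven-dry image differential."""
    return slope * (gray_img - gray_dry_img)

wet = np.full((4, 4), 160.0)   # hypothetical wet-state image
dry = np.full((4, 4), 200.0)   # same sample, oven-dried
est = mc_map(wet, dry)
```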

  16. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
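The keep-the-significant-coefficients idea can be illustrated with a single-level 2-D Haar transform. This is a simplification: the method's actual wavelet choice, block algorithm, and spectral-compression stage are not reproduced here.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform (sketch of the spatial step)."""
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row details
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a2, d2])

def compress(img, keep=0.25):
    """Zero all but roughly the largest-magnitude fraction `keep` of coefficients."""
    c = haar2d(img)
    thresh = np.quantile(np.abs(c), 1.0 - keep)
    return np.where(np.abs(c) >= thresh, c, 0.0)

img = np.outer(np.arange(8, dtype=float), np.ones(8))  # smooth test image
coeffs = compress(img, keep=0.25)
nonzero = int((coeffs != 0).sum())   # most detail coefficients are dropped
```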

  17. The impact of image-size manipulation and sugar content on children's cereal consumption.

    Science.gov (United States)

    Neyens, E; Aerts, G; Smits, T

    2015-12-01

    Previous studies have demonstrated that portion sizes and food energy-density influence children's eating behavior. However, the potential effects of front-of-pack image-sizes of serving suggestions and sugar content have not been tested. Using a mixed experimental design among young children, this study examines the effects of image-size manipulation and sugar content on cereal and milk consumption. Children poured and consumed significantly more cereal and drank significantly more milk when exposed to a larger sized image of serving suggestion as compared to a smaller image-size. Sugar content showed no main effects. Nevertheless, cereal consumption only differed significantly between small and large image-sizes when sugar content was low. An advantage of this study was the mundane setting in which the data were collected: a school's dining room instead of an artificial lab. Future studies should include a control condition, with children eating by themselves to reflect an even more natural context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Framing Service, Benefit, and Credibility Through Images and Texts: A Content Analysis of Online Promotional Messages of Korean Medical Tourism Industry.

    Science.gov (United States)

    Jun, Jungmi

    2016-07-01

    This study examines how the Korean medical tourism industry frames its service, benefit, and credibility issues through texts and images of online brochures. The results of content analysis suggest that the Korean medical tourism industry attempts to frame their medical/health services as "excellence in surgeries and cancer care" and "advanced health technology and facilities." However, the use of cost-saving appeals was limited, which can be seen as a strategy to avoid consumers' association of lower cost with lower quality services, and to stress safety and credibility.

  19. Accurate and simple method for quantification of hepatic fat content using magnetic resonance imaging: a prospective study in biopsy-proven nonalcoholic fatty liver disease.

    Science.gov (United States)

    Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki

    2010-12-01

    To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < …), even in patients with mild hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
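For a double-echo chemical-shift sequence, a common two-point estimate of the fat-signal fraction is FF = (IP - OP) / (2 * IP) from in-phase and opposed-phase magnitude images. The sketch below uses invented pixel values and omits the paper's phantom calibration step.

```python
import numpy as np

def fat_fraction(ip, op):
    """Two-point chemical-shift fat-signal fraction (sketch, no calibration)."""
    ip = np.asarray(ip, dtype=float)
    op = np.asarray(op, dtype=float)
    return (ip - op) / (2.0 * ip)

# Invented in-phase / opposed-phase magnitudes for a 2x2 region
ip = np.array([[1000.0, 800.0], [900.0, 1000.0]])
op = np.array([[1000.0, 400.0], [540.0, 200.0]])
ff = fat_fraction(ip, op)   # 0 where IP == OP (no fat signal)
```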

  20. A High-Content Live-Cell Viability Assay and Its Validation on a Diverse 12K Compound Screen.

    Science.gov (United States)

    Chiaravalli, Jeanne; Glickman, J Fraser

    2017-08-01

    We have developed a new high-content cytotoxicity assay using live cells, called "ImageTOX." We used a high-throughput fluorescence microscope system, image segmentation software, and the combination of Hoechst 33342 and SYTO 17 to simultaneously score the relative size and the intensity of the nuclei, the nuclear membrane permeability, and the cell number in a 384-well microplate format. We then performed a screen of 12,668 diverse compounds and compared the results to a standard cytotoxicity assay. The ImageTOX assay identified similar sets of compounds to the standard cytotoxicity assay, while identifying more compounds having adverse effects on cell structure, earlier in treatment time. The ImageTOX assay uses inexpensive commercially available reagents and facilitates the use of live cells in toxicity screens. Furthermore, we show that we can measure the kinetic profile of compound toxicity in a high-content, high-throughput format, following the same set of cells over an extended period of time.
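A per-well readout in the spirit of ImageTOX might look like the following, assuming per-nucleus measurements already produced by the image-segmentation step. The thresholds, field names, and the `score_well` helper are hypothetical, not the assay's actual parameters.

```python
import numpy as np

def score_well(area_um2, hoechst, syto17,
               area_range=(40.0, 250.0), perm_ratio=0.5):
    """Well-level scores from per-nucleus measurements (invented thresholds)."""
    area_um2 = np.asarray(area_um2, float)
    hoechst = np.asarray(hoechst, float)
    syto17 = np.asarray(syto17, float)
    abnormal_size = (area_um2 < area_range[0]) | (area_um2 > area_range[1])
    # High SYTO 17 relative to Hoechst taken here as a permeability flag
    permeable = syto17 > perm_ratio * hoechst
    return {
        "cell_count": int(area_um2.size),
        "frac_abnormal_size": float(abnormal_size.mean()),
        "frac_permeable": float(permeable.mean()),
    }

scores = score_well(area_um2=[120, 30, 180, 300],
                    hoechst=[1000, 400, 950, 900],
                    syto17=[100, 600, 120, 800])
```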

  1. Comparative genome analysis to identify SNPs associated with high oleic acid and elevated protein content in soybean.

    Science.gov (United States)

    Kulkarni, Krishnanand P; Patil, Gunvant; Valliyodan, Babu; Vuong, Tri D; Shannon, J Grover; Nguyen, Henry T; Lee, Jeong-Dong

    2018-03-01

    The objective of this study was to determine the genetic relationship between the oleic acid and protein content. The genotypes having high oleic acid and elevated protein (HOEP) content were crossed with five elite lines having normal oleic acid and average protein (NOAP) content. The selected accessions were grown at six environments in three different locations and phenotyped for protein, oil, and fatty acid components. The mean protein content of parents, HOEP, and NOAP lines was 34.6%, 38%, and 34.9%, respectively. The oleic acid concentration of parents, HOEP, and NOAP lines was 21.7%, 80.5%, and 20.8%, respectively. The HOEP plants carried both FAD2-1A (S117N) and FAD2-1B (P137R) mutant alleles contributing to the high oleic acid phenotype. Comparative genome analysis using whole-genome resequencing data identified six genes having single nucleotide polymorphism (SNP) significantly associated with the traits analyzed. A single SNP in the putative gene Glyma.10G275800 was associated with the elevated protein content, and palmitic, oleic, and linoleic acids. The genes from the marker intervals of previously identified QTL did not carry SNPs associated with protein content and fatty acid composition in the lines used in this study, indicating that all the genes except Glyma.10G278000 may be the new genes associated with the respective traits.

  2. Vanillin inhibits translation and induces messenger ribonucleoprotein (mRNP) granule formation in Saccharomyces cerevisiae: application and validation of high-content, image-based profiling.

    Science.gov (United States)

    Iwaki, Aya; Ohnuki, Shinsuke; Suga, Yohei; Izawa, Shingo; Ohya, Yoshikazu

    2013-01-01

    Vanillin, generated by acid hydrolysis of lignocellulose, acts as a potent inhibitor of the growth of the yeast Saccharomyces cerevisiae. Here, we investigated the cellular processes affected by vanillin using high-content, image-based profiling. Among 4,718 non-essential yeast deletion mutants, the morphology of those defective in the large ribosomal subunit showed significant similarity to that of vanillin-treated cells. The defects in these mutants were clustered in three domains of the ribosome: the mRNA tunnel entrance, exit and backbone required for small subunit attachment. To confirm that vanillin inhibited ribosomal function, we assessed polysome and messenger ribonucleoprotein granule formation after treatment with vanillin. Analysis of polysome profiles showed disassembly of the polysomes in the presence of vanillin. Processing bodies and stress granules, which are composed of non-translating mRNAs and various proteins, were formed after treatment with vanillin. These results suggest that vanillin represses translation in yeast cells.
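Profile matching of the kind used here (comparing treated-cell morphology to deletion-mutant morphologies) can be sketched as cosine similarity between feature vectors. The mutant names and feature values below are invented for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two morphological feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical z-scored morphological profiles
vanillin_profile = np.array([1.8, -0.2, 0.9, 1.1])
mutants = {
    "rpl6a (large ribosomal subunit)": np.array([1.6, -0.1, 1.0, 0.9]),
    "unrelated mutant": np.array([-0.8, 1.2, -0.5, 0.3]),
}

# Rank mutants by similarity to the treated-cell profile
ranked = sorted(mutants, key=lambda m: cosine(vanillin_profile, mutants[m]),
                reverse=True)
```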

  3. High-resolution electron microscope image analysis approach for superconductor YBa2Cu3O7-x

    International Nuclear Information System (INIS)

    Xu, J.; Lu, F.; Jia, C.; Hua, Z.

    1991-01-01

    In this paper, an HREM (High-resolution electron microscope) image analysis approach has been developed. The image filtering, segmentation and particles extraction based on gray-scale mathematical morphological operations, are performed on the original HREM image. The final image is a pseudocolor image, with the background removed, relatively uniform brightness, filtered slanting elongation, regular shape for every kind of particle, and particle boundaries that no longer touch each other so that the superconducting material structure can be shown clearly

  4. Screening of siRNA nanoparticles for delivery to airway epithelial cells using high-content analysis

    LENUS (Irish Health Repository)

    Hibbitts, Alan

    2011-08-01

    Aims: Delivery of siRNA to the lungs via inhalation offers a unique opportunity to develop a new treatment paradigm for a range of respiratory conditions. However, progress has been greatly hindered by safety and delivery issues. This study developed a high-throughput method for screening novel nanotechnologies for pulmonary siRNA delivery. Methodology: Following physicochemical analysis, the ability of PEI–PEG–siRNA nanoparticles to facilitate siRNA delivery was determined using high-content analysis (HCA) in Calu-3 cells. Results obtained from HCA were validated using confocal microscopy. Finally, cytotoxicity of the PEI–PEG–siRNA particles was analyzed by HCA using the Cellomics® multiparameter cytotoxicity assay. Conclusion: PEI–PEG–siRNA nanoparticles facilitated increased siRNA uptake and luciferase knockdown in Calu-3 cells compared with PEI–siRNA.

  5. Microscopy image segmentation tool: Robust image data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Valmianski, Ilya, E-mail: ivalmian@ucsd.edu; Monton, Carlos; Schuller, Ivan K. [Department of Physics and Center for Advanced Nanoscience, University of California San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States)

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  6. Microscopy image segmentation tool: Robust image data analysis

    Science.gov (United States)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  7. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  8. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

    The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative search of similar prostate image patches. Given a representative image patch (sub-image), the algorithm will return a ranked ensemble of image patches throughout the entire whole-slide histology section which exhibits the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcasted to all worker processes. Each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in the image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens. We have achieved an excellent image retrieval performance. The recall rate within the first 40 rank retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
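A simplified, grayscale version of the hierarchical annular histogram (HAH) idea: intensity histograms over concentric square annuli, concatenated into one descriptor. The real HAH is color-based and refined in a second stage; the ring and bin counts here are arbitrary.

```python
import numpy as np

def annular_histogram(patch, n_rings=4, n_bins=8):
    """Grayscale HAH-style descriptor: per-ring histograms, concatenated."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev distance from the center gives square rings
    ring = np.maximum(np.abs(yy - cy), np.abs(xx - cx))
    edges = np.linspace(0, ring.max() + 1e-9, n_rings + 1)
    feats = []
    for i in range(n_rings):
        mask = (ring >= edges[i]) & (ring < edges[i + 1])
        hist, _ = np.histogram(patch[mask], bins=n_bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))   # normalize per ring
    return np.concatenate(feats)

patch = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(float)
desc = annular_histogram(patch)   # length n_rings * n_bins
```

Matching patches would then reduce to comparing such descriptors (e.g. by histogram intersection), ring by ring from the center outward.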

  9. FT-IR imaging for quantitative determination of liver fat content in non-alcoholic fatty liver.

    Science.gov (United States)

    Kochan, K; Maslak, E; Chlopicki, S; Baranska, M

    2015-08-07

    In this work we apply FT-IR imaging of large areas of liver tissue cross-section samples (∼5 cm × 5 cm) for quantitative assessment of steatosis in a murine model of Non-Alcoholic Fatty Liver Disease (NAFLD). We quantified the area of liver tissue occupied by lipid droplets (LDs) by FT-IR imaging and Oil Red O (ORO) staining for comparison. Two alternative FT-IR based approaches are presented. The first, straightforward method, was based on average spectra from tissues and provided values of the fat content by using a PLS regression model and the reference method. The second one – the chemometric-based method – enabled us to determine the values of the fat content, independently of the reference method, by means of k-means cluster (KMC) analysis. In summary, FT-IR images of large liver sections may prove useful for quantifying liver steatosis without the need for tissue staining.
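The k-means cluster (KMC) step can be illustrated with a tiny one-dimensional k-means on a synthetic lipid-band intensity feature. The paper clusters full spectra; everything below (data, helper, cluster count) is invented for the sketch.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20, seed=0):
    """Minimal 1-D k-means (illustrative, not a library implementation)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(2)
tissue = rng.normal(0.2, 0.05, 900)   # low lipid-band intensity pixels
lipid = rng.normal(1.0, 0.1, 100)     # lipid-droplet pixels
intensity = np.concatenate([tissue, lipid])

labels, centers = kmeans_1d(intensity)
lipid_cluster = int(np.argmax(centers))
area_fraction = float((labels == lipid_cluster).mean())  # ~0.1 by construction
```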

  10. Next-generation technologies for spatial proteomics: Integrating ultra-high speed MALDI-TOF and high mass resolution MALDI FTICR imaging mass spectrometry for protein analysis.

    Science.gov (United States)

    Spraggins, Jeffrey M; Rizzo, David G; Moore, Jessica L; Noto, Michael J; Skaar, Eric P; Caprioli, Richard M

    2016-06-01

    MALDI imaging mass spectrometry is a powerful analytical tool enabling the visualization of biomolecules in tissue. However, there are unique challenges associated with protein imaging experiments including the need for higher spatial resolution capabilities, improved image acquisition rates, and better molecular specificity. Here we demonstrate the capabilities of ultra-high speed MALDI-TOF and high mass resolution MALDI FTICR IMS platforms as they relate to these challenges. High spatial resolution MALDI-TOF protein images of rat brain tissue and cystic fibrosis lung tissue were acquired at image acquisition rates >25 pixels/s. Structures as small as 50 μm were spatially resolved and proteins associated with host immune response were observed in cystic fibrosis lung tissue. Ultra-high speed MALDI-TOF enables unique applications including megapixel molecular imaging as demonstrated for lipid analysis of cystic fibrosis lung tissue. Additionally, imaging experiments using MALDI FTICR IMS were shown to produce data with high mass accuracy and high resolving power (… at m/z 5000) for proteins up to ∼20 kDa. Analysis of clear cell renal cell carcinoma using MALDI FTICR IMS identified specific proteins localized to healthy tissue regions, within the tumor, and also in areas of increased vascularization around the tumor. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Representations of Codeine Misuse on Instagram: Content Analysis.

    Science.gov (United States)

    Cherian, Roy; Westbrook, Marisa; Ramo, Danielle; Sarkar, Urmimala

    2018-03-20

    Prescription opioid misuse has doubled over the past 10 years and is now a public health epidemic. Analysis of social media data may provide additional insights into opioid misuse to supplement the traditional approaches of data collection (eg, self-report on surveys). The aim of this study was to characterize representations of codeine misuse through analysis of public posts on Instagram to understand text phrases related to misuse. We identified hashtags and searchable text phrases associated with codeine misuse by analyzing 1156 sequential Instagram posts over the course of 2 weeks from May 2016 to July 2016. Content analysis of posts associated with these hashtags identified the most common themes arising in images, as well as culture around misuse, including how misuse is happening and being perpetuated through social media. A majority of images (50/100; 50.0%) depicted codeine in its commonly misused form, combined with soda (lean). Codeine misuse was commonly represented with the ingestion of alcohol, cannabis, and benzodiazepines. Some images highlighted the previously noted affinity between codeine misuse and hip-hop culture or mainstream popular culture images. The prevalence of codeine misuse images, glamorizing of ingestion with soda and alcohol, and their integration with mainstream, popular culture imagery holds the potential to normalize and increase codeine misuse and overdose. To reduce harm and prevent misuse, immediate public health efforts are needed to better understand the relationship between the potential normalization, ritualization, and commercialization of codeine misuse. ©Roy Cherian, Marisa Westbrook, Danielle Ramo, Urmimala Sarkar. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 20.03.2018.

  12. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  13. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  14. Assay of Calcium Transients and Synapses in Rat Hippocampal Neurons by Kinetic Image Cytometry and High-Content Analysis: An In Vitro Model System for Postchemotherapy Cognitive Impairment.

    Science.gov (United States)

    McDonough, Patrick M; Prigozhina, Natalie L; Basa, Ranor C B; Price, Jeffrey H

    2017-07-01

    Postchemotherapy cognitive impairment (PCCI) is commonly exhibited by cancer patients treated with a variety of chemotherapeutic agents, including the endocrine disruptor tamoxifen (TAM). The etiology of PCCI is poorly understood. Our goal was to develop high-throughput assay methods to test the effects of chemicals on neuronal function applicable to PCCI. Rat hippocampal neurons (RHNs) were plated in 96- or 384-well dishes and exposed to test compounds (forskolin [FSK], 17β-estradiol [ES], TAM, or fulvestrant [FUL], aka ICI 182,780) for 6-14 days. Kinetic Image Cytometry™ (KIC™) methods were developed to quantify spontaneously occurring intracellular calcium transients representing the activity of the neurons, and high-content analysis (HCA) methods were developed to quantify the expression, colocalization, and puncta formed by synaptic proteins (postsynaptic density protein-95 [PSD-95] and presynaptic protein Synapsin-1 [Syn-1]). As quantified by KIC, FSK increased the occurrence and synchronization of the calcium transients, indicating stimulatory effects on RHN activity, whereas TAM had inhibitory effects. As quantified by HCA, FSK also increased PSD-95 puncta and PSD-95:Syn-1 colocalization, whereas ES increased the puncta of both PSD-95 and Syn-1 with little effect on colocalization. The estrogen receptor antagonist FUL also increased PSD-95 puncta. In contrast, TAM reduced Syn-1 and PSD-95:Syn-1 colocalization, consistent with its inhibitory effects on the calcium transients. Thus, TAM reduced activity and synapse formation by the RHNs, which may relate to the ability of this agent to cause PCCI. The results illustrate that KIC and HCA can be used to quantify neurotoxic and neuroprotective effects of chemicals in RHNs to investigate mechanisms and potential therapeutics for PCCI.

  15. Quantitative analysis of γ–oryzanol content in cold pressed rice bran oil by TLC–image analysis method

    Directory of Open Access Journals (Sweden)

    Apirak Sakunpak

    2014-02-01

    Conclusions: The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.

  16. Experiences of High School Students about the Predictors of Tobacco Use: a Directed Qualitative Content Analysis

    Directory of Open Access Journals (Sweden)

    Mahmoud Ghasemi

    2015-12-01

    Full Text Available Background and Objectives: Tobacco use is one of the most important risk factors contributing to the global burden of disease. Given the increasing prevalence of tobacco use, the aim of the present study was to explore the experiences of high school students regarding the determinants of use and non-use of tobacco (cigarettes and hookah), based on protection motivation theory. Materials and Methods: This qualitative study, based on content analysis, was carried out over five months, from 22 November 2014 to 20 April 2015, in male high schools in Noshahr. Data were collected through semi-structured interviews with 21 male high school students, of whom 7 smoked cigarettes, 7 used hookah and 7 did not use any type of tobacco. Data were analyzed using directed qualitative content analysis. Results: The analysis yielded 99 primary codes, which were categorized into the 9 predetermined constructs of protection motivation theory: perceived susceptibility, perceived severity, fear, perceived self-efficacy, response cost, perceived response efficacy, extrinsic reward, intrinsic reward, and protection motivation. The findings showed that the most important predictors of tobacco use were response cost and high perceived rewards, while the most important predictors of non-use were perceived susceptibility, perceived severity and high self-efficacy. Conclusions: The findings also showed that peer pressure, spending time in groups that use tobacco and the absence of alternative recreational activities are among the most important drivers of tobacco use. It is therefore suggested that health planners implement comprehensive interventions targeting the individual and environmental factors that influence tobacco use, in order to reduce smoking.

  17. Junk Food Marketing on Instagram: Content Analysis.

    Science.gov (United States)

    Vassallo, Amy Jo; Kelly, Bridget; Zhang, Lelin; Wang, Zhiyong; Young, Sarah; Freeman, Becky

    2018-06-05

    Omnipresent marketing of processed foods is a key driver of dietary choices and brand loyalty. Market data indicate a shift in food marketing expenditures to digital media, including social media. These platforms have greater potential to influence young people, given their unique peer-to-peer transmission and youths' susceptibility to social pressures. The aim of this study was to investigate the frequency of images and videos posted by the most popular, energy-dense, nutrient-poor food and beverage brands on Instagram and the marketing strategies used in these images, including any healthy choice claims. A content analysis of 15 accounts was conducted, using 12 months of Instagram posts from March 15, 2015, to March 15, 2016. A pre-established hierarchical coding guide was used to identify the primary marketing strategy of each post. Each brand used 6 to 11 different marketing strategies in their Instagram accounts; however, they often adhered to an overall theme such as athleticism or relatable consumers. There was a high level of branding, although not necessarily product information on all accounts, and there were very few health claims. Brands are using social media platforms such as Instagram to market their products to a growing number of consumers, using a high frequency of targeted and curated posts that manipulate consumer emotions rather than present information about their products. Policy action is needed that better reflects the current media environment. Public health bodies also need to engage with emerging media platforms and develop compelling social counter-marketing campaigns. ©Amy Jo Vassallo, Bridget Kelly, Lelin Zhang, Zhiyong Wang, Sarah Young, Becky Freeman. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 05.06.2018.

  18. Adolescent judgment of sexual content on television: implications for future content analysis research.

    Science.gov (United States)

    Manganello, Jennifer A; Henderson, Vani R; Jordan, Amy; Trentacoste, Nicole; Martin, Suzanne; Hennessy, Michael; Fishbein, Martin

    2010-07-01

    Many studies of sexual messages in media utilize content analysis methods. At times, this research assumes that researchers and trained coders using content analysis methods and the intended audience view and interpret media content similarly. This article compares adolescents' perceptions of the presence or absence of sexual content on television to those of researchers using three different coding schemes. Results from this formative research study suggest that participants and researchers are most likely to agree with content categories assessing manifest content, and that differences exist among adolescents who view sexual messages on television. Researchers using content analysis methods to examine sexual content in media and media effects on sexual behavior should consider identifying how audience characteristics may affect interpretation of content and account for audience perspectives in content analysis study protocols when appropriate for study goals.

  19. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  20. Manganese contents of soils as determined by activation analysis

    International Nuclear Information System (INIS)

    El-Kholi, A.F.; Hamdy, A.A.; Al Metwally, A.I.; El-Damaty, A.H.

    1976-01-01

    The object of this investigation was to determine total manganese by means of neutron activation analysis and to evaluate this technique against the corresponding data obtained by conventional chemical analysis. The data revealed that the values of total manganese in calcareous soils obtained by chemical analysis and by neutron activation analysis were similar. Activation analysis can therefore be recommended as a quicker, less tedious and less time-consuming laboratory method than conventional chemical techniques for the determination of Mn content in both soils and plants, owing to its great specificity, sensitivity and simplicity. Statistical analysis showed a significant correlation at the 5% probability level between the manganese content of Soybean plants and the total manganese determined by activation and chemical analysis, indicating that in these highly calcareous soils of low total manganese content, this fraction has to be considered as far as available soil manganese is concerned.

  1. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    Science.gov (United States)

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling. PMID:27096865
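    The travel-time route to water content described in this abstract can be sketched in a few lines: the two-way travel time along the probe yields an apparent permittivity, and the empirical Topp equation maps that permittivity to volumetric water content. The coefficients below are the standard Topp et al. values for mineral soils, so applying them to clay-rock is only an approximation, and the rod length and travel time in the usage example are invented for illustration.

```python
C = 299_792_458.0  # speed of light in vacuum (m/s)

def apparent_permittivity(travel_time_s: float, rod_length_m: float) -> float:
    """Apparent relative permittivity Ka from the two-way travel time t
    along a probe of rod length L: Ka = (c * t / (2 * L))**2."""
    return (C * travel_time_s / (2.0 * rod_length_m)) ** 2

def topp_water_content(ka: float) -> float:
    """Volumetric water content via the empirical Topp equation,
    a third-order polynomial in the apparent permittivity Ka."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Hypothetical example: a 0.15 m probe and a 2.0 ns two-way travel time
ka = apparent_permittivity(2.0e-9, 0.15)
theta = topp_water_content(ka)
print(f"Ka = {ka:.2f}, volumetric water content = {theta:.3f}")
```

    The same apparent permittivity could instead be fed to a mixture model such as the LRM when the porosity of the material is known, which is the route the study recommends for clay-rock.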

  2. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    Directory of Open Access Journals (Sweden)

    Thierry Bore

    2016-04-01

    Full Text Available Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.

  4. Analysis of Levodopa Content in Commercial Mucuna pruriens Products Using High-Performance Liquid Chromatography with Fluorescence Detection.

    Science.gov (United States)

    Soumyanath, Amala; Denne, Tanya; Hiller, Amie; Ramachandran, Shaila; Shinto, Lynne

    2018-02-01

    Mucuna pruriens (MP) seeds contain levodopa (up to 2% by weight) and have been used in traditional Indian medicine to treat an illness named "Kampavata," now understood to be Parkinson's disease (PD). Studies have shown MP to be beneficial, and even superior, to levodopa alone in treating PD symptoms. Commercial products containing MP are readily available from online and retail sources to patients and physicians. Products often contain extracts of MP seeds, with significantly higher levodopa content than the seeds. However, MP products have limited regulatory controls with respect to quality and content of active ingredient. The aim of this study was to apply a quantitative method to determine levodopa content in readily available MP products that might be used by patients or in research studies. Levodopa present in six commercial MP products was quantified by solvent extraction followed by reversed-phase high-performance liquid chromatography (HPLC) coupled to fluorescence detection (FD). Certificates of analysis (COA) were obtained, from manufacturers of MP products, to assess the existence and implementation of specifications for levodopa content. HPLC-FD analysis revealed that the levodopa content of the six commercial MP products varied from 6% to 141% of individual label claims. No product contained levodopa within normal pharmacopeial limits of 90%-110% label claim. The maximum daily dose of levodopa delivered by the products varied from 14.4 to 720 mg/day. COAs were inconsistent in specifications for and verification of levodopa content. The commercial products tested varied widely in levodopa content, sometimes deviating widely from the label claim. These deficiencies could impact efficacy and safety of MP products used by PD patients and compromise the results of scientific studies on MP products. The HPLC-FD method described in this study could be utilized by both manufacturers and scientific researchers to verify levodopa content of MP products.
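    The label-claim comparison reported in this abstract is simple arithmetic, sketched below. The product names and milligram values are hypothetical, and the 90%-110% window is the pharmacopeial range quoted in the abstract.

```python
def percent_label_claim(measured_mg: float, claimed_mg: float) -> float:
    """Measured levodopa content expressed as a percentage of the label claim."""
    return 100.0 * measured_mg / claimed_mg

def within_pharmacopeial_limits(pct: float, lo: float = 90.0, hi: float = 110.0) -> bool:
    """Typical pharmacopeial acceptance range of 90-110% of label claim."""
    return lo <= pct <= hi

# Hypothetical products (names and amounts invented for illustration)
products = {
    "Product A": (4.3, 50.0),   # (measured mg, claimed mg) per dose
    "Product B": (52.1, 50.0),
}
for name, (measured, claimed) in products.items():
    pct = percent_label_claim(measured, claimed)
    verdict = "within" if within_pharmacopeial_limits(pct) else "outside"
    print(f"{name}: {pct:.0f}% of label claim, {verdict} 90-110% limits")
```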

  5. ImageGrouper: a group-oriented user interface for content-based image retrieval and digital image arrangement

    NARCIS (Netherlands)

    Nakazato, Munehiro; Manola, L.; Huang, Thomas S.

    In content-based image retrieval (CBIR), experimental (trial-and-error) query with relevance feedback is essential for successful retrieval. Unfortunately, the traditional user interfaces are not suitable for trying different combinations of query examples. This is because first, these systems

  6. Nonisothermal Thermogravimetric Analysis of Thai Lignite with High CaO Content

    Science.gov (United States)

    Pintana, Pakamon

    2013-01-01

    Thermal behaviors and combustion kinetics of Thai lignite with different SO3-free CaO contents were investigated. A nonisothermal thermogravimetric method was carried out under an oxygen environment at heating rates of 10, 30, and 50°C min−1 from ambient temperature up to 1300°C. The Flynn-Wall-Ozawa (FWO) and Kissinger-Akahira-Sunose (KAS) methods were adopted to estimate the apparent activation energy (E) for the thermal decomposition of these coals. Different thermal degradation behaviors were observed in lignites with low (14%) and high (42%) CaO content. The activation energy of lignite combustion was found to vary with the conversion fraction. In comparison with the KAS method, higher E values were obtained by the FWO method for all conversions considered. The high-CaO lignite was observed to have a higher activation energy than the low-CaO coal. PMID:24250259
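    The two isoconversional fits named in this abstract can be sketched as follows: at a fixed conversion fraction, FWO fits ln β against 1/T with slope −1.052·E/R (Doyle approximation), while KAS fits ln(β/T²) against 1/T with slope −E/R. The assumed activation energy and the synthetic temperatures below are illustrative, not the paper's data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy_fwo(betas, temps_K):
    """Flynn-Wall-Ozawa: slope of ln(beta) vs 1/T is -1.052*E/R."""
    slope = np.polyfit(1.0 / np.asarray(temps_K), np.log(betas), 1)[0]
    return -slope * R / 1.052

def activation_energy_kas(betas, temps_K):
    """Kissinger-Akahira-Sunose: slope of ln(beta/T^2) vs 1/T is -E/R."""
    t = np.asarray(temps_K)
    slope = np.polyfit(1.0 / t, np.log(np.asarray(betas) / t**2), 1)[0]
    return -slope * R

# Synthetic check with an assumed E = 150 kJ/mol and the paper's heating rates
E_true, C = 150e3, 30.0
betas = np.array([10.0, 30.0, 50.0])        # heating rates, K/min
temps = 1.052 * E_true / (R * (C - np.log(betas)))  # invert the FWO line
print(f"FWO estimate: {activation_energy_fwo(betas, temps) / 1e3:.1f} kJ/mol")
```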

  7. A content analysis of tweets about high-potency marijuana.

    Science.gov (United States)

    Cavazos-Rehg, Patricia A; Sowles, Shaina J; Krauss, Melissa J; Agbonavbare, Vivian; Grucza, Richard; Bierut, Laura

    2016-09-01

    "Dabbing" involves heating extremely concentrated forms of marijuana to high temperatures and inhaling the resulting vapor. We studied themes describing the consequences of using highly concentrated marijuana by examining the dabbing-related content on Twitter. Tweets containing dabbing-related keywords were collected from 1/1-1/31/2015 (n=206,854). A random sample of 5000 tweets was coded for content according to pre-determined categories about dabbing-related behaviors and effects experienced using a crowdsourcing service. An examination of tweets from the full sample about respiratory effects and passing out was then conducted by selecting tweets with relevant keywords. Among the 5000 randomly sampled tweets, 3540 (71%) were related to dabbing marijuana concentrates. The most common themes included mentioning current use of concentrates (n=849; 24%), the intense high and/or extreme effects from dabbing (n=763; 22%) and excessive/heavy dabbing (n=517; 15%). Extreme effects included both physiological (n=124/333; 37%) and psychological effects (n=55/333; 17%). The most common physiologic effects, passing out (n=46/333; 14%) and respiratory effects (n=30/333; 9%), were then further studied in the full sample of tweets. Coughing was the most common respiratory effect mentioned (n=807/1179; 68%), and tweeters commonly expressed dabbing with intentions to pass out (416/915; 45%). This study adds to the limited understanding of marijuana concentrates and highlights self-reported physical and psychological effects from this type of marijuana use. Future research should further examine these effects and the potential severity of health consequences associated with concentrates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. "Appearance potent"? A content analysis of UK gay and straight men's magazines.

    Science.gov (United States)

    Jankowski, Glen S; Fawkner, Helen; Slater, Amy; Tiggemann, Marika

    2014-09-01

    With little actual appraisal, a more 'appearance potent' (i.e., a reverence for appearance ideals) subculture has been used to explain gay men's greater body dissatisfaction in comparison to straight men's. This study sought to assess the respective appearance potency of each subculture by a content analysis of 32 issues of the most read gay (Attitude, Gay Times) and straight men's magazines (Men's Health, FHM) in the UK. Images of men and women were coded for their physical characteristics, objectification and nudity, as were the number of appearance adverts and articles. The gay men's magazines featured more images of men that were appearance ideal, nude and sexualized than the straight men's magazines. The converse was true for the images of women and appearance adverts. Although more research is needed to understand the effect of this content on the viewer, the findings are consistent with a more appearance potent gay male subculture. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  9. Fruit-related terms and images on food packages and advertisements affect children's perceptions of foods' fruit content.

    Science.gov (United States)

    Heller, Rebecca; Martin-Biggers, Jennifer; Berhaupt-Glickstein, Amanda; Quick, Virginia; Byrd-Bredbenner, Carol

    2015-10-01

    To determine whether food label information and advertisements for foods containing no fruit cause children to have a false impression of the foods' fruit content. In the food label condition, a trained researcher showed each child sixteen different food label photographs depicting front-of-package labels that varied with regard to fruit content (i.e. real fruit v. sham fruit) and label elements. In the food advertisement condition, children viewed sixteen 30 s television food advertisements with similar fruit content and label elements as in the food label condition. After viewing each food label and advertisement, children responded to the question 'Did they use fruit to make this?' with responses of yes, no or don't know. Schools, day-care centres, after-school programmes and other community groups. Children aged 4-7 years. In the food label condition, χ2 analysis of within fruit content variation differences indicated children (n 58; mean age 4·2 years) were significantly more accurate in identifying real fruit foods as the label's informational load increased and were least accurate when neither a fruit name nor an image was on the label. Children (n 49; mean age 5·4 years) in the food advertisement condition were more likely to identify real fruit foods when advertisements had fruit images compared with when no image was included, while fruit images in advertisements for sham fruit foods significantly reduced accuracy of responses. Findings suggest that labels and advertisements for sham fruit foods mislead children with regard to the food's real fruit content.

  10. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  11. Rapid Analysis and Exploration of Fluorescence Microscopy Images

    OpenAIRE

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason; Steininger, Robert J; Wu, Lani; Altschuler, Steven

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.

  12. Analysis of high-throughput plant image data with the information system IAP

    Directory of Open Access Journals (Sweden)

    Klukas Christian

    2012-06-01

    Full Text Available This work presents a sophisticated information system, the Integrated Analysis Platform (IAP), an approach supporting large-scale image analysis for different species and imaging systems. In its current form, IAP supports the investigation of Maize, Barley and Arabidopsis plants based on images obtained in different spectra.

  13. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    Science.gov (United States)

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application, adapted for large numbers of images. It provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse image qualities. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
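    The abstract does not state the exact volume formula SpheroidSizer applies to the measured axes. A common approximation, sketched below under that assumption rather than as the tool's verified implementation, treats the spheroid as an ellipsoid of revolution about its major axis, V = (π/6)·a·b², where a and b are the full major and minor axial lengths.

```python
import math

def spheroid_volume(major_axis: float, minor_axis: float) -> float:
    """Approximate a tumor spheroid as an ellipsoid of revolution about
    its major axis: V = (pi/6) * a * b**2, with a and b the full axial
    lengths (equivalent to (4/3) * pi * (a/2) * (b/2)**2)."""
    return math.pi / 6.0 * major_axis * minor_axis**2

# A perfectly round spheroid reduces to the volume of a sphere
d = 500.0  # diameter in micrometers (illustrative value)
print(f"volume = {spheroid_volume(d, d):.3e} um^3")
```

    Measuring the two axes per spheroid and feeding them through such a formula is what turns the automated contour detection into the per-spheroid volumes the software exports.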

  14. High PRF ultrafast sliding compound doppler imaging: fully qualitative and quantitative analysis of blood flow

    Science.gov (United States)

    Kang, Jinbum; Jang, Won Seuk; Yoo, Yangmo

    2018-02-01

    Ultrafast compound Doppler imaging based on plane-wave excitation (UCDI) can be used to evaluate cardiovascular diseases at high frame rates. In particular, it provides fully quantifiable flow analysis over a large region of interest with high spatio-temporal resolution. However, the pulse-repetition frequency (PRF) in the UCDI method is limited for high-velocity flow imaging, since there is a tradeoff between the number of plane-wave angles (N) and acquisition time. In this paper, we present a high-PRF ultrafast sliding compound Doppler imaging (HUSDI) method to improve quantitative flow analysis. With the HUSDI method, full scanline images (i.e. each tilted plane wave data set) in a Doppler frame buffer are consecutively summed using a sliding window to create high-quality ensemble data, so that there is no reduction in frame rate or flow sensitivity. In addition, by updating the compounding set with a certain time difference (i.e. the sliding window step size, L), the HUSDI method allows various Doppler PRFs from the same acquisition data, enabling a fully qualitative, retrospective flow assessment. To evaluate the performance of the proposed HUSDI method, simulation, in vitro and in vivo studies were conducted under diverse flow circumstances. In the simulation and in vitro studies, the HUSDI method showed improved hemodynamic representations without reducing either temporal resolution or sensitivity compared to the UCDI method. For the quantitative analysis, the root mean squared velocity error (RMSVE) was measured using 9 angles (-12° to 12°) with L of 1-9, and the results were found to be comparable to those of the UCDI method (L = N = 9), i.e. ⩽0.24 cm s⁻¹, for all L values. For the in vivo study, the flow data acquired over a full cardiac cycle of the femoral vessels of a healthy volunteer were analyzed using a PW spectrogram, and arterial and venous flows were successfully assessed with high Doppler PRF (e.g. 5 kHz at L
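The sliding-window compounding at the heart of HUSDI can be sketched as follows; for brevity each plane-wave "frame" is reduced to a single number standing in for per-pixel data, and the function name is illustrative:

```python
def sliding_compound(frames, n_angles, step):
    """Sum n_angles consecutive tilted plane-wave frames with a sliding
    window that advances by `step` (the L in HUSDI).

    step == n_angles reproduces conventional UCDI compounding; smaller
    steps reuse frames across windows, raising the effective Doppler PRF
    by roughly a factor of n_angles / step from the same acquisition.
    Frames are scalars here; real frames would be summed elementwise.
    """
    out = []
    start = 0
    while start + n_angles <= len(frames):
        out.append(sum(frames[start:start + n_angles]))
        start += step
    return out
```

With 6 frames and N = 3, a step of L = 3 yields the conventional 2 compound frames, while L = 1 yields 4 overlapping compound frames from the same data.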

  16. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors alone were not specific enough to distinguish individual kernels.
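Extending the RGB features with the hue descriptor to form the RGBH set can be sketched with the standard library; the function name and the use of per-kernel mean colour values as input are assumptions, and the discriminant analysis itself would be done separately (e.g. with a statistics package):

```python
import colorsys

def rgbh_descriptor(r, g, b):
    """Build an RGBH feature vector from a kernel's mean colour.

    colorsys.rgb_to_hls expects components in [0, 1] and returns
    (hue, lightness, saturation), with hue in [0, 1).
    """
    h, _l, _s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return [r, g, b, h]
```

For a pure red kernel region the hue is 0.0; for pure green it is 1/3, which is the extra separation the HSL hue contributes beyond the raw RGB values.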

  17. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct
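The query-time matching step can be sketched as follows; the construction of the 37-dimensional fingerprints (the filter-bank responses) is omitted, and the Euclidean distance metric and function names are assumptions:

```python
import math

def match_fingerprints(query, database, k=3):
    """Rank database images by Euclidean distance between their
    low-dimensional texture fingerprints (37-dimensional in the paper;
    any fixed length works here) and return the k closest image names.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(database.items(), key=lambda kv: dist(query, kv[1]))
    return [name for name, _ in ranked[:k]]
```

Because each fingerprint is only a handful of floats, scanning even several thousand entries this way takes well under a second, consistent with the query times reported above.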

  18. High precision Cross-correlated imaging in Few-mode fibers

    DEFF Research Database (Denmark)

    Muliar, Olena; Usuga Castaneda, Mario A.; Kristensen, Torben

    2017-01-01

    ...) in a few-mode fiber (FMF) are used as multiple spatial communication channels, comes in this context as a viable approach to enable the optimization of high-capacity links. From this perspective, it becomes highly necessary to possess a diagnostic tool for the precise modal characterization of FMFs. Among existing approaches for modal content analysis, several methods such as S2 and C2 in the time and frequency domain are available. In this contribution we present an improved time-domain cross-correlated (C2) imaging technique for the experimental evaluation of modal properties in HOM fibers over a broad range..., allowing us to distinguish differential time delays between HOMs on the picosecond timescale. Broad wavelength scanning in combination with spectral shaping allows us to estimate the modal behavior of FMF without prior knowledge of the fiber parameters. We performed our demonstration at wavelengths from...

  19. Evaluation of the characteristics of high burnup and high plutonium content mixed oxide (MOX) fuel

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-08-15

    Two kinds of MOX fuel irradiation tests, i.e., an irradiation test of MOX up to high burnup and an irradiation test of MOX with high plutonium content, have been performed from JFY 2007 for five years in order to establish reliable technical data concerning MOX fuel behavior during irradiation, which are needed for the safety regulation of MOX fuel. The high burnup MOX irradiation test consists of irradiation extension and post irradiation examination (PIE). The activities done in JFY 2011 are destructive post irradiation examination (D-PIE), such as EPMA and SIMS at the CEA (Commissariat à l'Énergie Atomique) facility at Cadarache, and PIE data analysis. In the frame of the irradiation test of high plutonium content MOX fuel, MOX fuel rods with about 14 wt% Pu content are being irradiated at the BR-2 reactor and the corresponding PIE is being done at the PIE facility of SCK•CEN (Studiecentrum voor Kernenergie/Centre d'Étude de l'Énergie Nucléaire) in Belgium. The activities done in JFY 2011 are non-destructive post irradiation examination (ND-PIE), D-PIE, and PIE data analysis. In this report, the results of EPMA and SIMS from the high burnup irradiation test and the results of gamma spectrometry measurements, which give the FP gas release rate, are reported. (author)

  20. Componential distribution analysis of food using near infrared ray image

    Science.gov (United States)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to "deliciousness" are usually evaluated by componential analysis, which determines the content and type of components in the food. However, componential analysis cannot resolve where in the food the components are located, and the measurement is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near infrared ray (IR) image. The advantage of our method is that it can visualize otherwise invisible components. Many components in food have characteristic absorption and reflection of light in the IR range. The component content is measured using subtraction between images taken at two wavelengths of near IR light. In this paper, we describe a method to measure food components using near IR image processing, and we show an application that visualizes the saccharose distribution in a pumpkin.
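The two-wavelength subtraction can be sketched as follows, with images as 2-D lists of reflectance values on a common scale; which wavelength pair to use depends on the absorption characteristics of the target component, and the function name is illustrative:

```python
def component_map(img_absorb, img_ref):
    """Per-pixel difference between a near-IR image taken at an
    absorption wavelength of the target component and an image taken
    at a nearby reference wavelength.

    Where the component absorbs, the absorption-band image is darker,
    so larger (ref - absorb) values suggest higher component content.
    """
    return [[ref - absorb for absorb, ref in zip(row_a, row_r)]
            for row_a, row_r in zip(img_absorb, img_ref)]
```

The resulting map can be rendered as a false-colour overlay to visualize, e.g., where saccharose concentrates within a slice.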

  1. UV imaging in pharmaceutical analysis

    DEFF Research Database (Denmark)

    Østergaard, Jesper

    2018-01-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution...

  2. Fat is fashionable and fit: A comparative content analysis of Fatspiration and Health at Every Size® Instagram images.

    Science.gov (United States)

    Webb, Jennifer B; Vinoski, Erin R; Bonar, Adrienne S; Davies, Alexandria E; Etzel, Lena

    2017-09-01

    In step with the proliferation of Thinspiration and Fitspiration content disseminated in popular web-based media, the fat acceptance movement has garnered heightened visibility within mainstream culture via the burgeoning Fatosphere weblog community. The present study extended previous Fatosphere research by comparing the shared and distinct strategies used to represent and motivate a fat-accepting lifestyle among 400 images sourced from Fatspiration- and Health at Every Size ® -themed hashtags on Instagram. Images were systematically analyzed for the socio-demographic and body size attributes of the individuals portrayed alongside content reflecting dimensions of general fat acceptance, physical appearance pride, physical activity and health, fat shaming, and eating and weight loss-related themes. #fatspiration/#fatspo-tagged images more frequently promoted fat acceptance through fashion and beauty-related activism; #healthateverysize/#haes posts more often featured physically-active portrayals, holistic well-being, and weight stigma. Findings provide insight into the common and unique motivational factors and contradictory messages encountered in these fat-accepting social media communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Frequency domain analysis of knock images

    Science.gov (United States)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
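Extracting the dominant oscillation frequency from a single-pixel luminosity trace can be sketched with a direct DFT, standing in for the FFT used in the study (the high-pass filtering of the images is omitted, and the function name is illustrative):

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Return the peak frequency (in the units of sample_rate) of a
    real-valued signal via a direct DFT over the positive, non-DC bins.

    Applied to the filtered luminosity trace of one colour channel,
    the peak bins would correspond to the resonant modes of the
    pressure wave oscillation.
    """
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC, keep positive frequencies
        coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        if abs(coef) > best_mag:
            best_k, best_mag = k, abs(coef)
    return best_k * sample_rate / n
```

Reconstructing images from the per-pixel DFT amplitudes at such a peak frequency is what visualizes the mode shapes described above.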

  4. Region Templates: Data Representation and Management for High-Throughput Image Analysis.

    Science.gov (United States)

    Teodoro, George; Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel

    2014-12-01

    We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets.
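The container idea can be sketched as follows; field and method names are illustrative only, not the framework's actual API, and the storage and scheduling machinery described above is entirely omitted:

```python
from dataclasses import dataclass, field

@dataclass
class RegionTemplate:
    """Minimal sketch of a region-template-style container: named data
    objects (points, arrays, masks, object sets, ...) grouped under one
    spatial and temporal bounding box, behind a uniform get/put
    interface that hides the underlying storage strategy.
    """
    bbox: tuple      # (x_min, y_min, x_max, y_max) in image coordinates
    t_range: tuple   # (t_start, t_end)
    data: dict = field(default_factory=dict)

    def put(self, name, obj):
        self.data[name] = obj

    def get(self, name):
        return self.data[name]
```

A tile-processing stage would receive one such container per 4K×4K tile, read its inputs by name, and write its outputs back, leaving placement (memory, disk, GPU) to the runtime.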

  5. Fourier transform infrared spectroscopic imaging and multivariate regression for prediction of proteoglycan content of articular cartilage.

    Directory of Open Access Journals (Sweden)

    Lassi Rieppo

    Full Text Available Fourier Transform Infrared (FT-IR) spectroscopic imaging has been earlier applied for the spatial estimation of the collagen and proteoglycan (PG) contents of articular cartilage (AC). However, earlier studies have been limited to the use of univariate analysis techniques, which lack the needed specificity for collagen and PGs. The aim of the present study was to evaluate the suitability of partial least squares regression (PLSR) and principal component regression (PCR) methods for the analysis of the PG content of AC. Multivariate regression models were compared with earlier used univariate methods and tested with a sample material consisting of healthy and enzymatically degraded steer AC. Chondroitinase ABC enzyme was used to increase the variation in PG content levels as compared to intact AC. Digital densitometric measurements of Safranin O-stained sections provided the reference for PG content. The results showed that multivariate regression models predict the PG content of AC significantly better than the earlier used univariate parameters of the absorbance spectrum (i.e. the area of the carbohydrate region, with or without amide I normalization) or of the second derivative spectrum. Increased molecular specificity favours the use of multivariate regression models, but they require more knowledge of chemometric analysis and extended laboratory resources for gathering reference data for establishing the models. When true molecular specificity is required, the multivariate models should be used.
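The univariate baseline that the multivariate models outperform can be sketched as follows; the band limits given here are typical literature values for cartilage FT-IR spectra, not necessarily those used in this study, and the function name is illustrative (the PLSR/PCR models themselves would be fitted with a chemometrics package):

```python
def carbohydrate_to_amide_ratio(wavenumbers, absorbance,
                                carb=(984, 1140), amide=(1585, 1720)):
    """Classical univariate PG estimate: integrated absorbance of the
    carbohydrate region normalised by the amide I band.

    `wavenumbers` (cm^-1) and `absorbance` are parallel sequences
    describing one pixel's spectrum; integration is approximated by a
    simple sum over the points falling inside each band.
    """
    def band_area(lo, hi):
        return sum(a for w, a in zip(wavenumbers, absorbance)
                   if lo <= w <= hi)
    return band_area(*carb) / band_area(*amide)
```

Computing this ratio per pixel gives the univariate PG map against which the regression models were benchmarked.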

  6. Mutual Perception of USA and China based on Content-Analysis of Media

    Directory of Open Access Journals (Sweden)

    Farida Halmuratovna Autova

    2015-12-01

    Full Text Available The article evaluates the mutual perception of the United States and China in the XXI century, based on content analysis of American and Chinese media. The research methodology includes both content and event analysis. For the content analysis we used leading weekly news magazines of the US and China, "Newsweek" and "Beijing Review". The events limiting the time frame of the analysis are Barack Obama's re-election to a second term in 2012 and Xi Jinping's accession as President of China in 2013; accordingly, we analyzed the issues of each magazine for one year before and one year after these events. The thematic areas covered by the articles (politics, economy, culture) as well as the stylistic coloring of article titles are examined. According to the results of the analysis, China perceives itself confidently in the international arena. In turn, the US, while emphasizing the speed and power of China's rise, points out the negative consequences of such a jump ("growing pains") and the challenges facing China in domestic and foreign policy, in order to create a negative image of China in the minds of American citizens.

  7. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos start playing a more important role in multimedia, we want to make them available for content-based retrieval. The ImageMiner system, developed by the AI group at the University of Bremen, is designed for content-based retrieval of single images through a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to making videos available for retrieval in a large database of videos and images, two steps are necessary: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the combination of the separate frames of a shot into one single still image. This is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner system. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings), which cover a wide range of applications.
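The histogram-based shot detection step can be sketched as follows; the abstract does not specify the distance metric or threshold, so the L1 distance used here and the function name are assumptions:

```python
def detect_shot_boundaries(frame_histograms, threshold):
    """Flag a shot boundary wherever the colour-histogram distance
    between consecutive frames exceeds a threshold.

    `frame_histograms` is one histogram (sequence of bin counts) per
    frame; returns the indices of frames that start a new shot.
    """
    cuts = []
    for i in range(1, len(frame_histograms)):
        d = sum(abs(a - b) for a, b in
                zip(frame_histograms[i - 1], frame_histograms[i]))
        if d > threshold:
            cuts.append(i)
    return cuts
```

Each detected shot's frames would then be mosaiced into the single still image that ImageMiner analyzes.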

  8. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    Science.gov (United States)

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.

  9. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn

  10. High and ultra-high b-value diffusion-weighted imaging in prostate cancer: a quantitative analysis.

    Science.gov (United States)

    Wetter, Axel; Nensa, Felix; Lipponer, Christine; Guberina, Nika; Olbricht, Tobias; Schenck, Marcus; Schlosser, Thomas W; Gratz, Marcel; Lauenstein, Thomas C

    2015-08-01

    Diffusion-weighted imaging (DWI) is routinely used in magnetic resonance imaging (MRI) of prostate cancer. However, the value of routinely using b values higher than 1000 s/mm² has not been established. Moreover, the complex diffusion behavior of malignant and benign prostate tissues hampers precise predictions of contrast in DWI images and apparent diffusion coefficient (ADC) maps. The aim was to quantitatively analyze DWI with different b values in prostate cancer and to identify the b values best suited for cancer detection. Forty-one patients with histologically proven prostate cancer were examined with high resolution T2-weighted imaging and DWI at 3 Tesla. Five different b values (0, 800, 1000, 1500, 2000 s/mm²) were applied. ADC values of tumors and reference areas were measured on ADC maps derived from different pairs of b values. Furthermore, signal intensities of DW images of tumors and reference areas were measured. For analysis, contrast ratios of ADC values and of signal intensities of DW images were calculated and compared. No significant differences were found between contrast ratios measured on ADC maps for any of the analyzed b value pairs (P = 0.43). Contrast ratios calculated from signal intensities of DW images were highest at b values of 1500 and 2000 s/mm² and differed significantly from contrast ratios at b values of 800 and 1000 s/mm² (P ...). In contrast to ADC values, contrast ratios of DW images are significantly higher at b values of 1500 and 2000 s/mm² in comparison to b values of 800 and 1000 s/mm². Therefore, the diagnostic performance of DWI in prostate cancer might be increased by applying b values higher than 1000 s/mm². © The Foundation Acta Radiologica 2014.
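The ADC values underlying such maps come from a mono-exponential signal model, S(b) = S0 · exp(−b · ADC); for a single pair of b values this reduces to a closed form, sketched here per pixel (function name illustrative):

```python
import math

def adc(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient (mm^2/s) from a two-point
    mono-exponential fit, with b values in s/mm^2.

    Solving S(b) = S0 * exp(-b * ADC) for two measurements gives
    ADC = ln(S_low / S_high) / (b_high - b_low).
    """
    return math.log(s_low / s_high) / (b_high - b_low)
```

Computing this for each of the study's b value pairs (e.g. 0/800 vs. 0/2000) is what produces the pair-specific ADC maps compared above.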

  11. FIM imaging and FIMtrack: two new tools allowing high-throughput and cost effective locomotion analysis.

    Science.gov (United States)

    Risse, Benjamin; Otto, Nils; Berh, Dimitri; Jiang, Xiaoyi; Klämbt, Christian

    2014-12-24

    The analysis of neuronal network function requires reliable measurement of behavioral traits. Since the behavior of freely moving animals is variable to a certain degree, many animals have to be analyzed to obtain statistically significant data. This in turn requires computer-assisted, automated quantification of locomotion patterns. To obtain high contrast images of small, almost translucent moving objects, a novel imaging technique based on frustrated total internal reflection, called FIM, was developed. In this setup, animals are illuminated with infrared light only at the very position of contact with the underlying crawling surface. This methodology yields very high contrast images, which are subsequently processed using established contour tracking algorithms. Based on this, we developed the FIMTrack software, which extracts a number of features needed to quantitatively describe a large variety of locomotion characteristics. During the development of this software package, we focused on an open source architecture allowing the easy addition of further modules. The program operates platform independently and is accompanied by an intuitive GUI guiding the user through data analysis. All locomotion parameter values are given in the form of CSV files allowing further data analyses. In addition, a Results Viewer integrated into the tracking software provides the opportunity to interactively review and adjust the output, as might be needed during stimulus integration. The power of FIM and FIMTrack is demonstrated by studying the locomotion of Drosophila larvae.

  12. Comparison of laser fluorimetry, high resolution gamma-ray spectrometry and neutron activation analysis techniques for determination of uranium content in soil samples

    International Nuclear Information System (INIS)

    Ghods, A.; Asgharizadeh, F.; Salimi, B.; Abbasi, A.

    2004-01-01

    Much concern is given nowadays to the exposure of the world population to natural radiation, especially to uranium, since 57% of that exposure is due to radon-222, a member of the uranium decay series. Most of the methods used for determining uranium at low concentrations require either tedious separation and preconcentration or access to special instrumentation for detecting uranium at such low levels. This study compares three techniques for uranium analysis applied to soil samples with variable uranium content. Two of these techniques, neutron activation analysis and high resolution gamma-ray spectrometry, are non-destructive, while the third, laser fluorimetry, requires chemical extraction of uranium. Standard reference materials were also analyzed to control the quality and accuracy of the work. The three techniques have quite different detection limits; moreover, results obtained by high resolution gamma-ray spectrometry rest on the assumption of secular equilibrium between uranium and its daughters, which causes deviations whenever this condition is not met. For samples with reasonable uranium content, neutron activation analysis is a rapid and reliable technique, while for low uranium content laser fluorimetry is the most appropriate and accurate technique.

  13. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs
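The segmentation step described above can be illustrated in its simplest form, connected component labelling of an already-thresholded binary image; the measurement step then reduces to per-label statistics such as grain count and area. This is a minimal stdlib sketch, not any particular package's implementation:

```python
def label_regions(binary):
    """4-connected component labelling of a binary image given as
    nested lists of 0/1 values.

    Returns (labels, n) where `labels` assigns each foreground pixel a
    region id 1..n; counting pixels per id then gives region areas,
    the most basic quantitative measurement.
    """
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                current += 1
                stack = [(y, x)]          # iterative flood fill
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and binary[cy][cx] and not labels[cy][cx]):
                        labels[cy][cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, current
```

Real materials-science segmentation (e.g. separating touching grains) needs the more advanced tools the paper reviews, but the measure-after-segment workflow is the same.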

  14. An Object-Based Image Analysis Approach for Detecting Penguin Guano in very High Spatial Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Chandi Witharana

    2016-04-01

    Full Text Available The logistical challenges of Antarctic field work and the increasing availability of very high resolution commercial imagery have driven an interest in more efficient search and classification of remotely sensed imagery. This exploratory study employed geographic object-based analysis (GEOBIA) methods to classify guano stains, indicative of chinstrap and Adélie penguin breeding areas, from very high spatial resolution (VHSR) satellite imagery, and closely examined the transferability of knowledge-based GEOBIA rules across different study sites focusing on the same semantic class. We systematically gauged the segmentation quality, classification accuracy, and the reproducibility of fuzzy rules. A master ruleset was developed based on one study site and it was re-tasked "without adaptation" and "with adaptation" on candidate image scenes comprising guano stains. Our results suggest that object-based methods incorporating the spectral, textural, spatial, and contextual characteristics of guano are capable of successfully detecting guano stains. Reapplication of the master ruleset on candidate scenes without modifications produced inferior classification results, while adapted rules produced comparable or superior results compared to the reference image. This work provides a road map to an operational "image-to-assessment pipeline" that will enable Antarctic wildlife researchers to seamlessly integrate VHSR imagery into on-demand penguin population census.

  15. Integrating high-content imaging and chemical genetics to probe host cellular pathways critical for Yersinia pestis infection.

    Directory of Open Access Journals (Sweden)

    Krishna P Kota

    Full Text Available The molecular machinery that regulates the entry and survival of Yersinia pestis in host macrophages is poorly understood. Here, we report the development of automated high-content imaging assays to quantitate the internalization of virulent Y. pestis CO92 by macrophages and the subsequent activation of host NF-κB. Implementation of these assays in a focused chemical screen identified kinase inhibitors that inhibited both of these processes. Rac-2-ethoxy-3-octadecanamido-1-propylphosphocholine (a protein kinase C inhibitor), wortmannin (a PI3K inhibitor), and parthenolide (an IκB kinase inhibitor) inhibited pathogen-induced NF-κB activation and reduced bacterial entry and survival within macrophages. Parthenolide inhibited NF-κB activation in response to stimulation with Pam3CSK4 (a TLR2 agonist), E. coli LPS (a TLR4 agonist), or Y. pestis infection, while the PI3K and PKC inhibitors were selective only for Y. pestis infection. Together, our results suggest that phagocytosis is the major stimulus for NF-κB activation in response to Y. pestis infection, and that Y. pestis entry into macrophages may involve the participation of protein kinases such as PI3K and PKC. More importantly, the automated image-based screening platform described here can be applied to the study of other bacteria in general and, in combination with chemical genetic screening, can be used to identify host cell functions facilitating the identification of novel antibacterial therapeutics.

  16. Applications of stochastic geometry in image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Kendall, W.S.; Molchanov, I.S.

    2009-01-01

    A discussion is given of various stochastic geometry models (random fields, sequential object processes, polygonal field models) which can be used in intermediate and high-level image analysis. Two examples are presented of actual image analysis problems (motion tracking in video,

  17. From text to codings: intercoder reliability assessment in qualitative content analysis.

    Science.gov (United States)

    Burla, Laila; Knierim, Birte; Barth, Jurgen; Liewald, Katharina; Duetz, Margreet; Abel, Thomas

    2008-01-01

    High intercoder reliability (ICR) is required in qualitative content analysis for assuring quality when more than one coder is involved in data analysis. The literature offers few standardized procedures for ICR assessment in qualitative content analysis. The aim of this article is to illustrate how ICR assessment can be used to improve codings in qualitative content analysis. Key steps of the procedure are presented, drawing on data from a qualitative study on patients' perspectives on low back pain. First, a coding scheme was developed using a comprehensive inductive and deductive approach. Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. The resulting kappa value of .67 can be regarded as satisfactory to solid. Moreover, varying agreement rates helped to identify problems in the coding scheme. Low agreement rates, for instance, indicated that respective codes were defined too broadly and needed clarification. In a third step, the results of the analysis were used to improve the coding scheme, leading to consistent and high-quality results. The quantitative approach of ICR assessment is a viable instrument for quality assurance in qualitative content analysis. Kappa values and close inspection of agreement rates help to estimate and increase the quality of codings. This approach facilitates good practice in coding and enhances the credibility of analysis, especially when large samples are interviewed, different coders are involved, and quantitative results are presented.
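
    The kappa statistic reported above can be computed directly from two coders' category assignments; a minimal sketch with invented codings (not the study's data):

```python
# Cohen's kappa for two coders on the same set of coding units.
# The example codings below are hypothetical, for illustration only.

def cohens_kappa(coder1, coder2):
    n = len(coder1)
    categories = set(coder1) | set(coder2)
    # observed agreement: fraction of units coded identically
    p_observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # chance agreement: product of each coder's marginal proportions
    p_chance = sum(
        (coder1.count(c) / n) * (coder2.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

coder1 = [1, 1, 0, 1, 0, 0, 1, 1]
coder2 = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(coder1, coder2)  # ≈ 0.467 for these codings
```

    Per-code agreement rates, computed the same way on the subset of units assigned each code, are what flag overly broad code definitions.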

  18. [Preliminarily application of content analysis to qualitative nursing data].

    Science.gov (United States)

    Liang, Shu-Yuan; Chuang, Yeu-Hui; Wu, Shu-Fang

    2012-10-01

    Content analysis is a methodology for objectively and systematically studying the content of communication in various formats. Content analysis in nursing research and nursing education is called qualitative content analysis. Qualitative content analysis is frequently applied to nursing research, as it allows researchers to determine categories inductively and deductively. This article examines qualitative content analysis in nursing research from theoretical and practical perspectives. We first describe how content analysis concepts such as unit of analysis, meaning unit, code, category, and theme are used. Next, we describe the basic steps involved in using content analysis, including data preparation, data familiarization, analysis unit identification, creating tentative coding categories, category refinement, and establishing category integrity. Finally, this paper introduces the concept of content analysis rigor, including dependability, confirmability, credibility, and transferability. This article elucidates the content analysis method in order to help professionals conduct systematic research that generates data that are informative and useful in practical application.

  19. [Vegetation index estimation by chlorophyll content of grassland based on spectral analysis].

    Science.gov (United States)

    Xiao, Han; Chen, Xiu-Wan; Yang, Zhen-Yu; Li, Huai-Yu; Zhu, Han

    2014-11-01

    Comparing existing remote sensing methods for estimating chlorophyll content, this paper confirms that the vegetation index is one of the most practical and popular approaches. In recent years, grassland degradation has become an increasingly serious problem. This paper first analyzes measured reflectance spectral curves and their first derivative curves from the grasslands of Songpan, Sichuan and Gongger, Inner Mongolia, conducts correlation analysis between these curves and chlorophyll content, and identifies the relationship between the red edge position (REP) and grassland chlorophyll content: the higher the chlorophyll content, the higher the red-edge inflection point (REIP) value. The paper then constructs GCI (grassland chlorophyll index) and selects the most suitable bands for retrieval. Finally, the GCI is calculated from a satellite hyperspectral image, and the results are verified against chlorophyll content data collected in the field during two experiments. The results show that, for grassland chlorophyll content, GCI is more sensitive than other chlorophyll indices and achieves higher estimation accuracy. GCI is proposed here for the first time to estimate grassland chlorophyll content, and it has wide application potential for the remote sensing retrieval of grassland chlorophyll content. In addition, the remote-sensing-based estimation method in this paper provides new research ideas for estimating other vegetation biochemical parameters, evaluating vegetation growth status, and monitoring changes in the grassland ecological environment.
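
    The red-edge position relied on above can be estimated as the wavelength of maximum slope of the reflectance curve within the red-edge window; a sketch on a synthetic spectrum (the logistic reflectance shape and the 680-750 nm window are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

# Synthetic reflectance spectrum with a red-edge inflection near 715 nm
# (illustrative; real spectra come from a field spectrometer).
wavelengths = np.arange(400, 1001, 1.0)                       # nm
reflectance = 0.05 + 0.45 / (1 + np.exp(-(wavelengths - 715) / 10))

def red_edge_position(wl, refl, lo=680, hi=750):
    """REP = wavelength of maximum first derivative in the red-edge window."""
    band = (wl >= lo) & (wl <= hi)
    deriv = np.gradient(refl, wl)
    return wl[band][np.argmax(deriv[band])]

rep = red_edge_position(wavelengths, reflectance)   # close to 715 nm here
```

    Higher chlorophyll shifts the inflection toward longer wavelengths, which is the REP/REIP relationship the abstract describes.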

  20. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently taken on a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely applied imaging modalities for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high frequency coefficients. For low frequency coefficients, a maximum selection rule based on a local energy criterion is applied for better visual perception. The fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
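
    The two fusion rules described (weighted averaging and maximum selection for high-frequency coefficients, maximum local energy for low-frequency coefficients) can be sketched on generic transform subbands; the small arrays below stand in for curvelet coefficients, and the correlation threshold of 0.5 is an illustrative assumption, not the paper's value:

```python
import numpy as np

def fuse_high(c1, c2):
    """High-frequency rule: correlation decides between weighted averaging
    (similar subbands) and maximum-absolute selection (dissimilar subbands)."""
    r = np.corrcoef(c1.ravel(), c2.ravel())[0, 1]
    if r > 0.5:                                 # similar: weighted averaging
        w = 0.5 + 0.5 * r
        return w * np.maximum(c1, c2) + (1 - w) * np.minimum(c1, c2)
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)   # maximum selection

def fuse_low(c1, c2, win=3):
    """Low-frequency rule: keep the coefficient with larger local energy."""
    def local_energy(c):
        pad = np.pad(c ** 2, win // 2, mode="edge")
        out = np.zeros_like(c, dtype=float)
        for i in range(c.shape[0]):
            for j in range(c.shape[1]):
                out[i, j] = pad[i:i + win, j:j + win].sum()
        return out
    return np.where(local_energy(c1) >= local_energy(c2), c1, c2)

band_a = np.array([[3.0, -1.0], [0.0, 2.0]])    # stand-ins for subbands
band_b = np.array([[1.0, 4.0], [-5.0, 0.0]])
fused_high = fuse_high(band_a, band_b)
fused_low = fuse_low(band_a, band_b)
```

    In the actual algorithm these rules operate on curvelet subbands of the two angiography frames before the inverse transform.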

  1. Quantitative digital image analysis of chromogenic assays for high throughput screening of alpha-amylase mutant libraries.

    Science.gov (United States)

    Shankar, Manoharan; Priyadharshini, Ramachandran; Gunasekaran, Paramasamy

    2009-08-01

    An image analysis-based method for high throughput screening of an alpha-amylase mutant library using chromogenic assays was developed. Assays were performed in microplates, and high resolution images of the assay plates were read using the Virtual Microplate Reader (VMR) script to quantify the concentration of the chromogen. This method is fast and sensitive in quantifying 0.025-0.3 mg starch/ml as well as 0.05-0.75 mg glucose/ml. It was also an effective screening method for improved alpha-amylase activity, with a coefficient of variation of 18%.
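
    Quantifying a chromogen from plate images reduces to mapping mean well intensity onto concentration through a standard curve; a sketch with a linear fit (the intensity and concentration values are invented, not output of the VMR script):

```python
import numpy as np

# Hypothetical calibration: mean well intensities measured for known
# glucose standards, fit with a straight line, then used for unknowns.
known_conc = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])   # mg glucose/ml
mean_intensity = np.array([12.0, 31.0, 60.5, 89.0, 120.0, 149.5])

slope, intercept = np.polyfit(mean_intensity, known_conc, 1)

def intensity_to_conc(i):
    """Convert a measured well intensity to concentration via the fit."""
    return slope * i + intercept

unknown = intensity_to_conc(75.0)   # concentration of an unknown well
```

    Replicate wells of the same standard then give the coefficient of variation that characterizes assay precision.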

  2. Nanoscale high-content analysis using compositional heterogeneities of single proteoliposomes

    DEFF Research Database (Denmark)

    Mathiasen, Signe; Christensen, Sune M.; Fung, Juan José

    2014-01-01

    Proteoliposome reconstitution is a standard method to stabilize purified transmembrane proteins in membranes for structural and functional assays. Here we quantified intrareconstitution heterogeneities in single proteoliposomes using fluorescence microscopy. Our results suggest that compositional heterogeneities can severely skew ensemble-average proteoliposome measurements but also enable ultraminiaturized high-content screens. We took advantage of this screening capability to map the oligomerization energy of the β2-adrenergic receptor using ∼10⁹-fold less protein than conventional assays.

  3. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data as smart phones and the World Wide Web have become increasingly familiar. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It outperformed state-of-the-art techniques on the performance measures and showed statistical significance.
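
    Feature extraction by image binarization, as used above, can be sketched as thresholding at the mean intensity and recording block-wise foreground densities; the 4x4 block layout and the mean threshold are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def binarization_features(img, blocks=4):
    """Binarize at the mean intensity, then return the fraction of
    foreground pixels in each blocks x blocks cell as a feature vector."""
    binary = (img > img.mean()).astype(float)
    h, w = binary.shape
    feats = [
        binary[i * h // blocks:(i + 1) * h // blocks,
               j * w // blocks:(j + 1) * w // blocks].mean()
        for i in range(blocks) for j in range(blocks)
    ]
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
features = binarization_features(img)     # 16 values in [0, 1]
```

    Retrieval then ranks database images by the distance between such feature vectors.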

  4. VOLUME STUDY WITH HIGH DENSITY OF PARTICLES BASED ON CONTOUR AND CORRELATION IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Tatyana Yu. Nikolaeva

    2014-11-01

    Full Text Available The subject of study is techniques for particle statistics evaluation, in particular, processing methods for particle images obtained under coherent illumination. This paper considers the problem of recognition and statistical accounting for individual images of small scattering particles in an arbitrary section of the volume in the case of high concentrations. For automatic recognition of focused particle images, a special algorithm for statistical analysis based on contouring and thresholding was used. By means of the mathematical formalism of scalar diffraction theory, coherent images of the particles formed by an optical system with high numerical aperture were simulated. Numerical testing of the proposed method for different concentrations and distributions of particles in the volume was performed. As a result, distributions of density and mass fraction of the particles were obtained, and the efficiency of the method at different particle concentrations was evaluated. At high concentrations, the effect of coherent superposition of particles from adjacent planes strengthens, which makes it difficult to recognize particle images using the algorithm considered in the paper. In this case, we propose to supplement the method with calculation of the cross-correlation function of particle images from adjacent segments of the volume, and evaluation of the ratio between the height of the correlation peak and the height of the function pedestal for different distribution characters. The method of statistical accounting of particles considered in this paper is of practical importance in the study of volumes with particles of different nature, for example, in problems of biology and oceanography. Effective work in the regime of high concentrations expands the limits of applicability of these methods for practically important cases and helps to optimize determination time of the distribution character and
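
    The proposed peak-to-pedestal criterion for images from adjacent volume segments can be sketched with an FFT-based cross-correlation: matching images give a sharp peak far above the pedestal, while unrelated ones do not. Synthetic random fields stand in here for the particle images:

```python
import numpy as np

def peak_to_pedestal(img_a, img_b):
    """Circularly cross-correlate two segments and compare the correlation
    peak with the pedestal (median magnitude of the correlation surface)."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return corr.max() / np.median(np.abs(corr))

rng = np.random.default_rng(2)
seg1 = rng.random((64, 64))
seg2 = rng.random((64, 64))
same_ratio = peak_to_pedestal(seg1, seg1)   # sharp peak: large ratio
diff_ratio = peak_to_pedestal(seg1, seg2)   # flat surface: small ratio
```

    A high ratio between adjacent sections signals the same particle field, helping to separate true particles from coherent-superposition artifacts.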

  5. Dexterous robotic manipulation of alert adult Drosophila for high-content experimentation.

    Science.gov (United States)

    Savall, Joan; Ho, Eric Tatt Wei; Huang, Cheng; Maxey, Jessica R; Schnitzer, Mark J

    2015-07-01

    We present a robot that enables high-content studies of alert adult Drosophila by combining operations including gentle picking; translations and rotations; characterizations of fly phenotypes and behaviors; microdissection; or release. To illustrate, we assessed fly morphology, tracked odor-evoked locomotion, sorted flies by sex, and dissected the cuticle to image neural activity. The robot's tireless capacity for precise manipulations enables a scalable platform for screening flies' complex attributes and behavioral patterns.

  6. High-speed imaging of blood splatter patterns

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, T.E.; Albright, K.A.; King, N.S.P.; Yates, G.J. (Los Alamos National Lab., NM (United States)); Levine, G.F. (California Dept. of Justice, Sacramento, CA (United States). Bureau of Forensic Services)

    1993-01-01

    The interpretation of blood splatter patterns is an important element in reconstructing the events and circumstances of an accident or crime scene. Unfortunately, the interpretation of patterns and stains formed by blood droplets is not necessarily intuitive and study and analysis are required to arrive at a correct conclusion. A very useful tool in the study of blood splatter patterns is high-speed photography. Scientists at the Los Alamos National Laboratory, Department of Energy (DOE), and Bureau of Forensic Services, State of California, have assembled a high-speed imaging system designed to image blood splatter patterns. The camera employs technology developed by Los Alamos for the underground nuclear testing program and has also been used in a military mine detection program. The camera uses a solid-state CCD sensor operating at approximately 650 frames per second (75 MPixels per second) with a microchannel plate image intensifier that can provide shuttering as short as 5 ns. The images are captured with a laboratory high-speed digitizer and transferred to an IBM compatible PC for display and hard copy output for analysis. The imaging system is described in this paper.

  8. A microfluidic array for high-content screening at whole-organism resolution

    Science.gov (United States)

    Migliozzi, D.; Cornaglia, M.; Mouchiroud, L.; Auwerx, J.; Gijs, M. A. M.

    2018-02-01

    A main step in the development and validation of medical drugs is screening on whole organisms, which gives the systemic information that is missing when using cellular models. Among the organisms of choice, Caenorhabditis elegans is a soil worm which catches the interest of researchers who study systemic physiopathology (e.g. metabolic and neurodegenerative diseases) because: (1) its large genetic homology with humans supports translational analysis; (2) worms are much easier to handle and grow in large amounts compared to rodents, for which (3) the costs and (4) the ethical concerns are substantial. C. elegans is therefore well suited for large screens, dose-response analysis and target-discovery involving an entire organism. We have developed and tested a microfluidic array for high-content screening, enabling the selection of small populations of its first larval stage in many separated chambers divided into channels for multiplexed screens. With automated protocols for feeding, drug administration and image acquisition, our chip enables the study of the nematodes throughout their entire lifespan. By using a paralyzing agent and a mitochondrial-stress inducer as case studies, we have demonstrated large field-of-view motility analysis, and worm-segmentation/signal-detection for mode-of-action quantification with genetically-encoded fluorescence reporters.

  9. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: identical descriptors for different shapes and poor noise robustness. In this algorithm, the image contour curve is first evolved with a Gaussian function, and distance coherence vectors are then extracted from the contours of the original image and the evolved images. The multiscale distance coherence vector is obtained by a suitable weighting of the distance coherence vectors of the evolved image contours. This algorithm is not only invariant to translation, rotation, and scaling transformations but also performs well under noise. The experimental results show that the algorithm achieves higher recall and precision rates for the retrieval of images polluted by noise. PMID:24883416

  10. Content Analysis of Doctoral Theses in Geomatic Engineering in Turkey

    Directory of Open Access Journals (Sweden)

    Tahsin BOZTOPRAK

    2016-10-01

    Full Text Available The purpose of this study is to reach a conclusion based on a synthesis of doctoral dissertations published on surveying (geomatic) engineering, using content analysis. For this purpose, 325 doctoral dissertations published in Turkey were analyzed using the content analysis technique. As a result of the analysis, it has been determined that 70.46% of the doctoral dissertations were published in the last fifteen years. Most doctoral dissertations were published by Istanbul Technical University, Yıldız Technical University, Selçuk University and Karadeniz Technical University. Most of the doctoral studies were carried out in the discipline of public measurements/land management, at a rate of 32.62%. The language of 89.54% of the PhD theses is Turkish, and the average number of pages is 156. 68.45% of the PhD thesis advisors hold the title of professor. The most covered subjects are geographic information systems (15.46%), GPS/GNSS (12.36%) and satellite image analysis (11.24%).

  11. Bones, body parts, and sex appeal: An analysis of #thinspiration images on popular social media.

    Science.gov (United States)

    Ghaznavi, Jannath; Taylor, Laramie D

    2015-06-01

    The present study extends research on thinspiration images, visual and/or textual images intended to inspire weight loss, from pro-eating disorder websites to popular photo-sharing social media websites. The article reports on a systematic content analysis of thinspiration images (N=300) on Twitter and Pinterest. Images tended to be sexually suggestive and objectifying with a focus on ultra-thin, bony, scantily-clad women. Results indicated that particular social media channels and labels (i.e., tags) were characterized by more segmented, bony content and greater social endorsement compared to others. In light of theories of media influence, results offer insight into the potentially harmful effects of exposure to sexually suggestive and objectifying content in large online communities on body image, quality of life, and mental health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets.
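
    The decomposition described above (one segment per processor, nearest-neighbour communication only) can be illustrated in one dimension: a 3-point median filter needs just a one-cell halo from each neighbour, and stitching the filtered segments reproduces the global result. A single-process sketch of the idea, not the Blue Gene implementation:

```python
import numpy as np

def median3(x):
    """3-point median filter with edge replication at the boundaries."""
    xp = np.pad(x, 1, mode="edge")
    return np.median(np.stack([xp[:-2], xp[1:-1], xp[2:]]), axis=0)

# Split the domain in two; each half receives a one-cell halo from its
# neighbour (the nearest-neighbour exchange of the parallel version).
rng = np.random.default_rng(3)
x = rng.random(100)
mid = 50
left = median3(x[:mid + 1])[:-1]    # halo cell borrowed from the right half
right = median3(x[mid - 1:])[1:]    # halo cell borrowed from the left half
stitched = np.concatenate([left, right])
```

    The quoted speedup is consistent with the timings given: 2.5 h ≈ 9000 s, and 9000 / 18.8 ≈ 478.7.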

  13. Vis-NIR hyperspectral imaging and multivariate analysis for prediction of the moisture content and hardness of Pistachio kernels roasted in different conditions

    Directory of Open Access Journals (Sweden)

    T Mohammadi Moghaddam

    2015-09-01

    Full Text Available Introduction: The pistachio nut is one of the most delicious and nutritious nuts in the world, and it is used as a salted and roasted product or as an ingredient in snacks, ice cream, desserts, etc. (Maghsudi, 2010; Kashaninejad et al., 2006). Roasting is one of the most important food processes, providing useful attributes to the product. One of the objectives of nut roasting is to alter and significantly enhance the flavor, texture, color and appearance of the product (Ozdemir, 2001). In recent years, spectral imaging techniques (i.e. hyperspectral and multispectral imaging) have emerged as powerful tools for safety and quality inspection of various agricultural commodities (Gowen et al., 2007). The objectives of this study were to apply reflectance hyperspectral imaging for non-destructive determination of the moisture content and hardness of pistachio kernels roasted under different conditions. Materials and methods: Dried O’hadi pistachio nuts were supplied from a local market in Mashhad. Pistachio nuts were soaked in 5 L of 20% salt solution for 20 min (Goktas Seyhan, 2003). For the roasting process, three temperatures (90, 120 and 150°C), three times (20, 35 and 50 min) and three air velocities (0.5, 1.5 and 2.5 m s-1) were applied. The moisture content of pistachio kernels was measured in triplicate by oven drying (3 g samples at 105°C for 12 hours). A uniaxial compression test with a 35 mm diameter plastic cylinder was made on pistachio kernels mounted on a platform. Samples were compressed to a depth of 2 mm at a speed of 30 mm min-1. A hyperspectral imaging system in the Vis-NIR range (400-1000 nm) was employed. The spectral pre-processing techniques used were: first and second derivatives, median filter, Savitzky-Golay, wavelet, multiplicative scatter correction (MSC) and standard normal variate transformation (SNV). To build the PLSR and ANN models, ParLeS software and Matlab R2009a were used, respectively. The coefficient
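
    Several of the listed pre-processing steps are simple per-spectrum transforms; the standard normal variate (SNV), for example, centres and scales each spectrum independently to remove multiplicative scatter differences. A minimal sketch on synthetic spectra (not the study's data):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: per-spectrum centring and unit scaling.
    Each row is one spectrum; scatter-induced scale differences vanish."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic spectra with a random multiplicative scatter factor per sample
rng = np.random.default_rng(4)
raw = rng.random((5, 200)) * rng.uniform(0.5, 2.0, (5, 1))
corrected = snv(raw)
```

    The corrected spectra (zero mean, unit variance per sample) are then fed to the PLSR or ANN model.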

  14. Multimedia content analysis, management and retrieval: trends and challenges

    Science.gov (United States)

    Hanjalic, Alan; Sebe, Nicu; Chang, Edward

    2006-01-01

    Recent advances in computing, communications and storage technology have made multimedia data prevalent. Multimedia has gained enormous potential in improving the processes in a wide range of fields, such as advertising and marketing, education and training, entertainment, medicine, surveillance, wearable computing, biometrics, and remote sensing. The rich content of multimedia data, built through the synergies of the information contained in different modalities, calls for new and innovative methods for modeling, processing, mining, organizing, and indexing of this data for effective and efficient searching, retrieval, delivery, management and sharing of multimedia content, as required by the applications in the abovementioned fields. The objective of this paper is to present our views on the trends that should be followed when developing such methods, to elaborate on the related research challenges, and to introduce the new conference, Multimedia Content Analysis, Management and Retrieval, as a premium venue for presenting and discussing these methods with the scientific community. Starting from 2006, the conference will be held annually as a part of the IS&T/SPIE Electronic Imaging event.

  15. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    Science.gov (United States)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases, based on features like the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework first developed to recognize the image content of computer screen shots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to classify digital mammograms automatically with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework; each such tree is a so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these weak classifiers are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one strong classifier show good accuracy with high true positive rates. For the four categories the results are: TP=90.38%, TN=67.88%, FP=32.12% and FN=9.62%.
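
    The AdaBoost step, combining "weak classifiers" into a "strong classifier", can be sketched on toy one-dimensional data; simple threshold stumps stand in for the decision trees of the MCA framework, and the data are invented (positives lie in one interval, so no single threshold separates them):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.where((X > -0.3) & (X < 0.4), 1, -1)   # interval concept

def stump(theta, s):
    """Weak classifier: a signed threshold on the single feature."""
    return lambda x: s * np.where(x > theta, 1, -1)

candidates = [stump(t, s) for t in np.linspace(-1, 1, 41) for s in (1, -1)]

# AdaBoost: reweight samples, pick the best weak classifier each round
weights = np.ones(len(X)) / len(X)
ensemble = []
for _ in range(30):
    errors = np.array([np.sum(weights * (h(X) != y)) for h in candidates])
    best = candidates[int(errors.argmin())]
    err = errors.min()
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    weights *= np.exp(-alpha * y * best(X))
    weights /= weights.sum()
    ensemble.append((alpha, best))

def strong(x):
    """Strong classifier: weighted vote of the selected weak classifiers."""
    return np.sign(sum(a * h(x) for a, h in ensemble))

weak_acc = max(np.mean(h(X) == y) for h in candidates)
strong_acc = np.mean(strong(X) == y)
```

    No single stump can label the interval correctly, but the weighted vote can, which is exactly the weak-to-strong combination the abstract describes.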

  16. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  17. Phase-image-based content-addressable holographic data storage

    Science.gov (United States)

    John, Renu; Joseph, Joby; Singh, Kehar

    2004-03-01

    We propose and demonstrate the use of phase images for content-addressable holographic data storage. The use of binary phase-based data pages with 0 and π phase changes produces a uniform spectral distribution at the Fourier plane. The absence of a strong DC component at the Fourier plane and the greater intensity of higher-order spatial frequencies facilitate better recording of high spatial frequencies and improve the discrimination capability of the content-addressable memory. This improves the results of associative recall in a holographic memory system and can yield a low number of false hits even for small search arguments. The phase-modulated pixels also allow subtraction among data pixels, leading to better discrimination between similar data pages.
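
    The claimed DC suppression for 0/π phase pages is easy to verify numerically: a balanced ±1 phase page has a Fourier DC term near zero, whereas a 0/1 amplitude page has a DC term proportional to the number of bright pixels. A minimal sketch on a random data page:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, (64, 64))         # a random binary data page
amp_page = bits.astype(float)               # amplitude coding: 0 / 1
phase_page = np.exp(1j * np.pi * bits)      # phase coding: 0 / pi -> +1 / -1

# DC term of the Fourier spectrum: large for the amplitude page,
# strongly suppressed for the (approximately balanced) phase page
dc_amp = abs(np.fft.fft2(amp_page)[0, 0])
dc_phase = abs(np.fft.fft2(phase_page)[0, 0])
```

    With the DC energy redistributed, the higher spatial frequencies carry relatively more power, which is what improves discrimination in the associative recall.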

  18. A multiplexed method for kinetic measurements of apoptosis and proliferation using live-content imaging.

    Science.gov (United States)

    Artymovich, Katherine; Appledorn, Daniel M

    2015-01-01

    In vitro cell proliferation and apoptosis assays are widely used to study cancer cell biology. Commonly used methodologies are however performed at a single, user-defined endpoint. We describe a kinetic multiplex assay incorporating the CellPlayer(TM) NucLight Red reagent to measure proliferation and the CellPlayer(TM) Caspase-3/7 reagent to measure apoptosis using the two-color, live-content imaging platform, IncuCyte(TM) ZOOM. High-definition phase-contrast images provide an additional qualitative validation of cell death based on morphological characteristics. The kinetic data generated using this strategy can be used to derive informed pharmacology measurements to screen potential cancer therapeutics.

  19. A roughly mapped terra incognita: Image of the child in adult-oriented media contents

    Directory of Open Access Journals (Sweden)

    Korać Nada M.

    2003-01-01

    Full Text Available The study analyzes the image of the child in media contents intended for adult audiences in Serbia, considering the important role the media play in shaping public opinion on children, as well as the influence of such public opinion on adults' attitudes, decisions and actions concerning children. The study focuses on the visibility and portrayal of children in the media, in order to determine to what extent children are present in them, and in what way. Relevant data were collected for three media (press, radio and television), mostly covering the entire territory of Serbia, over two consecutive months (April and May 2001). Content analysis revealed that children are not only underrepresented, but also misrepresented, in Serbian media.

  20. TV content analysis techniques and applications

    CERN Document Server

    Kompatsiaris, Yiannis

    2012-01-01

    The rapid advancement of digital multimedia technologies has not only revolutionized the production and distribution of audiovisual content, but also created the need to efficiently analyze TV programs to enable applications for content managers and consumers. Leaving no stone unturned, TV Content Analysis: Techniques and Applications provides a detailed exploration of TV program analysis techniques. Leading researchers and academics from around the world supply scientifically sound treatment of recent developments across the related subject areas--including systems, architectures, algorithms,

  1. Quantification of sterol-specific response in human macrophages using automated imaged-based analysis.

    Science.gov (United States)

    Gater, Deborah L; Widatalla, Namareq; Islam, Kinza; AlRaeesi, Maryam; Teo, Jeremy C M; Pearson, Yanthe E

    2017-12-13

    The transformation of normal macrophage cells into lipid-laden foam cells is an important step in the progression of atherosclerosis. One major contributor to foam cell formation in vivo is the intracellular accumulation of cholesterol. Here, we report the effects of various combinations of low-density lipoprotein, sterols, lipids and other factors on human macrophages, using an automated image analysis program to quantitatively compare single cell properties, such as cell size and lipid content, in different conditions. We observed that the addition of cholesterol caused an increase in average cell lipid content across a range of conditions. All of the sterol-lipid mixtures examined were capable of inducing increases in average cell lipid content, with variations in the distribution of the response, in cytotoxicity and in how the sterol-lipid combination interacted with other activating factors. For example, cholesterol and lipopolysaccharide acted synergistically to increase cell lipid content while also increasing cell survival compared with the addition of lipopolysaccharide alone. Additionally, ergosterol and cholesteryl hemisuccinate caused similar increases in lipid content but also exhibited considerably greater cytotoxicity than cholesterol. The use of automated image analysis enables us to assess not only changes in average cell size and content, but also to rapidly and automatically compare population distributions based on simple fluorescence images. Our observations add to increasing understanding of the complex and multifactorial nature of foam-cell formation and provide a novel approach to assessing the heterogeneity of macrophage response to a variety of factors.

  2. Two-dimensional DFA scaling analysis applied to encrypted images

    Science.gov (United States)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2015-01-01

    The technique of detrended fluctuation analysis (DFA) has been widely used to unveil the scaling properties of many different signals. In this paper, we determine the scaling properties of encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton, and we compare the results obtained with its unmodified version and with the AES encryption system. The numerical results show that the encrypted images present a persistent behavior close to that of 1/f noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
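
    The record applies DFA in two dimensions; the core computation is easiest to see in one dimension. The sketch below is mine, not taken from the paper (function name, window sizes and the white-noise test signal are all illustrative): it integrates the mean-removed signal, removes a linear trend in each window, and reads the scaling exponent off the log-log slope of the fluctuation function.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Estimate the 1-D DFA scaling exponent alpha of a signal."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated profile
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        f2 = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            x = np.arange(s)
            coef = np.polyfit(x, seg, 1)            # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, x)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
alpha = dfa_exponent(white, scales=[8, 16, 32, 64, 128])
# Uncorrelated noise gives alpha near 0.5; 1/f-like behavior, as reported
# for the encrypted images, corresponds to alpha near 1.
```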

  3. Teaching Content Analysis through "Harry Potter"

    Science.gov (United States)

    Messinger, Adam M.

    2012-01-01

    Content analysis is a valuable research tool for social scientists that unfortunately can prove challenging to teach to undergraduate students. Published classroom exercises designed to teach content analysis have thus far been predominantly envisioned as lengthy projects for upper-level courses. A brief and engaging exercise may be more…

  4. A content analysis of visual cancer information: prevalence and use of photographs and illustrations in printed health materials.

    Science.gov (United States)

    King, Andy J

    2015-01-01

    Researchers and practitioners have an increasing interest in visual components of health information and health communication messages. This study contributes to this evolving body of research by providing an account of the visual images and information featured in printed cancer communication materials. Using content analysis, 147 pamphlets and 858 images were examined to determine how frequently images are used in printed materials, what types of images are used, what information is conveyed visually, and whether or not current recommendations for the inclusion of visual content were being followed. Although visual messages were found to be common in printed health materials, existing recommendations about the inclusion of visual content were only partially followed. Results are discussed in terms of how relevant theoretical frameworks in the areas of behavior change and visual persuasion seem to be used in these materials, as well as how more theory-oriented research is necessary in visual messaging efforts.

  5. Rapid analysis and exploration of fluorescence microscopy images.

    Science.gov (United States)

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.

  6. Deep learning, audio adversaries, and music content analysis

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    We present the concept of adversarial audio in the context of deep neural networks (DNNs) for music content analysis. An adversary is an algorithm that makes minor perturbations to an input that cause major repercussions to the system response. In particular, we design an adversary for a DNN...... that takes as input short-time spectral magnitudes of recorded music and outputs a high-level music descriptor. We demonstrate how this adversary can make the DNN behave in any way with only extremely minor changes to the music recording signal. We show that the adversary cannot be neutralised by a simple...... filtering of the input. Finally, we discuss adversaries in the broader context of the evaluation of music content analysis systems....
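
    The adversary in the paper perturbs the spectral magnitudes fed to a DNN. The same gradient-sign idea can be illustrated with a toy linear classifier standing in for the network; the model, its weights and the perturbation size below are all invented for illustration and are not the paper's method.

```python
import numpy as np

# Toy linear "music descriptor" classifier: p(y=1|x) = sigmoid(w.x + b).
w = np.array([1.0, -2.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b) > 0.5

def gradient_sign_attack(x, y_true, eps):
    """One step that increases the cross-entropy loss for label y_true."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # d(loss)/dx for this linear model
    return x + eps * np.sign(grad_x)

x = np.array([0.5, 0.1])               # w @ x + b = 0.4 > 0: class 1
x_adv = gradient_sign_attack(x, y_true=1, eps=0.3)
# A perturbation of only 0.3 per feature flips the predicted class.
```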

  7. An objective method for High Dynamic Range source content selection

    DEFF Research Database (Denmark)

    Narwaria, Manish; Mantel, Claire; Da Silva, Matthieu Perreira

    2014-01-01

    With the aim of improving the immersive experience of the end user, High Dynamic Range (HDR) imaging has been gaining popularity. Therefore, proper validation and performance benchmarking of HDR processing algorithms is a key step towards standardization and commercial deployment. A crucial...... component of such validation studies is the selection of a challenging and balanced set of source (reference) HDR content. In order to facilitate this, we present an objective method based on the premise that a more challenging HDR scene encapsulates higher contrast, and as a result will show up more...

  8. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

    Full Text Available Introduction: Content-based image retrieval (CBIR) is a method of image searching and retrieval in a database. In medical applications, CBIR is a tool used by physicians to compare previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases grows, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content-based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image-blocks and the frequency of each type of pattern is determined in each image-block. Then, local pattern histograms for each of these image-blocks are computed. Results: The method was compared to two well-known texture-based image retrieval methods: Tamura and Edge Histogram Descriptors (EHD) in the MPEG-7 standard. Experimental results based on the 10000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates compared to Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above-mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
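
    The block-wise orientation counting described above can be sketched as follows. This is a rough stand-in, not the paper's algorithm: blocks are binned by mean gradient angle rather than by the paper's pattern templates, and the block size, threshold and bin boundaries are my choices.

```python
import numpy as np

def pattern_orientation_histogram(img, block=8, mag_thresh=1e-3):
    """Sketch of a 5-bin pattern orientation histogram (POH).

    Each block is assigned to one of four gradient-angle bins
    (0/45/90/135 degrees) or to a 'non-orientation' bin when the
    block is nearly flat.
    """
    gy, gx = np.gradient(img.astype(float))
    hist = np.zeros(5)
    h, w = img.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            bx = gx[r:r + block, c:c + block].mean()
            by = gy[r:r + block, c:c + block].mean()
            if np.hypot(bx, by) < mag_thresh:
                hist[4] += 1                          # near-flat block
                continue
            ang = np.degrees(np.arctan2(by, bx)) % 180
            hist[int(((ang + 22.5) % 180) // 45)] += 1
    return hist / hist.sum()

# A horizontal intensity ramp puts every block in the 0-degree bin.
ramp = np.tile(np.arange(32.0), (32, 1))
poh = pattern_orientation_histogram(ramp)
```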

  9. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  10. An open-source solution for advanced imaging flow cytometry data analysis using machine learning.

    Science.gov (United States)

    Hennig, Holger; Rees, Paul; Blasi, Thomas; Kamentsky, Lee; Hung, Jane; Dao, David; Carpenter, Anne E; Filby, Andrew

    2017-01-01

    Imaging flow cytometry (IFC) enables the high throughput collection of morphological and spatial information from hundreds of thousands of single cells. This high-content, information-rich image data can in theory resolve important biological differences among complex, often heterogeneous biological samples. However, data analysis is often performed in a highly manual and subjective manner using very limited image analysis techniques in combination with conventional flow cytometry gating strategies. This approach is not scalable to the hundreds of available image-based features per cell and thus makes use of only a fraction of the spatial and morphometric information. As a result, the quality, reproducibility and rigour of results are limited by the skill, experience and ingenuity of the data analyst. Here, we describe a pipeline using open-source software that leverages the rich information in digital imagery using machine learning algorithms. Compensated and corrected raw image file (.rif) data from an imaging flow cytometer (stored in the proprietary .cif file format) are imported into the open-source software CellProfiler, where an image processing pipeline identifies cells and subcellular compartments, allowing hundreds of morphological features to be measured. This high-dimensional data can then be analysed using cutting-edge machine learning and clustering approaches using "user-friendly" platforms such as CellProfiler Analyst. Researchers can train an automated cell classifier to recognize different cell types, cell cycle phases, drug treatment/control conditions, etc., using supervised machine learning. This workflow should enable the scientific community to leverage the full analytical power of IFC-derived data sets. It will help to reveal otherwise unappreciated populations of cells based on features that may be hidden to the human eye, including subtle measured differences in label-free detection channels such as bright-field and dark-field imagery.

  11. Filtering adult image content with topic models

    OpenAIRE

    Lienhart, Rainer (Prof. Dr.); Hauke, Rudolf

    2009-01-01

    Protecting children from exposure to adult content has become a serious problem in the real world. Current statistics show that, for instance, the average age of first Internet exposure to pornography is 11 years, that the largest consumer group of Internet pornography is the age group of 12-to-17-year-olds and that 90% of the 8-to-16-year-olds have viewed porn online. To protect our children, effective algorithms for detecting adult images are needed. In this research we evaluate the use of ...

  12. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

    Breast cancer is one of the main causes of death among women in occidental countries. In the last years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme of suspicious tissue pattern based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on in total 10,509 radiographs that have been collected from different sources. From this, 3,375 images are provided with one and 430 radiographs with more than one chain code annotation of cancerous regions. In different experiments, this data is divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology and in the 20 class problem two categories of different types of lesions. Balancing the number of images in each class yields 233 and 45 images remaining in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of a SVM. Overall, the accuracy of the raw classification was 61.6 % and 52.1 % for the 12 and the 20 class problem, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of a SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with a smarter patch extraction, the CBIR approach might reach precision rates that are helpful for the physicians. This, however, needs more comprehensive evaluation on clinical data.
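
    The feature-extraction step described above, PCA on patch pixels followed by a classifier, can be sketched with synthetic data. A nearest-centroid rule stands in for the paper's SVM here, and the patch sizes, intensities and class structure below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for tissue patches of two classes (the paper uses
# 128 x 128 mammogram patches; 16 x 16 is used here to keep this small).
n, d = 60, 16 * 16
class0 = rng.normal(0.2, 0.05, size=(n, d))
class1 = rng.normal(0.8, 0.05, size=(n, d))
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

# PCA via SVD of the mean-centred data matrix, keeping the top components.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 5
features = (X - mu) @ Vt[:k].T

# Nearest-centroid classifier as a simple stand-in for the paper's SVM.
c0 = features[y == 0].mean(axis=0)
c1 = features[y == 1].mean(axis=0)
pred = (np.linalg.norm(features - c1, axis=1)
        < np.linalg.norm(features - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```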

  13. Color image analysis technique for measuring of fat in meat: an application for the meat industry

    Science.gov (United States)

    Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla

    2001-04-01

    Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify intramuscular fat content in beef, together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with a variability of intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e. classification of different substances: fat, muscle and connective tissue) was optimized in order to obtain a proper classification and perform subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space, and on the intrinsic fuzzy nature of these structures. The method is fully automatic and it combines a fuzzy clustering algorithm, the Fuzzy c-Means Algorithm, with a Genetic Algorithm. The percentages of various colors (i.e. substances) within the sample are then determined; the number, size distribution, and spatial distribution of the extracted fat flecks are measured. Measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.
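
    The fuzzy clustering step can be sketched as plain fuzzy c-means on 3D color vectors. The paper couples this with a genetic algorithm, which is omitted here, and the "fat" and "muscle" RGB values below are invented for illustration.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: returns (centers, membership matrix U).

    U[i, j] is the degree to which sample i belongs to cluster j.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)           # avoid division by zero
        inv = d ** (-2.0 / (m - 1))        # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
fat = rng.normal([0.9, 0.9, 0.85], 0.02, size=(50, 3))      # whitish pixels
muscle = rng.normal([0.6, 0.15, 0.15], 0.02, size=(50, 3))  # reddish pixels
centers, U = fuzzy_c_means(np.vstack([fat, muscle]), c=2)
# Each pixel gets a fuzzy membership in the "fat" and "muscle" clusters.
```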

  14. Chemometric Analysis of High Molecular Mass Glutenin Subunits and Image Data of Bread Crumb Structure from Croatian Wheat Cultivars

    Directory of Open Access Journals (Sweden)

    Zorica Jurković

    2002-01-01

    Full Text Available The aim of this work is to investigate functional relationships among wheat properties, high molecular mass (weight) (HMW) glutenin subunits and the quality of bread produced from eleven Croatian wheat cultivars by chemometric analysis. HMW glutenin subunits were fractionated by sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and subsequently analysed by scanning densitometry in order to quantify HMW glutenin fractions. Wheat properties are characterised by four variables: protein content, sedimentation value, wet gluten and gluten index. Bread quality is assessed by the standard measurement of loaf volume, and the visual quality of a bread slice is quantified by 8 parameters using computer image analysis. The data matrix with 21 columns (measured variables) and 11 rows (cultivars) is analysed to determine the number of latent variables. It was found that the first two latent variables account for 92, 85 and 87 % of the variance of wheat quality properties, HMW glutenin fractions, and the bread quality parameters, respectively. Classification and functional relationships are discussed from the case data (cultivars) and variable projections onto the planes of the first two latent variables. The strongest positive correlations were found between the Glu-D1y proportion and the bread quality parameters (the standard parameter loaf volume and the bread crumb cell area fraction determined by image analysis), r = 0.651 and r = 0.885, respectively. The strongest negative correlations were found between the Glu-B1x proportion and the bread quality parameters, r = -0.535 and r = -0.841, respectively. The results are discussed in view of the possible development of new and improvement of existing wheat cultivars and the optimisation of bread production.

  15. Multisite Thrombus Imaging and Fibrin Content Estimation With a Single Whole-Body PET Scan in Rats.

    Science.gov (United States)

    Blasi, Francesco; Oliveira, Bruno L; Rietz, Tyson A; Rotile, Nicholas J; Naha, Pratap C; Cormode, David P; Izquierdo-Garcia, David; Catana, Ciprian; Caravan, Peter

    2015-10-01

    Thrombosis is a leading cause of morbidity and mortality worldwide. Current diagnostic strategies rely on imaging modalities that are specific for distinct vascular territories, but a thrombus-specific whole-body imaging approach is still missing. Moreover, imaging techniques to assess thrombus composition are underdeveloped, although therapeutic strategies may benefit from such technology. Therefore, our goal was to test whether positron emission tomography (PET) with the fibrin-binding probe (64)Cu-FBP8 allows multisite thrombus detection and fibrin content estimation. Thrombosis was induced in Sprague-Dawley rats (n=32) by ferric chloride application on both carotid artery and femoral vein. (64)Cu-FBP8-PET/CT imaging was performed 1, 3, or 7 days after thrombosis to detect thrombus location and to evaluate age-dependent changes in target uptake. Ex vivo biodistribution, autoradiography, and histopathology were performed to validate imaging results. Arterial and venous thrombi were localized on fused PET/CT images with high accuracy (97.6%; 95% confidence interval, 92-100). A single whole-body PET/MR imaging session was sufficient to reveal the location of both arterial and venous thrombi after (64)Cu-FBP8 administration. PET imaging showed that probe uptake was greater in younger clots than in older ones for both arterial and venous thrombosis (P<0.0001). Quantitative histopathology revealed an age-dependent reduction of thrombus fibrin content (P<0.001), consistent with PET results. Biodistribution and autoradiography further confirmed the imaging findings. We demonstrated that (64)Cu-FBP8-PET is a feasible approach for whole-body thrombus detection and that molecular imaging of fibrin can provide, noninvasively, insight into clot composition. © 2015 American Heart Association, Inc.

  16. Determination of oxygen content in high Tc superconductors by deuteron particle activation analysis

    International Nuclear Information System (INIS)

    Tao Zhenlan; Yao, Y.D.; Kao, Y.H.

    1993-01-01

    The experimental method for determining the oxygen content in high-Tc superconductors is described in detail. This method is applied to the determination of oxygen content in high-Tc Y-Ba-Cu-O and Bi-Sr-Ca-Cu-O samples in which the stoichiometry is varied by reducing the copper and bismuth concentrations. The oxygen concentration is found to vary linearly with Cu (x = 0-0.2) and Bi (x = 0-0.4) deficiencies in YBa2Cu3(1-x)Oy and Bi2(1-x)Sr2CaCu2Oy, respectively. X-ray powder diffraction measurements show that the compound YBa2Cu3(1-x)Oy remains orthorhombic over the variation range x = 0-0.2.

  17. Assessment of Abdominal Adipose Tissue and Organ Fat Content by Magnetic Resonance Imaging

    Science.gov (United States)

    Hu, Houchun H.; Nayak, Krishna S.; Goran, Michael I.

    2010-01-01

    As the prevalence of obesity continues to rise, rapid and accurate tools for assessing abdominal body and organ fat quantity and distribution are critically needed to assist researchers investigating therapeutic and preventive measures against obesity and its comorbidities. Magnetic resonance imaging (MRI) is the most promising modality to address such need. It is non-invasive, utilizes no ionizing radiation, provides unmatched 3D visualization, is repeatable, and is applicable to subject cohorts of all ages. This article is aimed to provide the reader with an overview of current and state-of-the-art techniques in MRI and associated image analysis methods for fat quantification. The principles underlying traditional approaches such as T1-weighted imaging and magnetic resonance spectroscopy as well as more modern chemical-shift imaging techniques are discussed and compared. The benefits of contiguous 3D acquisitions over 2D multi-slice approaches are highlighted. Typical post-processing procedures for extracting adipose tissue depot volumes and percent organ fat content from abdominal MRI data sets are explained. Furthermore, the advantages and disadvantages of each MRI approach with respect to imaging parameters, spatial resolution, subject motion, scan time, and appropriate fat quantitative endpoints are also provided. Practical considerations in implementing these methods are also presented. PMID:21348916

  18. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the possible and promising solutions to manage image databases effectively. However, with the large number of images, there still exists a great discrepancy between users' expectations (accuracy and efficiency) and the real performance in image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and the conventional K-Means method is first redefined. Then a new matching technique is built to eliminate the error caused by the large-scale scale-invariant feature transform (SIFT). Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency based on the proposed improvements for image retrieval.

  19. Multispectral UV imaging for surface analysis of MUPS tablets with special focus on the pellet distribution

    DEFF Research Database (Denmark)

    Novikova, Anna; Carstensen, Jens Michael; Rades, Thomas

    2016-01-01

    In the present study the applicability of multispectral UV imaging in combination with multivariate image analysis for surface evaluation of MUPS tablets was investigated with respect to the differentiation of the API pellets from the excipients matrix, estimation of the drug content as well as p...... image analysis is a promising approach for the automatic quality control of MUPS tablets during the manufacturing process....

  20. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
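
    The scalability analysis cited follows Amdahl's law, speedup(n) = 1 / ((1 - p) + p/n) for parallel fraction p on n cores. A tiny sketch (the 0.95 figure is illustrative, not from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: ideal speedup given a serial fraction (1 - p)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# The reported 12-fold gain on 12 cores implies a nearly fully
# parallelizable workload: only as p -> 1 does speedup(n) -> n.
print(round(amdahl_speedup(1.0, 12), 1))    # 12.0
print(round(amdahl_speedup(0.95, 12), 1))   # 7.7: 5% serial code caps the gain
```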

  1. In-Situ Characterization of Tissue Blood Flow, Blood Content, and Water State Using New Techniques in Magnetic Resonance Imaging.

    Science.gov (United States)

    Conturo, Thomas Edward

    Tissue blood flow, blood content, and water state have been characterized in situ with new nuclear magnetic resonance imaging techniques. The sensitivities of standard techniques to the physiologic tissue parameters spin density (N_r) and relaxation times (T_1 and T_2) are mathematically defined. A new driven inversion method is developed so that tissue T_1 and T_2 changes produce cooperative intensity changes, yielding high contrast, high signal to noise, and sensitivity to a wider range of tissue parameters. The actual tissue parameters were imaged by automated collection of multiple-echo data having multiple T_1 dependence. Data are simultaneously fit with three parameters to a closed-form expression, producing lower inter-parameter correlation and parameter noise than in separate T_1 or T_2 methods or pre-averaged methods. Accurate parameters are obtained at different field strengths. Parametric images of pathology demonstrate high sensitivity to tissue heterogeneity, and water content is determined in many tissues. Erythrocytes were paramagnetically labeled to study blood content and relaxation mechanisms. Liver and spleen relaxation were enhanced following 10% exchange of animal blood volumes. Rapid water exchange between intracellular and extracellular compartments was validated. Erythrocytes occupied 12.5% of renal cortex volume, and blood content was uniform in the liver, spleen and kidney. The magnitude and direction of flow velocity was then imaged. To eliminate directional artifacts, a bipolar gradient technique sensitized to flow in different directions was developed. Phase angle was reconstructed instead of intensity since the former has a 2π-fold higher dynamic range. Images of flow through curves demonstrated secondary flow with a centrifugally-biased laminar profile and stationary velocity peaks along the curvature. Portal vein flow velocities were diminished or reversed in cirrhosis. Image artifacts have been characterized and removed.
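
    The multiple-echo parameter fitting described above can be illustrated with a simplified mono-exponential T2 fit. The paper fits a three-parameter closed-form expression; the two-parameter log-linear fit below is a stand-in for that step, and all echo times and signal values are invented.

```python
import numpy as np

# Synthetic multiple-echo magnitudes: S(TE) = S0 * exp(-TE / T2).
T2_true, S0_true = 80.0, 1000.0            # ms, arbitrary units
te = np.array([20.0, 40.0, 60.0, 80.0])    # echo times in ms
s = S0_true * np.exp(-te / T2_true)

# Log-linearise: ln S = ln S0 - TE / T2, then ordinary least squares.
slope, intercept = np.polyfit(te, np.log(s), 1)
T2_fit = -1.0 / slope
S0_fit = np.exp(intercept)
# On noise-free data the fit recovers T2 = 80 ms and S0 = 1000 exactly.
```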

  2. Influence of Ta Content in High Purity Niobium on Cavity Performance Preliminary Results*

    CERN Document Server

    Kneisel, P

    2004-01-01

    In a previous paper* a program designed to study the influence of the residual tantalum content on the superconducting properties of pure niobium metal for RF cavities was outlined. The main rationale for this program was based on a potential cost reduction for high purity niobium, if a less strict limit on the chemical specification for Ta content, which is not significantly affecting the RRR value, could be tolerated for high performance cavities. Four ingots with different Ta contents have been melted and transformed into sheets. In each manufacturing step the quality of the material has been monitored by employing chemical analysis, neutron activation analysis, thermal conductivity measurements and evaluation of the mechanical properties. The niobium sheets have been scanned for defects by an eddy current device. From three of the four ingots—Ta contents 100, 600 and 1,200 wppm—two single cell cavities each of the CEBAF variety have been fabricated and a series of tests on each ...

  3. Metagenomic analysis revealed highly diverse microbial arsenic metabolism genes in paddy soils with low-arsenic contents

    International Nuclear Information System (INIS)

    Xiao, Ke-Qing; Li, Li-Guan; Ma, Li-Ping; Zhang, Si-Yu; Bao, Peng; Zhang, Tong; Zhu, Yong-Guan

    2016-01-01

    Microbe-mediated arsenic (As) metabolism plays a critical role in the global As cycle, and As metabolism involves different types of genes encoding proteins that facilitate its biotransformation and transportation processes. Here, we used metagenomic analysis based on high-throughput sequencing and constructed As metabolism protein databases to analyze As metabolism genes in five paddy soils with low As contents. The results showed that highly diverse As metabolism genes were present in these paddy soils, with varied abundances and distributions for different types and subtypes of these genes. Arsenate reduction genes (ars) dominated in all soil samples, and significant correlation existed between the abundances of arr (arsenate respiration), aio (arsenite oxidation), and arsM (arsenite methylation) genes, indicating the co-existence and close relation of different microbial As resistance systems in wetland environments similar to these paddy soils after long-term evolution. Among all soil parameters, pH was an important factor controlling the distribution of As metabolism genes in the five paddy soils (p = 0.018). To the best of our knowledge, this is the first study using a high-throughput sequencing and metagenomics approach to characterize As metabolism genes in these five paddy soils, showing their great potential in As biotransformation, and therefore in mitigating arsenic risk to humans. - Highlights: • Use metagenomics to analyze As metabolism genes in paddy soils with low As content. • These genes were ubiquitous, abundant, and associated with diverse microbes. • pH was an important factor controlling their distribution in paddy soil. • Imply combinational effect of evolution and selection on As metabolism genes.
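    The reported pH dependence (p = 0.018) rests on correlating gene abundance with soil properties across samples. A minimal sketch of such a correlation, using a pure-Python Pearson coefficient and hypothetical values (not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, as would be used to relate
    soil pH to As-metabolism gene abundance across samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical pH and normalized gene-abundance values for five soils
ph        = [5.1, 5.8, 6.3, 6.9, 7.4]
abundance = [2.0, 2.4, 2.9, 3.1, 3.6]
print(round(pearson_r(ph, abundance), 3))  # → 0.991
```

    A p-value would additionally require a significance test (e.g. a t-test on r with n-2 degrees of freedom), which the study presumably applied to obtain p = 0.018.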

  4. A Content Analysis of College and University Viewbooks (Brochures).

    Science.gov (United States)

    Hite, Robert E.; Yearwood, Alisa

    2001-01-01

    Systematically examined the content and components of college viewbooks/brochures. Compiled findings on: (1) physical components (e.g., photographs and slogans); (2) message content based on school characteristics such as size, type of school, enrollment, location, etc.; and (3) the type of image schools with different characteristics are seeking…

  5. Histogram analysis of diffusion kurtosis imaging derived maps may distinguish between low and high grade gliomas before surgery.

    Science.gov (United States)

    Qi, Xi-Xun; Shi, Da-Fa; Ren, Si-Xie; Zhang, Su-Ya; Li, Long; Li, Qing-Chang; Guan, Li-Ming

    2018-04-01

    To investigate the value of histogram analysis of diffusion kurtosis imaging (DKI) maps in the evaluation of glioma grading. A total of 39 glioma patients who underwent preoperative magnetic resonance imaging (MRI) were classified into low-grade (13 cases) and high-grade (26 cases) glioma groups. Parametric DKI maps were derived, and histogram metrics between low- and high-grade gliomas were analysed. The optimum diagnostic thresholds of the parameters, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were obtained using receiver operating characteristic (ROC) analysis. Significant differences were observed in 12 metrics of the histogram DKI parameters. Histogram analysis of DKI may be more effective in glioma grading.
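    The ROC/AUC evaluation described above can be illustrated with the rank-based (Mann-Whitney) formulation of AUC; the metric values below are invented stand-ins, not the study's data:

```python
def auc_mann_whitney(low, high):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen high-grade value exceeds a low-grade one
    (ties count as 0.5)."""
    wins = 0.0
    for h in high:
        for l in low:
            if h > l:
                wins += 1.0
            elif h == l:
                wins += 0.5
    return wins / (len(high) * len(low))

# Toy values standing in for one DKI histogram metric (e.g. mean kurtosis)
low_grade  = [0.55, 0.60, 0.62, 0.58]
high_grade = [0.70, 0.66, 0.59, 0.75]
print(auc_mann_whitney(low_grade, high_grade))  # → 0.875
```

    An AUC of 0.5 means the metric does not separate the groups; 1.0 means perfect separation, which is how each of the 12 significant histogram metrics would be ranked.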

  6. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    Directory of Open Access Journals (Sweden)

    Malia A. Gehan

    2017-12-01

    Full Text Available Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  7. GENERALIZATION, FORMULATION AND HEAT CONTENTS OF SIMULATED MSW WITH HIGH MOISTURE CONTENT

    Directory of Open Access Journals (Sweden)

    A. JOHARI

    2012-12-01

    Full Text Available This paper presents a generalization technique for the formulation of simulated municipal solid waste. This technique is used to eliminate the inconsistency in municipal solid waste (MSW) characteristics due to its heterogeneous nature. The compositions of simulated municipal solid waste were formulated from the four major municipal waste stream components in Malaysia, namely paper, plastic, food and yard waste. The technique produced four simplified waste generalization categories with compositions of paper (19%), plastic (25%), food (27%) and green waste (29%), respectively. A comparative study was conducted using proximate analysis for the determination of volatile matter, fixed carbon and ash content. Ultimate analysis was performed for carbon and hydrogen content. The heat contents of the simulated and actual municipal solid waste showed good agreement. The moisture contents of the simulated and actual municipal solid waste were established at 52.34% and 61.71%, respectively. Overall results were considered to be representative of the actual compositions of municipal solid waste in Malaysia.
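    One common way to cross-check a measured heat content against ultimate-analysis results is the classical Dulong formula (the paper does not state that it used Dulong; the composition below is illustrative):

```python
def dulong_hhv(c, h, o, s=0.0):
    """Higher heating value (MJ/kg, dry basis) estimated from
    ultimate-analysis mass percentages via the classical Dulong
    formula: 0.338*C + 1.428*(H - O/8) + 0.095*S."""
    return 0.338 * c + 1.428 * (h - o / 8.0) + 0.095 * s

# Illustrative dry-basis composition (% by mass), not the paper's data
print(round(dulong_hhv(c=48.0, h=6.0, o=38.0), 2))  # → 18.01
```

    For MSW the as-received heating value is then reduced by the moisture content (here 52-62%), since evaporating the water consumes part of the released heat.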

  8. Effects of Low Carbohydrate High Protein (LCHP) diet on atherosclerotic plaque phenotype in ApoE/LDLR-/- mice: FT-IR and Raman imaging.

    Science.gov (United States)

    Wrobel, T P; Marzec, K M; Chlopicki, S; Maślak, E; Jasztal, A; Franczyk-Żarów, M; Czyżyńska-Cichoń, I; Moszkowski, T; Kostogrys, R B; Baranska, M

    2015-09-22

    Low Carbohydrate High Protein (LCHP) diet displays pro-atherogenic effects; however, the exact mechanisms involved are still unclear. Here, with the use of vibrational imaging, such as Fourier transform infrared (FT-IR) and Raman (RS) spectroscopies, we characterize the biochemical content of plaques in Brachiocephalic Arteries (BCA) from ApoE/LDLR(-/-) mice fed the LCHP diet as compared to the control AIN diet recommended by the American Institute of Nutrition. FT-IR images were taken from 6-10 sections of BCA from each mouse and were complemented with higher-spatial-resolution RS measurements of chosen areas of plaque sections. In aortic plaques from LCHP-fed ApoE/LDLR(-/-) mice, the content of cholesterol and cholesterol esters was increased, while that of proteins was decreased, as evidenced by global FT-IR analysis. High resolution imaging by RS identified necrotic core/foam cells, lipids (including cholesterol crystals), calcium mineralization and fibrous cap. The decreased relative thickness of the outer fibrous cap and the presence of buried caps were prominent features of the plaques in ApoE/LDLR(-/-) mice fed the LCHP diet. In conclusion, FT-IR and Raman-based imaging provided a complementary insight into the biochemical composition of the plaque, suggesting that the LCHP diet increased the cholesterol and cholesterol ester contents of atherosclerotic plaque, supporting the cholesterol-driven pathogenesis of LCHP-induced atherogenesis.

  9. Workflow for high-content, individual cell quantification of fluorescent markers from universal microscope data, supported by open source software.

    Science.gov (United States)

    Stockwell, Simon R; Mittnacht, Sibylle

    2014-12-16

    Advances in understanding the control mechanisms governing the behavior of cells in adherent mammalian tissue culture models are becoming increasingly dependent on modes of single-cell analysis. Methods which deliver composite data reflecting the mean values of biomarkers from cell populations risk losing subpopulation dynamics that reflect the heterogeneity of the studied biological system. In keeping with this, traditional approaches are being replaced by, or supported with, more sophisticated forms of cellular assay developed to allow assessment by high-content microscopy. These assays potentially generate large numbers of images of fluorescent biomarkers which, enabled by accompanying proprietary software packages, allow multi-parametric measurements per cell. However, the relatively high capital costs and overspecialization of many of these devices have prevented their accessibility to many investigators. Described here is a universally applicable workflow for the quantification of multiple fluorescent marker intensities from specific subcellular regions of individual cells, suitable for use with images from most fluorescent microscopes. Key to this workflow is the implementation of the freely available CellProfiler software(1) to distinguish individual cells in these images, segment them into defined subcellular regions and deliver fluorescence marker intensity values specific to these regions. The extraction of individual cell intensity values from image data is the central purpose of this workflow and will be illustrated with the analysis of control data from a siRNA screen for G1 checkpoint regulators in adherent human cells. However, the workflow presented here can be applied to analysis of data from other means of cell perturbation (e.g., compound screens) and other forms of fluorescence based cellular markers and thus should be useful for a wide range of laboratories.
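    The core measurement in this workflow — per-cell marker intensity over a segmentation such as CellProfiler produces — can be sketched as follows; the tiny label and intensity arrays are illustrative, and a real pipeline operates on full microscope images:

```python
def per_cell_mean_intensity(labels, intensity):
    """Given a segmentation label image (0 = background, 1..N = cell
    IDs) and a fluorescence intensity image of the same shape, return
    the mean marker intensity per cell -- the per-region measurement
    this kind of workflow delivers."""
    sums, counts = {}, {}
    for lab_row, int_row in zip(labels, intensity):
        for lab, val in zip(lab_row, int_row):
            if lab == 0:
                continue  # skip background pixels
            sums[lab] = sums.get(lab, 0.0) + val
            counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

labels = [[0, 1, 1],
          [2, 2, 0],
          [2, 0, 0]]
intensity = [[5, 10, 20],
             [3, 5, 9],
             [4, 1, 1]]
print(per_cell_mean_intensity(labels, intensity))  # → {1: 15.0, 2: 4.0}
```

    Restricting the label image to a subcellular compartment (e.g. a nuclear mask) before measuring yields the region-specific intensities described in the abstract.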

  10. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and by describing some probable future trends in this important area of ultrasonic imaging research.

  11. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis

    Directory of Open Access Journals (Sweden)

    Joshua D Webster

    2012-01-01

    Full Text Available The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden 0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  12. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis.

    Science.gov (United States)

    Webster, Joshua D; Michalowski, Aleksandra M; Dwyer, Jennifer E; Corps, Kara N; Wei, Bih-Rong; Juopperi, Tarja; Hoover, Shelley B; Simpson, R Mark

    2012-01-01

    The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden 0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  13. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  14. Primena satelitskih snimaka za dopunu sadržaja topografskih karata / An application of satellite images for improving the content of topographic maps

    Directory of Open Access Journals (Sweden)

    Miodrag D. Regodić

    2010-10-01

    Full Text Available Lack of updated content of topographic maps (TMs), mainly due to economic constraints on producing new and revising existing editions, together with a shortage of other geo-topographic materials (GTMs), substantially hampers the geo-topographic supply (GTS) of the Army both in peacetime and in all stages of preparing and conducting combat operations. The solution to this problem lies in finding an appropriate method of using products of all types of remote sensing, high quality satellite images in particular. As the best demonstration of the potential of remote sensing using satellite images in cartographic practice with quality software solutions, the paper presents the addition of missing topographic content to a topographic map. Introduction Numerous natural and social phenomena are constantly observed, surveyed, registered and analyzed. Permanent or periodical satellite surveillance and recording for different purposes are growing in importance. The purposes can range from meteorological issues, through the study of large water surfaces, to military intelligence, etc. These recordings can be used in making topographic, thematic and working maps as well as other geo-topographic material. Processing and analyzing of IKONOS-2 satellite images

  15. Ash content of lignites - radiometric analysis

    International Nuclear Information System (INIS)

    Leonhardt, J.; Thuemmel, H.W.

    1986-01-01

    The quality of lignites is governed by the ash content varying in dependence upon the geologic conditions. Setup and function of the radiometric devices being used for ash content analysis in the GDR are briefly described

  16. DIMOND II: Measures for optimising radiological information content and dose in digital imaging

    International Nuclear Information System (INIS)

    Dowling, A.; Malone, J.; Marsh, D.

    2001-01-01

    The European Commission concerted action on 'Digital Imaging: Measures for Optimising Radiological Information Content and Dose', DIMOND II, was conducted by 12 European partners over the period January 1997 to June 1999. The objective of the concerted action was to initiate a project in the area of digital medical imaging where practice was evolving without structured research in radiation protection, optimisation or justification. The main issues addressed were patient and staff dosimetry, image quality, quality criteria and technical issues. The scope included computed radiography (CR), image intensifier radiography and fluoroscopy, cardiology and interventional procedures. The concerted action was based on the consolidation of work conducted in the partners' institutions together with elective new work. Protocols and approaches to dosimetry, radiological information content/image quality measurement and quality criteria were established and presented at an international workshop held in Dublin in June 1999. Details of the work conducted during the DIMOND II concerted action and a summary of the main findings and conclusions are presented in this contribution. (author)

  17. GRANULOMETRIC MAPS FROM HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    Catherine Mering

    2011-05-01

    Full Text Available A new method of land cover mapping from satellite images using granulometric analysis is presented here. Discontinuous landscapes, such as steppian bushes of semi-arid regions and recently growing urban settlements, are especially concerned by this study. Spatial organisations of the land cover are quantified by means of the size distribution analysis of the land cover units extracted from high resolution remotely sensed images. A granulometric map is built by automatic classification of every pixel of the image according to the granulometric density inside a sliding neighbourhood. Granulometric mapping brings some advantages over traditional thematic mapping by remote sensing by focusing on fine spatial events and small changes in one peculiar category of the landscape.
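    Granulometric analysis starts from the size distribution of land-cover units extracted from the image. A minimal sketch (a simplification of the paper's sliding-neighbourhood method) measures the sizes of 4-connected foreground components in a binary land-cover mask:

```python
def component_size_distribution(img):
    """Sizes of 4-connected foreground components in a binary image --
    the raw input to a granulometric (size distribution) analysis of
    land-cover units."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                # iterative flood fill over the new component
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sorted(sizes)

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 1]]
print(component_size_distribution(img))  # → [1, 3, 3]
```

    The paper goes further: it classifies each pixel by the local granulometric density computed in a sliding window, producing a map rather than a single global size distribution.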

  18. Morphological images analysis and chromosomic aberrations classification based on fuzzy logic

    International Nuclear Information System (INIS)

    Souza, Leonardo Peres

    2011-01-01

    This work implemented a methodology for automating the analysis of images of chromosomes from human cells irradiated at the IEA-R1 nuclear reactor (located at IPEN, Sao Paulo, Brazil) and therefore subject to morphological aberrations. This methodology is intended as a tool to help cytogeneticists in the identification, characterization and classification of chromosomes in metaphase analysis. The methodology development included the creation of a software application based on artificial intelligence techniques, using fuzzy logic combined with image processing techniques. The developed application, named CHRIMAN, is composed of modules that cover the methodological steps required for an automated analysis. The first step is the standardization of the two-dimensional digital image acquisition procedure, achieved by coupling a simple digital camera to the ocular of the conventional metaphase analysis microscope. The second step is image treatment, achieved through the application of digital filters, and the storage and organization of information obtained both from the image content itself and from selected extracted features, for further use in pattern recognition algorithms. The third step consists of characterizing, counting and classifying the stored digital images and extracted feature information. The accuracy in the recognition of chromosome images is 93.9%. This classification is based on the classical standards of Buckton [1973], and supports geneticists in the chromosome analysis procedure, decreasing analysis time and creating conditions for including this method in a broader system for evaluating human cell damage due to ionizing radiation exposure. (author)

  19. Automated high-content live animal drug screening using C. elegans expressing the aggregation prone serpin α1-antitrypsin Z.

    Directory of Open Access Journals (Sweden)

    Sager J Gosai

    2010-11-01

    Full Text Available The development of preclinical models amenable to live animal bioactive compound screening is an attractive approach to discovering effective pharmacological therapies for disorders caused by misfolded and aggregation-prone proteins. In general, however, live animal drug screening is labor and resource intensive, and has been hampered by the lack of robust assay designs and high throughput work-flows. Based on their small size, tissue transparency and ease of cultivation, the use of C. elegans should obviate many of the technical impediments associated with live animal drug screening. Moreover, their genetic tractability and accomplished record for providing insights into the molecular and cellular basis of human disease, should make C. elegans an ideal model system for in vivo drug discovery campaigns. The goal of this study was to determine whether C. elegans could be adapted to high-throughput and high-content drug screening strategies analogous to those developed for cell-based systems. Using transgenic animals expressing fluorescently-tagged proteins, we first developed a high-quality, high-throughput work-flow utilizing an automated fluorescence microscopy platform with integrated image acquisition and data analysis modules to qualitatively assess different biological processes including, growth, tissue development, cell viability and autophagy. We next adapted this technology to conduct a small molecule screen and identified compounds that altered the intracellular accumulation of the human aggregation prone mutant that causes liver disease in α1-antitrypsin deficiency. This study provides powerful validation for advancement in preclinical drug discovery campaigns by screening live C. elegans modeling α1-antitrypsin deficiency and other complex disease phenotypes on high-content imaging platforms.

  20. A review of content-based image retrieval systems in medical applications-clinical benefits and future directions.

    Science.gov (United States)

    Müller, Henning; Michoux, Nicolas; Bandon, David; Geissbuhler, Antoine

    2004-02-01

    Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet, underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretations remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second-largest producer of digital images, especially with videos of cardiac catheterization (approximately 1,800 exams per year, each containing almost 2,000 images). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set, and patient information can be stored with the actual image(s), although a few problems with respect to standardization remain.
In several articles, content-based access to medical images has been proposed to support clinical decision-making, which would ease the management of clinical data, and scenarios for the integration of

  1. The analysis of image feature robustness using CometCloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges including out of focus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include co-occurrence matrix, center-symmetric auto-correlation, texture feature coding method, local binary pattern, and texton. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time. It is roughly 10 times slower than local binary pattern and texton. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
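    One of the evaluated descriptors, the local binary pattern, can be sketched for a single 3x3 patch (this is the basic 8-neighbour variant; the paper's exact parameterization may differ):

```python
def lbp_code(patch):
    """8-bit local binary pattern code for a 3x3 patch: each
    neighbour, taken clockwise from the top-left, contributes one
    bit set when its value is >= the centre pixel."""
    c = patch[1][1]
    # clockwise neighbour order starting at the top-left pixel
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # → 241
```

    A texture feature is then the histogram of these codes over an image region, which is what makes LBP so fast compared with descriptors such as center-symmetric auto-correlation.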

  2. The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition

    Science.gov (United States)

    Goss, Tristan M.

    2016-05-01

    With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor is a more cost-effective option, with reduced time to market, than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small-pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures when compared to their SD predecessors. Therefore, as an additional constraint and as a design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.
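    A common rule of thumb for judging whether a detector under- or over-samples the optics is the Fλ/d ratio (F-number times wavelength, divided by pixel pitch); the pitches and F-number below are illustrative, not taken from the paper:

```python
def sampling_ratio(pixel_pitch_um, fnumber, wavelength_um=4.0):
    """F-lambda/d ratio used to judge sampling of diffraction-limited
    optics by a detector array. Flambda/d = 2 corresponds to Nyquist
    sampling of the optics' diffraction cutoff; values well below 2
    indicate under-sampling."""
    return fnumber * wavelength_um / pixel_pitch_um

# An illustrative 15 um SD pixel vs a 10 um HD pixel behind f/4
# optics at a 4 um MWIR wavelength
print(round(sampling_ratio(15.0, 4.0), 2))  # → 1.07
print(round(sampling_ratio(10.0, 4.0), 2))  # → 1.6
```

    Under these assumed numbers, the smaller HD pitch moves the sensor closer to optimal sampling behind the same optics, which is the effect the upgrade analysis quantifies.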

  3. Genetic Analysis of Reduced γ-Tocopherol Content in Ethiopian Mustard Seeds.

    Science.gov (United States)

    García-Navarro, Elena; Fernández-Martínez, José M; Pérez-Vich, Begoña; Velasco, Leonardo

    2016-01-01

    Ethiopian mustard (Brassica carinata A. Braun) line BCT-6, with reduced γ-tocopherol content in the seeds, has been previously developed. The objective of this research was to conduct a genetic analysis of seed tocopherols in this line. BCT-6 was crossed with the conventional line C-101 and the F1, F2, and BC plant generations were analyzed. Generation mean analysis using individual scaling tests indicated that reduced γ-tocopherol content fitted an additive-dominant genetic model with predominance of additive effects and absence of epistatic interactions. This was confirmed through a joint scaling test and additional testing of the goodness of fit of the model. Conversely, epistatic interactions were identified for total tocopherol content. Estimation of the minimum number of genes suggested that both γ- and total tocopherol content may be controlled by two genes. A positive correlation between total tocopherol content and the proportion of γ-tocopherol was identified in the F2 generation. Additional research on the feasibility of developing germplasm with high tocopherol content and reduced concentration of γ-tocopherol is required.
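
The "minimum number of genes" estimate mentioned above is classically obtained with the Castle-Wright estimator; the abstract does not state which estimator the authors used, so the sketch below shows that standard choice with made-up tocopherol values:

```python
# Hedged sketch: Castle-Wright estimator of the minimum number of
# effective genes (one standard estimator; not necessarily the one used
# in the study). All numeric values below are illustrative.
def castle_wright(mean_p1, mean_p2, var_f2, var_f1):
    """n ~ R^2 / (8 * segregation variance), where R is the difference
    between parental means and the segregation variance is approximated
    by Var(F2) - Var(F1) (F1 variance taken as environmental)."""
    seg_var = var_f2 - var_f1
    if seg_var <= 0:
        raise ValueError("F2 variance must exceed F1 variance")
    return (mean_p1 - mean_p2) ** 2 / (8.0 * seg_var)

# Hypothetical parental means of 60% and 20% gamma-tocopherol, with
# F2 variance 130 and F1 variance 30:
print(castle_wright(60.0, 20.0, 130.0, 30.0))  # -> 2.0
```

The estimator is a lower bound: linkage, unequal gene effects, and dominance all bias it downward, which is why such results are phrased as "may be controlled by two genes."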

  4. Genetic Analysis of Reduced γ-Tocopherol Content in Ethiopian Mustard Seeds

    Directory of Open Access Journals (Sweden)

    Elena García-Navarro

    2016-01-01

    Full Text Available Ethiopian mustard (Brassica carinata A. Braun) line BCT-6, with reduced γ-tocopherol content in the seeds, has been previously developed. The objective of this research was to conduct a genetic analysis of seed tocopherols in this line. BCT-6 was crossed with the conventional line C-101 and the F1, F2, and BC plant generations were analyzed. Generation mean analysis using individual scaling tests indicated that reduced γ-tocopherol content fitted an additive-dominant genetic model with predominance of additive effects and absence of epistatic interactions. This was confirmed through a joint scaling test and additional testing of the goodness of fit of the model. Conversely, epistatic interactions were identified for total tocopherol content. Estimation of the minimum number of genes suggested that both γ- and total tocopherol content may be controlled by two genes. A positive correlation between total tocopherol content and the proportion of γ-tocopherol was identified in the F2 generation. Additional research on the feasibility of developing germplasm with high tocopherol content and reduced concentration of γ-tocopherol is required.

  5. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources, and the detection of objects in quantum-noise-limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package, is described. Intelligent averaging of molecular images is discussed. (C.F.)

  6. Environmental High-content Fluorescence Microscopy (e-HCFM) of Tara Oceans Samples Provides a View of Global Ocean Protist Biodiversity

    Science.gov (United States)

    Coelho, L. P.; Colin, S.; Sunagawa, S.; Karsenti, E.; Bork, P.; Pepperkok, R.; de Vargas, C.

    2016-02-01

    Protists are responsible for much of the diversity in the eukaryotic kingdom and are crucial to several biogeochemical processes of global importance (e.g., the carbon cycle). Recent global investigations of these organisms have relied on sequence-based approaches. These methods do not, however, capture the complex functional morphology of these organisms, nor can they typically capture phenomena such as interactions (except indirectly through statistical means). Direct imaging of these organisms can therefore provide a valuable complement to sequencing and, when performed quantitatively, provide measures of structures and interaction patterns which can then be related back to sequence-based measurements. Towards this end, we developed a framework, environmental high-content fluorescence microscopy (e-HCFM), which can be applied to environmental samples composed of mixed communities. This strategy is based on general-purpose dyes that stain major structures in eukaryotes. Samples are imaged using scanning confocal microscopy, resulting in a three-dimensional image stack. High throughput can be achieved using automated microscopy and computational analysis. Standard bioimage informatics segmentation methods combined with feature computation and machine learning result in automatic taxonomic assignments for the imaged objects, in addition to several biochemically relevant measurements (such as biovolumes and fluorescence estimates) per organism. We provide results on 174 image acquisitions from Tara Oceans samples, which cover organisms from 5 to 180 microns (82 samples in the 5-20 micron fraction, 96 in the 20-180 micron fraction). We validate the approach on technical grounds (demonstrating the high accuracy of automated classification) and present results obtained from image analysis and from integration with other data, such as associated environmental parameters measured in situ, as well as perspectives on integration with sequence information.

  7. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-06-01

    The medical field has seen phenomenal improvement in recent years. The invention of computers, with corresponding increases in processing and internet speed, has changed the face of medical technology. However, there is still scope for improvement of the technologies in use today. One such technology of medical aid is the detection of afflictions of the eye. Although a substantial body of research has been carried out in this field, most of it fails to address how to take detection forward to a stage where it benefits society at large. An automated system that can predict the current medical condition of a patient from a fundus image of the eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval (CBIR) to develop an automation tool. This knowledge would bring about worthy changes in the domain of exudate extraction from eye images, which is essential in cases where patients may not have access to the best technologies. The paper attempts a comprehensive summary of techniques for CBIR and fundus image feature extraction, reviews selected methods of both, and explores ways to combine the two approaches so as to benefit all.

  8. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

    The medical field has seen phenomenal improvement in recent years. The invention of computers, with corresponding increases in processing and internet speed, has changed the face of medical technology. However, there is still scope for improvement of the technologies in use today. One such technology of medical aid is the detection of afflictions of the eye. Although a substantial body of research has been carried out in this field, most of it fails to address how to take detection forward to a stage where it benefits society at large. An automated system that can predict the current medical condition of a patient from a fundus image of the eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval (CBIR) to develop an automation tool. This knowledge would bring about worthy changes in the domain of exudate extraction from eye images, which is essential in cases where patients may not have access to the best technologies. The paper attempts a comprehensive summary of techniques for CBIR and fundus image feature extraction, reviews selected methods of both, and explores ways to combine the two approaches so as to benefit all.

  9. Content Analysis of Tobacco-related Twitter Posts

    Science.gov (United States)

    Myslín, Mark; Zhu, Shu-Hong; Conway, Michael

    2013-01-01

    Objective: We present results of a content analysis of tobacco-related Twitter posts (tweets), focusing on tweets referencing e-cigarettes and hookah. Introduction: Vast amounts of free, real-time, localizable Twitter data offer new possibilities for public health workers to identify trends and attitudes that more traditional surveillance methods may not capture, particularly in emerging areas of public health concern where reliable statistical evidence is not readily accessible. Existing applications include tracking public informedness during disease outbreaks [1]. Twitter-based surveillance is particularly suited to new challenges in tobacco control. Hookah and e-cigarettes have surged in popularity, yet regulation and public information remain sparse, despite controversial health effects [2,3]. Ubiquitous online marketing of these products and their popularity among new and younger users make Twitter a key resource for tobacco surveillance. Methods: We collected 7,300 tobacco-related Twitter posts at 15-day intervals from December 2011 to July 2012, using ten general keywords such as cig* and hookah. Each tweet was manually classified using a tri-axial scheme, capturing genre (firsthand experience, joke, news, …), theme (underage usage, health, social image, …), and sentiment (positive, negative, neutral). Machine-learning classifiers were trained to detect tobacco-related vs. irrelevant tweets as well as each of the above categories, using Naïve Bayes, k-Nearest Neighbors, and Support Vector Machine algorithms. Finally, phi correlation coefficients were computed between each of the categories to discover emergent patterns. Results: The most prevalent genre of tweets was personal experience, followed by categories such as opinion, marketing, and news. The most common themes were hookah, cessation, and social image, and sentiment toward tobacco was more positive (26%) than negative (20%). The most highly correlated categories were social image
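
The phi correlation coefficient used in the final analysis step above is the Pearson correlation for two binary variables and is easy to compute from a 2x2 contingency table. A minimal sketch, with toy category labels rather than the study's data:

```python
import math

# Hedged sketch: phi correlation between two binary category labels,
# as used to find co-occurring tweet categories.
def phi_coefficient(x, y):
    """Phi for two equal-length 0/1 sequences:
    phi = (n11*n00 - n10*n01) / sqrt(n1. * n0. * n.1 * n.0)."""
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Toy example: hypothetical "social image" and "positive sentiment"
# flags for six tweets.
social = [1, 1, 0, 0, 1, 0]
positive = [1, 1, 0, 0, 0, 1]
print(round(phi_coefficient(social, positive), 3))  # 0.333
```

A phi near +1 means the two categories almost always occur together, near 0 that they are independent, which is what makes it useful for discovering emergent category patterns.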

  10. High energy positron imaging

    International Nuclear Information System (INIS)

    Chen Shengzu

    2003-01-01

    The technique of High Energy Positron Imaging (HEPI) is a new development and extension of Positron Emission Tomography (PET). It consists of High Energy Collimation Imaging (HECI), Dual Head Coincidence Detection Imaging (DHCDI) and Positron Emission Tomography (PET). We describe the history of the development and the basic principles of the HEPI imaging methods in detail in this paper. Finally, the new technique of image fusion, which combines the anatomical image and the functional image, is also introduced briefly

  11. Monitoring of Antarctic moss ecosystems using a high spatial resolution imaging spectroscopy

    Science.gov (United States)

    Malenovsky, Zbynek; Lucieer, Arko; Robinson, Sharon; Harwin, Stephen; Turner, Darren; Veness, Tony

    2013-04-01

    The most abundant photosynthetically active plants growing along the rocky Antarctic shore are mosses of three species: Schistidium antarctici, Ceratodon purpureus, and Bryum pseudotriquetrum. Even though mosses are well adapted to the extreme climate conditions, their existence in Antarctica depends strongly on the availability of liquid water from snowmelt during the short summer season. Recent changes in temperature, wind speed and stratospheric ozone are stimulating faster evaporation, which in turn influences moss growth rate, health state and abundance. This makes them an ideal bio-indicator of Antarctic climate change. The very short growing season, lasting only about three months, requires a time-efficient, easily deployable and spatially resolved method for monitoring the Antarctic moss beds. Ground and/or low-altitude airborne imaging spectroscopy (also called hyperspectral remote sensing) offers a fast and spatially explicit approach to investigate the actual spatial extent and physiological state of moss turfs. A dataset of ground-based spectral images was acquired with a mini-Hyperspec imaging spectrometer (Headwall Inc., USA) during the Antarctic summer of 2012 in the surroundings of the Australian Antarctic station Casey (Windmill Islands). The collection of high spatial resolution spectral images, with pixels about 2 cm in size containing from 162 up to 324 narrow spectral bands at wavelengths between 399 and 998 nm, was accompanied by point moss reflectance measurements recorded with the ASD HandHeld-2 spectroradiometer (Analytical Spectral Devices Inc., USA). The first spectral analysis indicates significant differences in red-edge and near-infrared reflectance of differently watered moss patches. In contrast to higher plants, where the Normalized Difference Vegetation Index (NDVI) represents an estimate of green biomass, the NDVI of mosses indicates mainly the actual water content. As in higher plants, reflectance of visible wavelengths is
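
The NDVI mentioned above is a simple band ratio computed per pixel from near-infrared and red reflectance. A minimal sketch with illustrative reflectance values (the band choices and numbers are not from the study):

```python
# Minimal NDVI sketch. For mosses, the abstract notes NDVI mainly tracks
# actual water content rather than green biomass.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), guarded against a zero denominator."""
    total = nir + red
    return (nir - red) / total if total != 0 else 0.0

# Toy reflectances: a well-watered moss patch vs a desiccated one.
print(round(ndvi(0.45, 0.05), 3))  # 0.8
print(round(ndvi(0.30, 0.15), 3))  # 0.333
```

On a hyperspectral cube, the same formula is applied pixel-wise using a narrow band near the red absorption and one on the near-infrared plateau, producing a water-status map of the moss bed.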

  12. Content Analysis of Trends in Print Magazine Tobacco Advertisements.

    Science.gov (United States)

    Banerjee, Smita; Shuk, Elyse; Greene, Kathryn; Ostroff, Jamie

    2015-07-01

    To provide a descriptive and comparative content analysis of tobacco print magazine ads, with a focus on rhetorical and persuasive themes. Print tobacco ads for cigarettes, cigars, e-cigarettes, moist snuff, and snus (N = 171) were content analyzed for the physical composition/ad format (e.g., size of ad, image, setting, branding, warning label) and the content of the ad (e.g., rhetorical themes, persuasive themes). The theme of pathos (that elicits an emotional response) was most frequently utilized for cigarette (61%), cigar (50%), and moist snuff (50%) ads, and the theme of logos (use of logic or facts to support position) was most frequently used for e-cigarette (85%) ads. Additionally, comparative claims were most frequently used for snus (e.g., "spit-free," "smoke-free") and e-cigarette ads (e.g., "no tobacco smoke, only vapor," "no odor, no ash"). Comparative claims were also used in cigarette ads, primarily to highlight availability in different flavors (e.g., "bold," "menthol"). This study has implications for tobacco product marketing regulation, particularly around limiting tobacco advertising in publications with a large youth readership and prohibiting false or misleading labels, labeling, and advertising for tobacco products, such as modified risk (unless approved by the FDA) or therapeutic claims.

  13. MRI findings and hematoma contents of chronic subdural hematomas

    Energy Technology Data Exchange (ETDEWEB)

    Keyaki, Atsushi; Makita, Yasumasa; Nabeshima, Sachio; Tei, Taikyoku; Lee, Young-Eun; Higashi, Toshio; Matsubayashi, Keiko; Miki, Yukio; Matsuo, Michimasa (Tenri Hospital, Nara (Japan))

    1991-02-01

    Twenty-six cases of chronic subdural hematomas (CSDHs) were studied with reference to magnetic resonance image (MRI) findings and biochemical analysis of the hematoma contents. There were 5 cases of bilateral CSDH. An apparent history of head trauma was obtained in 13 cases. All cases were evaluated preoperatively with both computed tomography (CT) and MRI. MRI was performed with both T{sub 1}-weighted (spin echo, TR/TE 600/15) imaging (T{sub 1}WI) and T{sub 2}-weighted (spin echo, TR/TE 3,000/90) imaging (T{sub 2}WI). Biochemical analysis of the hematoma contents covered hematocrit (Ht), total protein (TP), methemoglobin (Met-Hb), total cholesterol (Tchol), triglyceride (TG), fibrin and fibrinogen degradation products (FDP), Fe, and osmolarity (Osm). The CT findings were divided into four groups: 5 cases of low-density, 7 cases of isodensity, 13 cases of high-density, and 5 cases of mixed-density hematomas. The MRI findings were likewise divided into 18 cases of high-, 4 cases of iso-, and 2 cases of low-signal-intensity hematomas on T{sub 1}WI. On T{sub 2}WI, 18 cases were high-, 4 cases were iso-, and 2 cases were low-signal-intensity hematomas. Twelve cases were high-signal-intensity hematomas on both T{sub 1}WI and T{sub 2}WI. Comparing the CT and MRI findings, hematomas of low and isodensity on CT showed high signal intensities on T{sub 1}WI except in one case. The high-density hematomas on CT showed variable signal intensity on MRI. The Ht value showed no apparent correlation with the MRI findings; however, increased values of TP in hematomas tended to show higher signal intensities on T{sub 1}WI. The most apparent correlation was seen between the Met-Hb ratio and T{sub 1}WI: all hematomas containing >10% Met-Hb showed high signal intensities on T{sub 1}WI. The CT, MRI, and biochemical analysis results of the hematoma contents are presented for 3 cases. (J.P.N.).

  14. Breast cancer histopathology image analysis : a review

    NARCIS (Netherlands)

    Veta, M.; Pluim, J.P.W.; Diest, van P.J.; Viergever, M.A.

    2014-01-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology

  15. Active Learning Strategies for Phenotypic Profiling of High-Content Screens.

    Science.gov (United States)

    Smith, Kevin; Horvath, Peter

    2014-06-01

    High-content screening is a powerful method to discover new drugs and carry out basic biological research. Increasingly, high-content screens have come to rely on supervised machine learning (SML) to perform automatic phenotypic classification as an essential step of the analysis. However, this comes at a cost, namely, the labeled examples required to train the predictive model. Classification performance increases with the number of labeled examples, and because labeling examples demands time from an expert, the training process represents a significant time investment. Active learning strategies attempt to overcome this bottleneck by presenting the most relevant examples to the annotator, thereby achieving high accuracy while minimizing the cost of obtaining labeled data. In this article, we investigate the impact of active learning on single-cell-based phenotype recognition, using data from three large-scale RNA interference high-content screens representing diverse phenotypic profiling problems. We consider several combinations of active learning strategies and popular SML methods. Our results show that active learning significantly reduces the time cost and can be used to reveal the same phenotypic targets identified using SML. We also identify combinations of active learning strategies and SML methods which perform better than others on the phenotypic profiling problems we studied. © 2014 Society for Laboratory Automation and Screening.
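
One of the simplest active-learning strategies of the kind compared above is uncertainty sampling: repeatedly asking the annotator to label the example the current classifier is least sure about. The sketch below uses a stand-in scoring function, not the SML models from the study:

```python
# Hedged sketch of uncertainty sampling (one active-learning strategy):
# pick the unlabeled cell whose predicted phenotype probability is
# closest to the 0.5 decision boundary.
def most_uncertain(pool, predict_proba):
    """Return the index of the unlabeled example whose predicted
    probability is nearest 0.5."""
    return min(range(len(pool)),
               key=lambda i: abs(predict_proba(pool[i]) - 0.5))

# Toy pool scored by a fake model (here the "feature" already is the
# probability, purely for illustration).
pool = [0.1, 0.45, 0.9, 0.62]
proba = lambda x: x
print(most_uncertain(pool, proba))  # index 1 (p = 0.45)
```

In a real loop, the selected example is labeled by the expert, added to the training set, the classifier is retrained, and the selection repeats, which is how these strategies reach high accuracy with far fewer labels.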

  16. Nuclear physical express analysis of solid fuel sulphur content

    International Nuclear Information System (INIS)

    Pak, Yu.; Ponomaryova, M

    2005-01-01

    Full text: Sulphur content is an important qualitative parameter of coal. The problem of determining the sulphur content of coal remains one of the most important both in Kazakhstan and in other coal-mining countries. The traditional method of sampling, whose final stage is chemical analysis of the coal for sulphur, is labour-intensive and has low productivity; it is therefore ineffective for mass express analytical quality control and for controlling technological schemes of coal processing. It is thus urgent to develop a method for determining coal sulphur content based on standard nuclear-geophysical equipment with an isotope source of primary radiation, which would increase the representativity of the analysis and account as fully as possible for the variability of the real composition of coal. Solving this problem requires studying the main laws of the X-ray-radiometric method as applied to coal quality analysis, in order to develop instrumental methods for rapid determination of coal sulphur content with accuracy satisfactory for technological tasks; determining how the fluxes of characteristic X-rays and scattered radiation depend on the sulphur content of coals of various real compositions; and optimizing methodological and hardware parameters to minimize the error of sulphur content control. Based on a study of the laws governing the components of real coal composition and their interconnections with sulphur content, the expediency of using the hardware functions of calcium and iron to control coal sulphur content has been substantiated; a model has been suggested for estimating the methodological error of determining coal sulphur content from data on sensitivity to sulphur and interfering factors, using limiting methods of coal component substitution, which allows the sulphur control parameters to be optimized; and an algorithm of X-ray-radiometric control of sulphur content has been worked out, based on sequentially irradiating the analyzed coal with gamma-radiation of

  17. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    Science.gov (United States)

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman imaging processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR-processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without
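
The general flavor of a z-score-plus-regression contrast can be sketched simply: normalize each pixel's spectrum to zero mean and unit variance, then score it by a least-squares fit against a z-scored reference spectrum. This is a loose illustration under those assumptions, not the authors' exact Z-LSR algorithm:

```python
import math

# Hedged sketch (not the published Z-LSR): z-score normalize spectra,
# then compute a per-pixel least-squares coefficient against a reference.
def zscore(v):
    """Zero-mean, unit-variance version of a spectrum."""
    m = sum(v) / len(v)
    s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return [(x - m) / s for x in v] if s else [0.0] * len(v)

def zlsr_score(spectrum, reference):
    """Least-squares coefficient of the z-scored spectrum against the
    z-scored reference: dot(z_s, z_r) / dot(z_r, z_r)."""
    zs, zr = zscore(spectrum), zscore(reference)
    denom = sum(r * r for r in zr)
    return sum(a * b for a, b in zip(zs, zr)) / denom if denom else 0.0

# A pixel whose spectrum is a scaled copy of the reference scores ~1,
# so scaling/offset variations do not disturb the contrast.
ref = [1.0, 3.0, 2.0, 5.0]
print(round(zlsr_score([2.0, 6.0, 4.0, 10.0], ref), 3))  # 1.0
```

Because the score is a closed-form dot product per pixel, it runs in a single pass over the image, which is consistent with the near-real-time behavior reported for Z-LSR versus SVD.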

  18. Measuring populism: comparing two methods of content analysis

    NARCIS (Netherlands)

    Rooduijn, M.; Pauwels, T.

    2011-01-01

    The measurement of populism - particularly over time and space - has received only scarce attention. In this research note two different ways to measure populism are compared: a classical content analysis and a computer-based content analysis. An analysis of political parties in the United Kingdom,

  19. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  20. Tourist-created Content

    DEFF Research Database (Denmark)

    Munar, Ana Maria

    2011-01-01

    study of social media sites and destination brands, relying on qualitative research methods, content analysis and field research. Findings – Tourists are largely contributing to destination image formation, while avoiding the use of the formal elements of the brands. The most popular strategies used...

  1. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used, or emerging as useful, to extract data from images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  2. The utilization of human color categorization for content-based image retrieval

    NARCIS (Netherlands)

    van den Broek, Egon; Rogowitz, Bernice E.; Kisters, Peter M.F.; Pappas, Thrasyvoulos N.; Vuurpijl, Louis G.

    2004-01-01

    We present the concept of intelligent Content-Based Image Retrieval (iCBIR), which incorporates knowledge concerning human cognition in system development. The present research focuses on the utilization of color categories (or focal colors) for CBIR purposes, in particular considered to be useful

  3. Texture analysis of B-mode ultrasound images to stage hepatic lipidosis in the dairy cow: A methodological study.

    Science.gov (United States)

    Banzato, Tommaso; Fiore, Enrico; Morgante, Massimo; Manuali, Elisabetta; Zotti, Alessandro

    2016-10-01

    Hepatic lipidosis is the most common hepatic disease in the lactating cow. A new methodology to estimate the degree of fatty infiltration of the liver in lactating cows by means of texture analysis of B-mode ultrasound images is proposed. B-mode ultrasonography of the liver was performed in 48 Holstein Friesian cows using standardized ultrasound parameters. Liver biopsies to determine the triacylglycerol content of the liver (TAGqa) were obtained from each animal. A large number of texture parameters were calculated on the ultrasound images by means of free software. Based on the TAGqa content of the liver, 29 samples were classified as mild (TAGqa < 100 mg/g) and 13 as severe (TAGqa > 100 mg/g) steatosis. Stepwise linear regression analysis was performed to predict the TAGqa content of the liver (TAGpred) from the texture parameters calculated on the ultrasound images. A five-variable model was used to predict the TAG content from the ultrasound images. The regression model explained 83.4% of the variance. An area under the curve (AUC) of 0.949 was calculated for 50 mg/g of TAGqa; using an optimal cut-off value of 72 mg/g, TAGpred had a sensitivity of 86.2% and a specificity of 84.2%. An AUC of 0.978 for 100 mg/g of TAGqa was calculated; using an optimal cut-off value of 89 mg/g, TAGpred sensitivity was 92.3% and specificity was 88.6%. Texture analysis of B-mode ultrasound images may therefore be used to accurately predict the TAG content of the liver in lactating cows. Copyright © 2016 Elsevier Ltd. All rights reserved.
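
The reported sensitivity and specificity at the 72 mg/g and 89 mg/g cut-offs come from tabulating predicted against biopsy values. A minimal sketch of that tabulation, with toy data rather than the study's measurements:

```python
# Hedged sketch: sensitivity and specificity of a predicted-TAG cutoff
# against the biopsy "ground truth". All values below are toy data.
def sens_spec(pred, truth, pred_cutoff, truth_cutoff):
    """Positive = value above the cutoff. Returns (sensitivity,
    specificity) from the 2x2 table of predicted vs true positives."""
    tp = sum(1 for p, t in zip(pred, truth) if p > pred_cutoff and t > truth_cutoff)
    fn = sum(1 for p, t in zip(pred, truth) if p <= pred_cutoff and t > truth_cutoff)
    tn = sum(1 for p, t in zip(pred, truth) if p <= pred_cutoff and t <= truth_cutoff)
    fp = sum(1 for p, t in zip(pred, truth) if p > pred_cutoff and t <= truth_cutoff)
    return tp / (tp + fn), tn / (tn + fp)

pred = [30, 80, 95, 110, 60, 120]    # TAGpred from a texture model (toy)
truth = [20, 90, 120, 130, 40, 150]  # TAGqa from biopsy (toy)
print(sens_spec(pred, truth, 72, 100))
```

Sweeping the predicted cutoff and plotting sensitivity against (1 - specificity) traces the ROC curve whose area gives the reported AUC values.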

  4. Automated image analysis of atomic force microscopy images of rotavirus particles

    International Nuclear Information System (INIS)

    Venkataraman, S.; Allison, D.P.; Qi, H.; Morrell-Falvey, J.L.; Kallewaard, N.L.; Crowe, J.E.; Doktycz, M.J.

    2006-01-01

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM

  5. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.
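A minimal sketch of the kind of automated particle extraction these records describe, using connected-component labeling on a synthetic height map; the threshold, data, and equivalent-circle diameter measure are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

# synthetic AFM "height map": two particles on a flat background (hypothetical data)
img = np.zeros((40, 40))
img[5:12, 5:12] = 3.0       # particle 1
img[20:30, 22:34] = 5.0     # particle 2

mask = img > 1.0                             # threshold above the background
labels, n = ndimage.label(mask)              # connected-component labeling
idx = np.arange(1, n + 1)
heights = ndimage.maximum(img, labels, index=idx)   # peak height per particle
areas = ndimage.sum(mask, labels, index=idx)        # footprint in pixels
diameters = 2.0 * np.sqrt(np.asarray(areas) / np.pi)  # equivalent-circle diameter
```

With many particles, the resulting arrays feed directly into the statistical analyses of dimensional characteristics the abstract mentions (means, distributions, outlier detection).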

  6. The Study on the Attenuation of X-ray and Imaging Quality by Contents in Stomach

    International Nuclear Information System (INIS)

    Dong, Kyung Rae; Ji, Youn Sang; Kim, Chang Bok; Choi, Seong Kwan; Moon, Sang In; Dieter, Kevin

    2009-01-01

    This study examined the change in the attenuation of X-rays with the ROI (Region of Interest) in DR (Digital Radiography) according to the stomach contents by manufacturing a tissue equivalent material phantom to simulate real stomach tissue, based on the assumption that there is some attenuation of X-rays and a difference in imaging quality according to the stomach contents. The transit dosage by the attenuation of X-rays decreased with increasing protein thickness, which altered the average ROI values in the film and DR images. A comparison of the change in average ROI values of the film and DR images showed that the film image exhibited larger density changes with varying thickness of protein than the DR image. The results indicate that NPO (nothing by mouth) is more important in a film system than in a DR system.

  7. The Study on the Attenuation of X-ray and Imaging Quality by Contents in Stomach

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Kyung Rae; Ji, Youn Sang; Kim, Chang Bok; Choi, Seong Kwan; Moon, Sang In [Dept. of Radiological Technology, Gwangju Health College University, Gwangju (Korea, Republic of); Dieter, Kevin [Dept. of Physical Therapy, Gwangju Health College University, Gwangju (Korea, Republic of)

    2009-03-15

    This study examined the change in the attenuation of X-rays with the ROI (Region of Interest) in DR (Digital Radiography) according to the stomach contents by manufacturing a tissue equivalent material phantom to simulate real stomach tissue, based on the assumption that there is some attenuation of X-rays and a difference in imaging quality according to the stomach contents. The transit dosage by the attenuation of X-rays decreased with increasing protein thickness, which altered the average ROI values in the film and DR images. A comparison of the change in average ROI values of the film and DR images showed that the film image exhibited larger density changes with varying thickness of protein than the DR image. The results indicate that NPO (nothing by mouth) is more important in a film system than in a DR system.

  8. Examination of bariatric surgery Facebook support groups: a content analysis.

    Science.gov (United States)

    Koball, Afton M; Jester, Dylan J; Domoff, Sarah E; Kallies, Kara J; Grothe, Karen B; Kothari, Shanu N

    2017-08-01

    Support following bariatric surgery is vital to ensure long-term postoperative success. Many individuals undergoing bariatric surgery are turning to online modalities, especially the popular social media platform Facebook, to access support groups and pages. Despite evidence suggesting that the majority of patients considering bariatric surgery are utilizing online groups, little is known about the actual content of these groups. The purpose of the present study was to conduct a content analysis of bariatric surgery support groups and pages on Facebook. Setting: online via Facebook; independent academic medical center, United States. Data from bariatric surgery-related Facebook support groups and pages were extracted over a 1-month period in 2016. Salient content themes (e.g., progress posts, depression content, eating behaviors) were coded reliably (all κ > .70). More than 6,800 posts and replies were coded. Results indicated that seeking recommendations (11%), providing information or recommendations (53%), commenting on changes since surgery (19%), and lending support to other members (32%) were the most common types of posts. Content surrounding anxiety, eating behaviors, depression, body image, weight bias, and alcohol was found less frequently. Online bariatric surgery groups can be used to receive support, celebrate physical and emotional accomplishments, provide anecdotal accounts of the "bariatric lifestyle" for preoperative patients, and comment on challenges with mental health and experiences of weight bias. Providers should become acquainted with the content commonly found in online groups and exercise caution in recommending these platforms to information-seeking patients. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  9. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    Science.gov (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
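The PCA-based PSF subtraction family mentioned above can be illustrated with a small numpy sketch. This is not VIP's actual API, and it omits ADI-specific steps such as frame derotation and median combination; it only shows the core low-rank subtraction step.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp):
    """Low-rank PSF subtraction: model each frame as a combination of the
    first `ncomp` principal components of the (flattened) frame stack,
    then keep the residuals where faint companions may survive."""
    nframes, ny, nx = cube.shape
    X = cube.reshape(nframes, ny * nx)
    X = X - X.mean(axis=0)                     # mean-subtract per pixel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :ncomp] * s[:ncomp]) @ Vt[:ncomp]  # rank-ncomp PSF model
    residuals = X - low_rank
    return residuals.reshape(nframes, ny, nx)

# synthetic stack of 10 noisy 16x16 frames (hypothetical data)
rng = np.random.default_rng(0)
cube = rng.normal(size=(10, 16, 16))
res = pca_psf_subtract(cube, ncomp=3)
```

In a real ADI pipeline the residual frames would then be derotated to a common field orientation and combined, which is where the quasi-static speckle suppression pays off.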

  10. Determination of renewable energy yield from mixed waste material from the use of novel image analysis methods.

    Science.gov (United States)

    Wagland, S T; Dudley, R; Naftaly, M; Longhurst, P J

    2013-11-01

    Two novel techniques are presented in this study which together aim to provide a system able to determine the renewable energy potential of mixed waste materials. An image analysis tool was applied to two waste samples prepared using known quantities of source-segregated recyclable materials. The technique was used to determine the composition of the wastes, where through the use of waste component properties the biogenic content of the samples was calculated. The percentage renewable energy determined by image analysis for each sample was accurate to within 5% of the actual values calculated. Microwave-based multiple-point imaging (AutoHarvest) was used to demonstrate the ability of such a technique to determine the moisture content of mixed samples. This proof-of-concept experiment produced moisture measurements accurate to within 10%. Overall, the image analysis tool was able to determine the renewable energy potential of the mixed samples, and the AutoHarvest should enable the net calorific value calculations through the provision of moisture content measurements. The proposed system is suitable for combustion facilities, and enables the operator to understand the renewable energy potential of the waste prior to combustion. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Wavelet optimization for content-based image retrieval in medical databases.

    Science.gov (United States)

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
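A toy version of the signature-and-distance idea described above: a one-level Haar transform stands in for the optimized lifting-scheme wavelets, and each detail subband is summarized by two distribution statistics. The choice of statistics and the unit subband weights are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation plus horizontal,
    vertical and diagonal detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def signature(img):
    """Summarize the coefficient distribution of each detail subband by
    its mean absolute value and standard deviation (6 numbers total)."""
    _, lh, hl, hh = haar2d(img)
    return np.array([stat(band) for band in (lh, hl, hh)
                     for stat in (lambda b: np.abs(b).mean(), np.std)])

def distance(img1, img2, weights=None):
    """Weighted Euclidean distance between two image signatures; the
    weights are the tunable degrees of freedom the abstract mentions."""
    s1, s2 = signature(img1), signature(img2)
    w = np.ones_like(s1) if weights is None else weights
    return float(np.sqrt(np.sum(w * (s1 - s2) ** 2)))

rng = np.random.default_rng(1)
query = rng.normal(size=(8, 8))
d_self = distance(query, query)
d_flat = distance(query, np.zeros((8, 8)))
```

Retrieval then amounts to ranking all database images by this distance to the query's signature; in the paper, the wavelet basis and the subband weights are optimized against medical gradings rather than fixed as here.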

  12. CNN-Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
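Zero Component (ZCA) whitening, the preprocessing step named above, can be sketched in a few lines of numpy; the patch size and epsilon below are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """ZCA whitening: decorrelate the pixel dimensions while staying as
    close as possible to the original data (W = E diag(1/sqrt(l+eps)) E^T,
    where E, l are eigenvectors/eigenvalues of the data covariance)."""
    X = patches - patches.mean(axis=0)
    cov = X.T @ X / X.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return X @ W

# 500 flattened 4x4 patches of synthetic data (hypothetical)
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 16))
white = zca_whiten(data)
cov_w = white.T @ white / white.shape[0]   # should be close to the identity
```

Unlike plain PCA whitening, the extra rotation back through `eigvec` keeps whitened patches visually similar to the originals, which is why ZCA is favored for image data.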

  13. Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study.

    Science.gov (United States)

    Vaismoradi, Mojtaba; Turunen, Hannele; Bondas, Terese

    2013-09-01

    Qualitative content analysis and thematic analysis are two commonly used approaches in data analysis of nursing research, but boundaries between the two have not been clearly specified. In other words, they are being used interchangeably and it seems difficult for the researcher to choose between them. In this respect, this paper describes and discusses the boundaries between qualitative content analysis and thematic analysis and presents implications to improve the consistency between the purpose of related studies and the method of data analyses. This is a discussion paper, comprising an analytical overview and discussion of the definitions, aims, philosophical background, data gathering, and analysis of content analysis and thematic analysis, and addressing their methodological subtleties. It is concluded that in spite of many similarities between the approaches, including cutting across data and searching for patterns and themes, their main difference lies in the opportunity for quantification of data. It means that measuring the frequency of different categories and themes is possible in content analysis with caution as a proxy for significance. © 2013 Wiley Publishing Asia Pty Ltd.

  14. Strong is the new skinny: A content analysis of fitspiration websites.

    Science.gov (United States)

    Boepple, Leah; Ata, Rheanna N; Rum, Ruba; Thompson, J Kevin

    2016-06-01

    "Fitspiration" websites are media that aim to inspire people to live healthy and fit lifestyles through motivating images and text related to exercise and diet. Given the link between similar Internet content (i.e., healthy living blogs) and problematic messages, we hypothesized that content on these sites would over-emphasize appearance and promote problematic messages regarding exercise and diet. Keywords "fitspo" and "fitspiration" were entered into search engines. The first 10 images and text from 51 individual websites were rated on a variety of characteristics. Results indicated that a majority of messages found on fitspiration websites focused on appearance. Other common themes included content promoting exercise for appearance-motivated reasons and content promoting dietary restraint. "Fitspiration" websites are a source of messages that reinforce over-valuation of physical appearance, eating concerns, and excessive exercise. Further research is needed to examine the impact viewing such content has on participants' psychological health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Content Analysis of the Science Textbooks of Iranian Junior High School Course in terms of the Components of Health Education

    Directory of Open Access Journals (Sweden)

    Abdolreza Gilavand

    2016-12-01

    Full Text Available Background: Providing healthcare for students is one of the primary duties of the state. This study aimed to analyze the contents of the science textbooks of the junior high school course in terms of the components of health education in Iran. Materials and Methods: This descriptive study was conducted through content analysis. To collect data, a researcher-made checklist was used, covering: physical health, nutritional health, the environment and environmental health, family health, accidents and safety, mobility and physical education, mental health, prevention of risky behavior, control and prevention of diseases, disabilities, and public and school health. The samples were the science textbooks of the junior high school course (7th, 8th and 9th grades). The unit of analysis was all pages of the textbooks (texts, pictures and exercises). Descriptive methods (frequency tables, percentages, mean and standard deviation [SD]) were used to analyze the data, and the non-parametric Chi-square test was used to investigate probable significant differences between the components. Results: The results showed that the authors of the science textbooks of the junior high school course paid the most attention to the component of control and prevention of diseases (21.10%) and paid no attention to the component of mental health. Also, there were significant differences among the components of physical health, family health, the environment and environmental health in terms of being addressed in the science textbooks of the junior high school (P

  16. Improving high resolution retinal image quality using speckle illumination HiLo imaging.

    Science.gov (United States)

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-08-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adapted the HiLo technique to a flood-illumination AO ophthalmoscope and performed AO imaging in both a (physical) model eye and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively using spatial spectral analysis.
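A heavily simplified sketch of the HiLo combination: high spatial frequencies (inherently sectioned) are taken from the uniform-illumination image, while low frequencies are taken from the uniform image weighted by the local contrast of the speckle image, which is high only for in-focus structure. Filter sizes and the fusion weight `eta` are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def hilo(uniform_img, speckle_img, sigma=3.0, window=7, eta=1.0):
    """Simplified HiLo fusion of a uniform- and a speckle-illumination image."""
    # Hi: high-pass of the uniform image (out-of-focus light is low-frequency)
    hi = uniform_img - gaussian_filter(uniform_img, sigma)
    # local contrast of the speckle/uniform ratio marks in-focus regions
    ratio = speckle_img / (uniform_img + 1e-9)
    mean = uniform_filter(ratio, size=window)
    var = uniform_filter(ratio ** 2, size=window) - mean ** 2
    contrast = np.sqrt(np.clip(var, 0.0, None))
    # Lo: low-pass of the contrast-weighted uniform image
    lo = gaussian_filter(contrast * uniform_img, sigma)
    return eta * lo + hi

# hypothetical example data: flat object under uniform and speckle illumination
rng = np.random.default_rng(6)
uniform_img = np.full((32, 32), 100.0)
speckle_img = uniform_img * rng.uniform(0.5, 1.5, size=(32, 32))
fused = hilo(uniform_img, speckle_img)
```

In practice `eta` is calibrated so the Hi and Lo bands join seamlessly at the crossover frequency set by `sigma`.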

  17. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human-centered perspective, three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  18. High-contrast direct imaging of exo-planets and circumstellar disks: from the self-coherent camera to NICI data analysis

    International Nuclear Information System (INIS)

    Mazoyer, Johan

    2014-01-01

    Out of the 1800 exo-planets detected to date, only 50 were found by direct imaging. However, by allowing the observation of circumstellar disks and planets (sometimes simultaneously around the same star, as in the case of β Pictoris), this method is a fundamental tool for the understanding of planetary formation. In addition, direct access to the light of the detected objects allows spectroscopy, paving the way for the first time to the chemical and thermal analysis of their atmosphere and surface. However, direct imaging raises specific challenges: accessing objects fainter than their star (with a flux ratio down to 10^-8 to 10^-11), and separated from it by only a fraction of an arcsecond. To reach these values, several techniques must be combined. A coronagraph, used together with a deformable mirror and active optical aberration correction methods, produces high-contrast images, which can be further processed by differential imaging techniques. My PhD thesis work took place at the intersection of these techniques. At first, I analyzed, in simulation and experimentally on the THD (French acronym for very high contrast) bench of the Paris Observatory, the performance of the self-coherent camera, a wavefront sensing technique used to correct optical aberrations from the focal plane. I obtained high-contrast zones (called dark holes) with performance up to 3×10^-8 between 5 and 12 λ/D in monochromatic light. I also started an analysis of the performance in narrow spectral bands. In the second part of my thesis, I applied the latest differential imaging techniques to high-contrast images from another coronagraphic instrument, NICI. The processing of these data revealed unprecedented views of the dust disk orbiting HD 15115. (author)

  19. [Analysis of spectral features based on water content of desert vegetation].

    Science.gov (United States)

    Zhao, Zhao; Li, Xia; Yin, Ye-biao; Tang, Jin; Zhou, Sheng-bin

    2010-09-01

    By using an HR-768 field-portable spectroradiometer made by the Spectra Vista Corporation (SVC) of the USA, hyperspectral data of nine types of desert plants were measured, and the water content of the corresponding vegetation was determined by oven-drying in the lab. The continuum of the measured hyperspectral data was removed using ENVI, and the relationship between the water content of vegetation and the reflectance spectrum was analyzed using the correlation coefficient method. The results show that the correlation between the 978-1030 nm bands and the water content of vegetation is weak, while it is better for the 1133-1266 nm bands. The 1374-1534 nm bands are the characteristic bands because their correlation with water content is the best. Using cluster analysis and according to water content, the vegetation could be divided into three grades: high (>70%), medium (50%-70%) and low (<50%). The research reveals the relationship between the water content of desert vegetation and hyperspectral data, and provides a basis for the analysis of desert areas and the monitoring of desert vegetation using remote sensing data.
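The band-by-band correlation analysis described above can be sketched with synthetic data; the water-sensitive band range, effect size, and sample values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_plants, n_bands = 9, 200
water = rng.uniform(30.0, 90.0, n_plants)        # % water content (invented)
# invented reflectance spectra: pure noise everywhere except a block of
# "water-sensitive" bands whose reflectance drops as water content rises
spectra = rng.normal(size=(n_plants, n_bands))
spectra[:, 100:150] += -0.2 * water[:, None]

# correlation coefficient between each band's reflectance and water content
r = np.array([np.corrcoef(spectra[:, b], water)[0, 1] for b in range(n_bands)])
best_band = int(np.argmax(np.abs(r)))            # candidate characteristic band
```

Bands where |r| is consistently high across samples are the "characteristic bands" in the abstract's sense; with real continuum-removed spectra the band index would map back to a wavelength.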

  20. Using Sentinel-1 and Landsat 8 satellite images to estimate surface soil moisture content.

    Science.gov (United States)

    Mexis, Philippos-Dimitrios; Alexakis, Dimitrios D.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2016-04-01

    Nowadays, the potential has emerged for more accurate assessment of Soil Moisture (SM) content by exploiting Earth Observation (EO) technology and synergistic approaches among a variety of EO instruments. This study is the first to investigate the potential of Synthetic Aperture Radar (SAR) (Sentinel-1) and optical (Landsat 8) images in combination with ground measurements to estimate volumetric SM content in support of water management and agricultural practices. SAR and optical data are downloaded and corrected in terms of atmospheric, geometric and radiometric distortions. SAR images are also corrected for roughness and vegetation with the synergistic use of the Oh and Topp models, using a dataset consisting of backscattering coefficients and corresponding direct measurements of ground parameters (moisture, roughness). Subsequently, various vegetation indices (NDVI, SAVI, MSAVI, EVI, etc.) are estimated to record the vegetation regime within the study area over time and to serve as auxiliary data in the final modeling. Furthermore, thermal images from the optical data are corrected and incorporated into the overall approach. The basic principle of the Thermal InfraRed (TIR) method is that Land Surface Temperature (LST) is sensitive to surface SM content due to its impact on the surface heating process (heat capacity and thermal conductivity) under bare soil or sparse vegetation cover conditions. Ground truth data are collected from a time-domain reflectometry (TDR) gauge network established in western Crete, Greece, during 2015. Algorithms based on Artificial Neural Networks (ANNs) and Multiple Linear Regression (MLR) are used to explore the statistical relationship between backscattering measurements and SM content. Results highlight the potential of SAR and optical satellite images to contribute to effective SM content detection in support of water resources management and precision agriculture. Keywords: Sentinel-1, Landsat 8, Soil
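The MLR branch of the modeling can be sketched with ordinary least squares on synthetic predictors (backscatter, a vegetation index, LST); the generating relation and all values below are invented, not the study's model or data.

```python
import numpy as np

# synthetic predictors (all hypothetical) for n field sites
rng = np.random.default_rng(3)
n = 60
backscatter = rng.normal(-10.0, 3.0, n)    # SAR sigma0 in dB
ndvi = rng.uniform(0.1, 0.7, n)            # vegetation index
lst = rng.normal(300.0, 5.0, n)            # land surface temperature (K)
# invented generating relation for volumetric soil moisture, plus noise
sm = (0.02 * backscatter - 0.1 * ndvi - 0.001 * lst + 0.9
      + rng.normal(0.0, 0.01, n))

# multiple linear regression by ordinary least squares
X = np.column_stack([np.ones(n), backscatter, ndvi, lst])
coef, *_ = np.linalg.lstsq(X, sm, rcond=None)
sm_hat = X @ coef
rmse = np.sqrt(np.mean((sm - sm_hat) ** 2))
```

The ANN variant replaces the linear map `X @ coef` with a learned nonlinear function but is trained against the same TDR ground-truth targets.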

  1. Image Analysis and Estimation of Porosity and Permeability of Arnager Greensand, Upper Cretaceous, Denmark

    DEFF Research Database (Denmark)

    Solymar, Mikael; Fabricius, Ida

    1999-01-01

    Arnager Greensand consists of unconsolidated, poorly sorted fine-grained, glauconitic quartz sand, often silty or clayey, with a few horizons of cemented coarse-grained sand. Samples from the upper part of the Arnager Greensand were used for this study to estimate permeability from microscopic...... images. Backscattered Scanning Electron Microscope images from polished thin-sections were acquired for image analysis with the software PIPPIN(R). Differences in grey levels owing to density differences allowed us to estimate porosity, clay and particle content. The images were simplified into two...
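Estimating porosity as the area fraction of dark (pore) pixels in a grey-level-segmented BSE image can be sketched as follows; the grey levels and the threshold are hypothetical, not values from the study.

```python
import numpy as np

# synthetic backscattered-electron image: dark pores (low grey level)
# in a brighter mineral matrix -- all values are invented
rng = np.random.default_rng(4)
img = rng.normal(180.0, 10.0, size=(64, 64))        # grain grey levels
pore_mask_true = rng.random((64, 64)) < 0.25         # ~25% pore pixels
img[pore_mask_true] = rng.normal(40.0, 10.0, np.count_nonzero(pore_mask_true))

# segment by grey level: pores fall below the threshold, grains above
threshold = 110.0
porosity = float(np.mean(img < threshold))           # pore area fraction
```

The same thresholded mask can be reused to estimate clay and particle fractions with additional grey-level classes, as the abstract describes, and fed into empirical porosity-permeability relations.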

  2. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  3. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique...... for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined, to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat induced changes...... and defect detection in food. Compared to the complex three dimensional analysis of microstructure, here two dimensional images are considered, making the method applicable for an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign...

  4. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in the analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. The processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat

  5. Introduction of High Throughput Magnetic Resonance T2-Weighted Image Texture Analysis for WHO Grade 2 and 3 Gliomas.

    Science.gov (United States)

    Kinoshita, Manabu; Sakai, Mio; Arita, Hideyuki; Shofuda, Tomoko; Chiba, Yasuyoshi; Kagawa, Naoki; Watanabe, Yoshiyuki; Hashimoto, Naoya; Fujimoto, Yasunori; Yoshimine, Toshiki; Nakanishi, Katsuyuki; Kanemura, Yonehiro

    2016-01-01

    Reports have suggested that tumor textures presented on T2-weighted images correlate with the genetic status of glioma. Therefore, development of an image analysis framework capable of objective, high-throughput image texture analysis for large-scale image data collection is needed. The current study aimed to address the development of such a framework by introducing two novel parameters for image textures on T2-weighted images, i.e., Shannon entropy and Prewitt filtering. Twenty-two WHO grade 2 and 28 WHO grade 3 glioma patients were collected whose pre-surgical MRI and IDH1 mutation status were available. Heterogeneous lesions showed statistically higher Shannon entropy than homogeneous lesions (p = 0.006), and ROC curve analysis proved that Shannon entropy on T2WI was a reliable indicator for discrimination of homogeneous and heterogeneous lesions (p = 0.015, AUC = 0.73). Lesions with well-defined borders exhibited statistically higher Edge mean and Edge median values using Prewitt filtering than those with vague lesion borders (p = 0.0003 and p = 0.0005, respectively). ROC curve analysis also proved that both Edge mean and median values were promising indicators for discrimination of lesions with vague and well-defined borders, and both performed in a comparable manner (p = 0.0002, AUC = 0.81). The current study introduced two image metrics that reflect lesion texture described on T2WI. These two metrics were validated by readings of a neuro-radiologist who was blinded to the results. This observation will facilitate further use of this technique in future large-scale image analysis of glioma.
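Both metrics are easy to state concretely. This sketch uses a histogram-based Shannon entropy and scipy's Prewitt filter; the bin count and the synthetic test images are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from scipy.ndimage import prewitt

def shannon_entropy(img, nbins=64):
    """Histogram-based Shannon entropy of grey levels (bits): higher for
    heterogeneous lesions, lower for homogeneous ones."""
    hist, _ = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def edge_stats(img):
    """Mean and median of the Prewitt gradient magnitude; both rise for
    lesions with sharp, well-defined borders."""
    img = img.astype(float)
    mag = np.hypot(prewitt(img, axis=0), prewitt(img, axis=1))
    return float(mag.mean()), float(np.median(mag))

# synthetic stand-ins for homogeneous vs heterogeneous lesion regions
rng = np.random.default_rng(5)
homogeneous = np.full((32, 32), 100.0) + rng.normal(0.0, 1.0, (32, 32))
heterogeneous = rng.uniform(0.0, 255.0, (32, 32))
```

In the study these scalars are computed over the segmented lesion only, then thresholded via ROC analysis against the radiologist's homogeneous/heterogeneous and vague/well-defined border readings.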

  6. Enriching Discovery Layers: A Product Comparison of Content Enrichment Services Syndetic Solutions and Content Café 2

    Directory of Open Access Journals (Sweden)

    Allison DaSilva

    2014-10-01

    Full Text Available A comparative analysis of the content enrichment services Syndetic Solutions and Content Café 2 was undertaken to explore which service would provide public library users with a superior online search and discovery experience through the enriched data elements offered, specifically looking at the cover image data element and exploring what factors impact its display. A data set of 250 items in five different formats, including books, CDs, DVDs, e-books, and video games, was searched in four North American public libraries’ discovery layers to compare the integration, extent, and quality of the cover image data element supplied by Syndetic Solutions and Content Café 2. Based on an analysis of the URLs, ISBNs, and UPCs for each of the 250 items, it was determined that the integration, and therefore the display, of the cover image data element was impacted by: (1) whether or not an ISBN or UPC was listed in the MARC bibliographic record; (2) which ISBN or UPC was listed, as items could potentially have more than one; (3) the inclusion of both ISBNs and UPCs in the record and the settings of the discovery tool; (4) the order in which the ISBNs or UPCs were listed in the record; and (5) whether or not Syndetic Solutions or Content Café 2 had the image data in its database at the time of the search. The quality of the cover image displayed was found to be impacted by the size requested by the library and the size of the image provided by the publisher. These findings may also have implications for the integration of other enriched data elements.

  7. Methodological challenges in qualitative content analysis: A discussion paper.

    Science.gov (United States)

    Graneheim, Ulla H; Lindgren, Britt-Marie; Lundman, Berit

    2017-09-01

    This discussion paper aims to map content analysis in the qualitative paradigm and to explore common methodological challenges. We discuss phenomenological descriptions of manifest content and hermeneutical interpretations of latent content. We demonstrate inductive, deductive, and abductive approaches to qualitative content analysis, and elaborate on the level of abstraction and degree of interpretation used in constructing categories, descriptive themes, and themes of meaning. With increased abstraction and interpretation comes an increased challenge to demonstrate the credibility and authenticity of the analysis. A key issue is to show the logic by which categories and themes are abstracted, interpreted, and connected to the aim and to each other. Qualitative content analysis is an autonomous method and can be used at varying levels of abstraction and interpretation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Characterizing the DNA Damage Response by Cell Tracking Algorithms and Cell Features Classification Using High-Content Time-Lapse Analysis.

    Directory of Open Access Journals (Sweden)

    Walter Georgescu

    Full Text Available Traditionally, the kinetics of DNA repair have been estimated using immunocytochemistry, by labeling proteins involved in the DNA damage response (DDR) with fluorescent markers in a fixed-cell assay. However, detailed knowledge of DDR dynamics across multiple cell generations cannot be obtained from a limited number of fixed-cell time points. Here we report on the dynamics of 53BP1 radiation-induced foci (RIF) across multiple cell generations using live-cell imaging of non-malignant human mammary epithelial cells (MCF10A) expressing histone H2B-GFP and the DNA repair protein 53BP1-mCherry. Using automatic extraction of RIF imaging features and linear programming techniques, we were able to characterize detailed RIF kinetics for 24 hours before and 24 hours after exposure to low and high doses of ionizing radiation. High-content analysis at the single-cell level over hundreds of cells allows us to quantify precisely the dose dependence of 53BP1 protein production, RIF nuclear localization and RIF movement after exposure to X-rays. Using elastic registration techniques based on the nuclear pattern of individual cells, we could describe the motion of individual RIF precisely within the nucleus. We show that DNA repair occurs in a limited number of large domains, within which multiple small RIF form, merge and/or resolve, with random motion following a normal diffusion law. Large focus formation is shown to occur mainly through the merging of smaller RIF rather than through growth of an individual focus. We estimate repair domain sizes of 7.5 to 11 µm² with a maximum number of ~15 domains per MCF10A cell. This work also highlights DDR characteristics that are specific to doses larger than 1 Gy, such as a rapid 53BP1 protein increase in the nucleus and foci diffusion rates that are significantly faster than for spontaneous foci movement. We hypothesize that RIF merging reflects a "stressed" DNA repair process that has been taken outside physiological conditions when
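    The claim that RIF movement follows a normal diffusion law is typically checked with a mean-squared-displacement (MSD) analysis: for free 2-D diffusion, MSD(τ) = 4Dτ. The sketch below simulates a Brownian focus track and recovers the diffusion coefficient from the MSD slope; the track, time step, and D value are illustrative assumptions, not data from the study.

```python
import numpy as np

def mean_squared_displacement(track, max_lag):
    """MSD(tau) for a 2-D trajectory, averaged over all start times."""
    msd = []
    for lag in range(1, max_lag + 1):
        d = track[lag:] - track[:-lag]
        msd.append(np.mean(np.sum(d ** 2, axis=1)))
    return np.array(msd)

# Simulated Brownian focus: per-axis step variance is 2 * D * dt
rng = np.random.default_rng(1)
D_true, dt, n_steps = 0.01, 1.0, 5000          # um^2/s, s, steps
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_steps, 2))
track = np.cumsum(steps, axis=0)

lags = np.arange(1, 21) * dt
msd = mean_squared_displacement(track, 20)
D_est = np.polyfit(lags, msd, 1)[0] / 4.0      # slope of MSD = 4 * D * tau
```

    A linear MSD (constant D) indicates normal diffusion; sub- or super-linear growth would instead point to confined or directed motion.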

  9. Networked Content Analysis: The case of climate change

    NARCIS (Netherlands)

    Niederer, S.M.C.

    2016-01-01

    Content Analysis has been developed within communication science as a technique to analyze bodies of text for features or (recurring) themes, in order to identify cultural indicators, societal trends and issues. And while Content Analysis has seen a tremendous uptake across scientific disciplines,

  10. Enhancing protein to extremely high content in photosynthetic bacteria during biogas slurry treatment.

    Science.gov (United States)

    Yang, Anqi; Zhang, Guangming; Meng, Fan; Lu, Pei; Wang, Xintian; Peng, Meng

    2017-12-01

    This work proposed a novel approach to achieving an extremely high protein content in photosynthetic bacteria (PSB) using biogas slurry as the culturing medium. The results showed that the protein content of PSB could be raised to 90% in biogas slurry, much higher than previously reported microbial protein contents. The slurry was partially purified at the same time. The dark-aerobic condition was more beneficial for protein accumulation than the light-anaerobic condition. The high salinity and high ammonia content of the biogas slurry were the main causes of the protein enhancement. In addition, the biogas slurry provided a good buffer system for PSB growth. The biosynthesis mechanism of protein in PSB was explored through theoretical analysis. During biogas slurry treatment, the activities of glutamate synthase and glutamine synthetase increased by 26.55% and 46.95%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Vanadium contents in Kazakhstan fossils hydrocarbons by data of nuclear-physical analysis methods

    International Nuclear Information System (INIS)

    Nadirov, N.K.; Solodukhin, V.P.

    1998-01-01

    The possibilities of nuclear-physical methods for determining vanadium in fossil organic materials, and the application of these methods to scientific and practical tasks, are presented. Vanadium contents are studied in high-viscosity petroleums and petroleum-bituminous rock from different deposits of Western Kazakhstan and in the carbonaceous shales of Dzhangariya. The data presented show that fossil organic deposits across Kazakhstan are of industrial interest because of their high vanadium concentrations.

  12. Using image analysis to monitor biological changes in consume fish

    DEFF Research Database (Denmark)

    Dissing, Bjørn Skovlund; Frosch, Stina; Nielsen, Michael Engelbrecht

    2011-01-01

    The quality of fish products is largely defined by the visual appearance of the products. Visual appearance includes measurable parameters such as color and texture. Fat content and distribution, as well as deposition of carotenoid pigments such as astaxanthin in muscular and fat tissue, are biological parameters with a huge impact on the color and texture of the fish muscle. Consumer-driven quality demands call for rapid methods for quantification of quality parameters such as fat and astaxanthin in the industry. The spectral electromagnetic reflection properties of astaxanthin are well known... ...fishes is based on highly laborious chemical analysis. Trichromatic digital imaging and point-wise colorimetric or spectral measurement are also ways of estimating either the redness or the actual astaxanthin concentration of the fillet. These methods all have drawbacks of either cumbersome testing...

  13. Analysis of the thematic content of review Nucleus

    International Nuclear Information System (INIS)

    Guerra Valdes, Ramiro

    2007-01-01

    A computer programme under development at Cubaenergia for performing standardized analysis of the research areas and key concepts of nuclear science and technology is presented. The main components of the information processing system are described, as well as the computational methods and modules for thematic content analysis of INIS database record files. Results of the thematic content analysis of the review Nucleus from 1986 to 2005 are shown, together with results of the demonstrative study Nucleus, Science, Technology and Society. The results provide new elements for assessing the significance of the thematic content of the review Nucleus in the context of innovation in interrelated multidisciplinary research areas.

  14. Evaluation of fat grains in gothaj sausage using image analysis

    Directory of Open Access Journals (Sweden)

    Ludmila Luňáková

    2016-12-01

    Full Text Available Fat is an irreplaceable ingredient in the production of sausages, and it determines the appearance of the resulting cut to a significant extent. When shopping, consumers choose a traditional product mostly according to its appearance, based on what they are used to. Chemical analysis can determine the total fat content of the product, but it cannot accurately describe the shape and size of the fat grains the consumer sees when looking at the product. The size of fat grains considered acceptable by consumers can be determined using sensory analysis or image analysis. In recent years, image analysis has become widely used for examining meat and meat products. Compared to the human eye, image analysis using a computer system is highly effective, since a correctly configured computer program can evaluate results with a lower error rate. The most commonly monitored parameter in meat products is the aforementioned fat. The fat is located in the cut surface of the product in the form of dispersed particles, which can be fairly reliably identified based on color differences between the individual parts of the product matrix. The size of the fat grains depends on the input raw material used as well as on the production technology. The present article describes the application of image analysis to evaluating fat grains in the cut appearance of the Gothaj sausage, whose sensory requirements are set by Czech legislation, namely by Decree No. 326/2001 Coll., as amended. The paper evaluates the size of fat mosaic grains in Gothaj sausages from different manufacturers. Fat grains were divided into ten size classes according to size limits of 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5, 5.0, 8.0 and over 8 mm. The upper limit of 8 mm in diameter was chosen based on the limit for the size of individual fat grains set by the legislation. This upper limit was not exceeded by any of the products. On the other hand, the mosaic had the
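    The ten-class grain sizing described above amounts to labeling fat grains in a segmented (binary) image of the cut and binning each grain's equivalent circular diameter. The sketch below assumes a pre-thresholded image and a known pixel scale; the function name and the synthetic image are illustrative, not from the paper.

```python
import numpy as np
from scipy import ndimage

# Size-class limits in mm, as in the study; the last class is "over 8 mm"
LIMITS = [0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5, 5.0, 8.0]

def fat_grain_histogram(binary, mm_per_px):
    """Label fat grains in a binary cut-surface image and bin their
    equivalent circular diameters into the ten size classes."""
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    diam_mm = 2.0 * np.sqrt(areas / np.pi) * mm_per_px
    counts = np.zeros(len(LIMITS) + 1, dtype=int)
    for d in diam_mm:
        counts[np.searchsorted(LIMITS, d)] += 1
    return counts

# Synthetic cut surface at 0.1 mm/px: one small and one large grain
img = np.zeros((200, 200), dtype=bool)
img[10:14, 10:14] = True       # 16 px   -> ~0.45 mm diameter
img[50:130, 50:130] = True     # 6400 px -> ~9.0 mm diameter
counts = fat_grain_histogram(img, mm_per_px=0.1)
```

    Here the small grain falls in the 0.25-0.5 mm class and the large one in the over-8 mm class, which the legislative limit would flag.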

  15. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormality detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided diagnosis; shape-based medical navigation; and benchmarking and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  16. [Evaluation of dental plaque by quantitative digital image analysis system].

    Science.gov (United States)

    Huang, Z; Luan, Q X

    2016-04-18

    To analyze plaque staining images using image analysis software, to verify the maneuverability, practicability and repeatability of this technique, and to evaluate the influence of different plaque stains. In the study, 30 volunteers were enrolled from the new dental students of Peking University Health Science Center in accordance with the inclusion criteria. Digital images of the anterior teeth were acquired after plaque staining, following a standardized photography protocol. The image analysis was performed using Image Pro Plus 7.0, and the Quigley-Hein plaque indexes of the anterior teeth were evaluated. The plaque stain area percentage and the corresponding dental plaque index were highly correlated, with a Spearman correlation coefficient of 0.776. The agreement chart showed only a few points outside the 95% consistency boundaries. The image analysis results for the different plaque stains showed that the difference in tooth area measurements was not significant, while the difference in plaque area measurements was significant (P<0.01). This method is easy to operate and control, is highly correlated with both the calculated plaque area percentage and the traditional plaque index, and has good reproducibility. The choice of plaque staining method has little effect on image segmentation results. A plaque stain that is sensitive for image analysis is recommended.
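    Once the tooth and stained-plaque regions have been segmented, the metric correlated with the Quigley-Hein index is simply the plaque area as a percentage of tooth area. A minimal sketch, assuming binary masks from any segmentation tool (the masks below are synthetic, not study data):

```python
import numpy as np

def plaque_percentage(tooth_mask, plaque_mask):
    """Stained-plaque area as a percentage of total tooth area."""
    tooth_px = np.count_nonzero(tooth_mask)
    plaque_px = np.count_nonzero(plaque_mask & tooth_mask)
    return 100.0 * plaque_px / tooth_px

# Synthetic masks: a 60x60 px tooth crown with a 60x15 px stained band
tooth = np.zeros((100, 100), dtype=bool)
tooth[20:80, 20:80] = True
plaque = np.zeros_like(tooth)
plaque[20:80, 20:35] = True
pct = plaque_percentage(tooth, plaque)   # -> 25.0
```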

  17. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of

  18. Combining semantic technologies with a content-based image retrieval system - Preliminary considerations

    Science.gov (United States)

    Chmiel, P.; Ganzha, M.; Jaworska, T.; Paprzycki, M.

    2017-10-01

    Nowadays, as part of the systematic growth in the volume and variety of information that can be found on the Internet, we also observe a dramatic increase in the size of available image collections. There are many ways to help users browse and select images of interest. One popular approach is Content-Based Image Retrieval (CBIR) systems, which allow users to search for images that match their interests, expressed in the form of images (query by example). However, we believe that image search and retrieval could take advantage of semantic technologies, and we have decided to test this hypothesis. Specifically, on the basis of knowledge captured in the CBIR, we have developed a domain ontology of residential real estate (detached houses, in particular). This allows us to semantically represent each image (and its constitutive architectural elements) within the CBIR. The proposed ontology was extended to capture not only the elements resulting from image segmentation, but also "spatial relations" between them. As a result, a new approach to querying the image database (semantic querying) has materialized, extending the capabilities of the developed system.

  19. Analysis of Facebook content demand patterns

    OpenAIRE

    Kihl, Maria; Larsson, Robin; Unnervik, Niclas; Haberkamm, Jolina; Arvidsson, Åke; Aurelius, Andreas

    2014-01-01

    Data volumes in communication networks are increasing rapidly. Furthermore, usage of social network applications is very widespread among users, and among these applications Facebook is the most popular. In this paper, we analyse user demand patterns and content popularity in Facebook-generated traffic. The data come from residential users in two metropolitan access networks in Sweden, and we analyse more than 17 million images downloaded by almost 16,000 Facebook users. We show that the distributi...

  20. High-Resolution 3 T MR Microscopy Imaging of Arterial Walls

    International Nuclear Information System (INIS)

    Sailer, Johannes; Rand, Thomas; Berg, Andreas; Sulzbacher, Irene; Peloschek, P.; Hoelzenbein, Thomas; Lammer, Johannes

    2006-01-01

    Purpose. To achieve a high spatial resolution in MR imaging that allows for clear visualization of anatomy and even histology and documentation of plaque morphology in in vitro samples from patients with advanced atherosclerosis. A further objective of our study was to evaluate whether T2-weighted high-resolution MR imaging can provide accurate classification of atherosclerotic plaque according to a modified American Heart Association classification. Methods. T2-weighted images of arteries were obtained in 13 in vitro specimens using a 3 T MR unit (Medspec 300 Avance/Bruker, Ettlingen, Germany) combined with a dedicated MR microscopy system. Measurement parameters were: T2-weighted sequences with TR 3.5 sec, TE 15-120 msec; field of view (FOV) 1.4 x 1.4; NEX 8; matrix 192; and slice thickness 600 μm. MR measurements were compared with corresponding histologic sections. Results. We achieved excellent spatial and contrast resolution in all specimens. We found high agreement between MR images and histology with regard to the morphology and extent of intimal proliferations in all but 2 specimens. We could differentiate fibrous caps and calcifications from lipid plaque components based on differences in signal intensity in order to differentiate hard and soft atheromatous plaques. Hard plaques with predominantly intimal calcifications were found in 7 specimens, and soft plaques with a cholesterol/lipid content in 5 cases. In all specimens, hemorrhage or thrombus formation, and fibrotic and hyalinized tissue could be detected on both MR imaging and histopathology. Conclusion. High-resolution, high-field MR imaging of arterial walls demonstrates the morphologic features, volume, and extent of intimal proliferations with high spatial and contrast resolution in in vitro specimens and can differentiate hard and soft plaques
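    The multi-echo protocol above (TR 3.5 s, TE 15-120 ms) supports voxel-wise T2 estimation by fitting the mono-exponential decay S(TE) = S0 · exp(-TE / T2), which is what makes T2-based tissue contrast quantitative. A minimal log-linear fit, using illustrative noiseless signal values rather than data from the study:

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2),
    via linear regression on log(signal). Returns (T2 in ms, S0)."""
    slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
    return -1.0 / slope, float(np.exp(intercept))

te = np.array([15.0, 30.0, 60.0, 90.0, 120.0])   # echo times, ms
s0_true, t2_true = 1000.0, 50.0
signal = s0_true * np.exp(-te / t2_true)         # ideal decay curve
t2_est, s0_est = fit_t2(te, signal)
```

    With noisy data, a non-linear least-squares fit weighted by signal magnitude is usually preferred over this log-linear shortcut.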

  1. Analysis of antibody aggregate content at extremely high concentrations using sedimentation velocity with a novel interference optics.

    Science.gov (United States)

    Schilling, Kristian; Krause, Frank

    2015-01-01

    Monoclonal antibodies represent the most important group of protein-based biopharmaceuticals. During formulation, manufacturing, or storage, antibodies may suffer post-translational modifications that alter their physical and chemical properties. Such induced conformational changes may lead to the formation of aggregates, which can not only reduce their efficacy but also be immunogenic. Therefore, it is essential to monitor the amount of size variants to ensure the consistency and quality of pharmaceutical antibodies. In many cases, antibodies are formulated at very high concentrations (> 50 g/L), mostly along with high amounts of sugar-based excipients. As a consequence, routine aggregation analysis methods such as size-exclusion chromatography cannot monitor the size distribution under the original conditions, but only after dilution, and usually under completely different solvent conditions. In contrast, sedimentation velocity (SV) allows samples to be analyzed directly in the product formulation, with limited sample-matrix interactions and minimal dilution. One prerequisite for the analysis of highly concentrated samples is the detection of steep concentration gradients with sufficient resolution: commercially available ultracentrifuges are not able to resolve such steep interference profiles. With the development of our Advanced Interference Detection Array (AIDA), it has become possible to register interferograms of solutions as highly concentrated as 150 g/L. The other major difficulty encountered at high protein concentrations is the pronounced non-ideal sedimentation behavior resulting from repulsive intermolecular interactions, for which comprehensive theoretical modelling has not yet been achieved. Here, we report the first SV analysis of highly concentrated antibodies, up to 147 g/L, employing the unique AIDA ultracentrifuge. By developing a consistent experimental design and data-fitting approach, we were able to provide a reliable estimation of the minimum

  2. How Content Analysis may Complement and Extend the Insights of Discourse Analysis

    Directory of Open Access Journals (Sweden)

    Tracey Feltham-King

    2016-02-01

    Full Text Available Although discourse analysis is a well-established qualitative research methodology, little attention has been paid to how discourse analysis may be enhanced through careful supplementation with the quantification allowed by content analysis. In this article, we report on a research study that combined Foucauldian discourse analysis (FDA) with directed content analysis based on social constructionist theory and our qualitative research findings. The research focused on the discourses deployed, and the ways in which women were discursively positioned, in relation to abortion in 300 newspaper articles published in 25 national and regional South African newspapers over 28 years, from 1978 to 2005. While the FDA was able to illuminate the constitutive network of power relations constructing women as subjects of a particular kind, questions emerged that were beyond its scope. These questions concerned understanding the relative weightings of various discourses and tracing historical changes in their deployment. In this article, we show how the decision to combine FDA and content analysis affected our sampling methodology. Using specific examples, we illustrate the contribution of the FDA to the study. Then, we indicate how subject positioning formed the link between the FDA and the content analysis. Drawing on the same examples, we demonstrate how the content analysis supplemented the FDA by tracking changes over time and providing empirical evidence of the extent to which subject positionings were deployed.
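    The quantitative supplement described here, tracking how often each discourse is deployed before and after a given point in time, reduces to simple tabulation once articles are coded. A minimal sketch with hypothetical coded records (the years, discourse labels, and breakpoint are invented for illustration, not taken from the study):

```python
from collections import Counter

# Hypothetical coded articles: (publication year, discourse deployed)
coded = [
    (1978, "medical"), (1980, "medical"), (1985, "moral"),
    (1996, "rights"), (1999, "rights"), (2004, "rights"),
]

def frequencies_by_period(records, breakpoint_year):
    """Tally discourse counts before and after a chosen breakpoint year."""
    before = Counter(d for y, d in records if y < breakpoint_year)
    after = Counter(d for y, d in records if y >= breakpoint_year)
    return before, after

before, after = frequencies_by_period(coded, 1996)
```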

  3. Automated X-ray image analysis for cargo security: Critical review and future promise.

    Science.gov (United States)

    Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.

  4. Breast cancer histopathology image analysis: a review.

    Science.gov (United States)

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim to replace the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have huge potential to reduce the workload in a typical pathology lab and to improve the quality of interpretation. This paper is meant as an introduction for non-experts. It starts with an overview of the tissue preparation, staining and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients.

  5. Quantification and characterization of zirconium hydrides in Zircaloy-4 by the image analysis method

    International Nuclear Information System (INIS)

    Zhang, J.H.; Groos, M.; Bredel, T.; Trotabas, M.; Combette, P.

    1992-01-01

    The image analysis method is used to determine the hydrogen content in specimens of Zircaloy-4. Two parameters, the surface density of hydride, Sv, and the degree of orientation, Ω, are defined to represent separately the hydrogen content and the orientation of the hydrides. By analysing stress-relieved Zircaloy-4 specimens with known hydrogen contents from 100 to 1000 ppm, a relationship is established between the parameter Sv and the hydrogen content at optical microscope magnifications of 1000 and 250. The degree of orientation of the hydrides in stress-relieved Zircaloy-4 cladding is about 0.3. (orig.)
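    Both parameters can be estimated from a thresholded micrograph: Sv from the hydride pixel fraction, and Ω from the fraction of hydride length whose major axis lies near the radial direction. The definitions below are a simplified, length-weighted, principal-axis-based sketch, not the paper's exact formulation, and the test image is synthetic.

```python
import numpy as np
from scipy import ndimage

def hydride_parameters(binary, tol_deg=40.0):
    """Simplified Sv (hydride area fraction) and Omega (length-weighted
    fraction of hydrides oriented within tol_deg of the radial/row axis)."""
    labels, n = ndimage.label(binary)
    sv = np.count_nonzero(binary) / binary.size
    radial_len = total_len = 0.0
    for i in range(1, n + 1):
        coords = np.column_stack(np.nonzero(labels == i)).astype(float)
        coords -= coords.mean(axis=0)
        _, _, vt = np.linalg.svd(coords, full_matrices=False)
        major = vt[0]                                   # principal axis of the pixel cloud
        angle = np.degrees(np.arctan2(abs(major[1]), abs(major[0])))
        length = float(coords.shape[0])
        total_len += length
        if angle <= tol_deg:                            # near-radial hydride
            radial_len += length
    return sv, radial_len / total_len

# Synthetic micrograph: one radial and one circumferential hydride
img = np.zeros((100, 100), dtype=bool)
img[10:40, 5] = True      # radial (along rows), 30 px
img[60, 10:40] = True     # circumferential (along columns), 30 px
sv, omega = hydride_parameters(img)
```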

  6. Image analysis of multiple moving wood pieces in real time

    Science.gov (United States)

    Wang, Weixing

    2006-02-01

    This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for automatic detection of wood pieces on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace the contours of the objects in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, consisting mainly of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.
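    The delineation-then-measurement pipeline described above can be sketched as connected-component labeling followed by per-piece shape descriptors. The descriptor set below (area, bounding box, elongation) is an illustrative assumption rather than the paper's actual parameter list, and the conveyor frame is synthetic.

```python
import numpy as np
from scipy import ndimage

def delineate_wood_pieces(binary):
    """Label wood pieces in a segmented frame and return simple
    visual parameters (area, bounding box, elongation) per piece."""
    labels, n = ndimage.label(binary)
    pieces = []
    for sl in ndimage.find_objects(labels):
        region = binary[sl]
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        pieces.append({
            "area_px": int(np.count_nonzero(region)),
            "bbox_hw": (h, w),
            "elongation": max(h, w) / min(h, w),
        })
    return pieces

# Synthetic conveyor frame: one long thin piece, one blocky piece
frame = np.zeros((120, 200), dtype=bool)
frame[10:20, 10:110] = True     # 10 x 100 px
frame[60:100, 140:180] = True   # 40 x 40 px
pieces = delineate_wood_pieces(frame)
```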

  7. The effect of manganese content on mechanical properties of high titanium microalloyed steels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaolin, E-mail: lixiaolinwork@163.com [Shougang Research Institute of Technology, Beijing 100041 (China); Li, Fei; Cui, Yang; Xiao, Baoliang [Shougang Research Institute of Technology, Beijing 100041 (China); Wang, Xuemin [School of Materials Science and Engineering, University of Science and Technology Beijing, Beijing 100083 (China)

    2016-11-20

    In this work, in order to achieve an optimum combination of high strength, ductility and toughness in high-Ti microalloyed steel, extensive research efforts were devoted to studying the effects of soaking temperature and of manganese and sulfur content on the properties of titanium steels. Precipitation hardening of the Ti-bearing steels was found to vary with soaking temperature. Higher strength was achieved at higher soaking temperatures due to dissolution of more TiC, Ti₄S₂C₂ and a little TiN, which leads to re-precipitation of fine carbides with a greater volume fraction. The results of transmission electron microscope (TEM) analysis indicate that there were more, and finer, TiC precipitates coherent or semi-coherent with the ferrite matrix in the high-manganese steel than in the low-manganese steel. The marked improvement in strength is also associated with low sulfur content. TiC particles smaller than 20 nm in the 8Ti-8Mn steel help enhance strength by more than 302 MPa compared with the 8Mn steel.

  8. Application of Image Texture Analysis for Evaluation of X-Ray Images of Fungal-Infected Maize Kernels

    DEFF Research Database (Denmark)

    Orina, Irene; Manley, Marena; Kucheryavskiy, Sergey V.

    2018-01-01

    The feasibility of image texture analysis for evaluating X-ray images of fungal-infected maize kernels was investigated. X-ray images of maize kernels infected with Fusarium verticillioides and of control kernels were acquired using high-resolution X-ray micro-computed tomography. After image acquisition, ... image texture features (including homogeneity and contrast) were extracted from the side, front and top views of each kernel and used as inputs for principal component analysis (PCA). The first-order statistical image features gave a better separation of the control from infected kernels on day 8 post-inoculation. Classification models were developed using partial least squares discriminant analysis (PLS-DA), and accuracies of 67 and 73% were achieved using first-order statistical features and GLCM-extracted features, respectively. This work provides information on the possible application of image texture as a method for analysing X-ray images...

  9. High-resolution three-dimensional imaging and analysis of rock falls in Yosemite valley, California

    Science.gov (United States)

    Stock, Gregory M.; Bawden, G.W.; Green, J.K.; Hanson, E.; Downing, G.; Collins, B.D.; Bond, S.; Leslar, M.

    2011-01-01

    We present quantitative analyses of recent large rock falls in Yosemite Valley, California, using integrated high-resolution imaging techniques. Rock falls commonly occur from the glacially sculpted granitic walls of Yosemite Valley, modifying this iconic landscape but also posing significant potential hazards and risks. Two large rock falls occurred from the cliff beneath Glacier Point in eastern Yosemite Valley on 7 and 8 October 2008, causing minor injuries and damaging structures in a developed area. We used a combination of gigapixel photography, airborne laser scanning (ALS) data, and ground-based terrestrial laser scanning (TLS) data to characterize the rock-fall detachment surface and adjacent cliff area, quantify the rock-fall volume, evaluate the geologic structure that contributed to failure, and assess the likely failure mode. We merged the ALS and TLS data to resolve the complex, vertical to overhanging topography of the Glacier Point area in three dimensions, and integrated these data with gigapixel photographs to fully image the cliff face in high resolution. Three-dimensional analysis of repeat TLS data reveals that the cumulative failure consisted of a near-planar rock slab with a maximum length of 69.0 m, a mean thickness of 2.1 m, a detachment surface area of 2750 m², and a volume of 5663 ± 36 m³. Failure occurred along a surface-parallel, vertically oriented sheeting joint in a clear example of granitic exfoliation. Stress concentration at crack tips likely propagated fractures through the partially attached slab, leading to failure. Our results demonstrate the utility of high-resolution imaging techniques for quantifying far-range (>1 km) rock falls occurring from the largely inaccessible, vertical rock faces of Yosemite Valley, and for providing the highly accurate and precise data needed for rock-fall hazard assessment. © 2011 Geological Society of America.
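    Volume estimation from repeat TLS reduces, once the point clouds are co-registered and gridded, to differencing the before and after surface models and summing the loss over the cell footprints. A minimal sketch on a synthetic 1 m grid (the slab geometry below loosely mirrors the reported 2750 m² × 2.1 m failure, but is invented for illustration):

```python
import numpy as np

def rockfall_volume(dem_before, dem_after, cell_area_m2):
    """Volume lost between two co-registered surface models (m^3):
    positive elevation change summed over the cell footprints."""
    dz = dem_before - dem_after
    return float(np.sum(dz[dz > 0]) * cell_area_m2)

# Synthetic 1 m grid: a 50 m x 55 m slab, 2.1 m thick, detaches
before = np.zeros((100, 100))
after = before.copy()
after[20:70, 10:65] -= 2.1
volume = rockfall_volume(before, after, cell_area_m2=1.0)   # ~5775 m^3
```

    For vertical to overhanging cliffs like Glacier Point, the differencing is done along the local surface normal rather than along the vertical axis used in this simplified grid.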

  10. Electromagnetic characterization of white spruce at different moisture contents using synthetic aperture radar imaging

    Science.gov (United States)

    Ingemi, Christopher M.; Owusu Twumasi, Jones; Yu, Tzuyang

    2018-03-01

Detection and quantification of moisture content inside wood (timber) is key to ensuring the safety and reliability of timber structures. Moisture inside wood attracts insects and fosters the growth of fungi that attack the timber, causing significant damage and reducing load-bearing capacity over the structure's design life. Non-destructive evaluation (NDE) techniques (e.g., microwave/radar, ultrasonic, stress wave, and X-ray) are well suited to condition assessment of timber structures because they provide information about the level of deterioration and the material properties without obstructing functionality. In this study, the microwave/radar NDE technique was selected for the characterization of wood at different moisture contents. A 12 in. × 3.5 in. × 1.5 in. white spruce specimen (Picea glauca) was imaged at different moisture contents using a 10 GHz synthetic aperture radar (SAR) sensor inside an anechoic chamber. The presence of moisture was found to increase the SAR image amplitude, as expected. Additionally, integrated SAR amplitude was found beneficial in modeling the moisture content inside the wood specimen.

  11. Subnuclear foci quantification using high-throughput 3D image cytometry

    Science.gov (United States)

    Wadduwage, Dushan N.; Parrish, Marcus; Choi, Heejin; Engelward, Bevin P.; Matsudaira, Paul; So, Peter T. C.

    2015-07-01

Ionising radiation causes various types of DNA damage, including double strand breaks (DSBs). DSBs are often recognized by the DNA repair protein ATM, leading to the formation of gamma-H2AX foci at the sites of the DSBs that can be visualized using immunohistochemistry. However, most such experiments are of low throughput in terms of imaging and image analysis techniques. Most studies still use manual counting or classification, and are hence limited to counting a low number of foci per cell (about 5 foci per nucleus), as the quantification process is extremely labour intensive. Therefore, we have developed a high-throughput instrumentation and computational pipeline specialized for gamma-H2AX foci quantification. A population of cells with highly clustered foci inside nuclei was imaged, in 3D with submicron resolution, using an in-house developed high-throughput image cytometer. Imaging speeds as high as 800 cells/second in 3D were achieved by using HiLo wide-field depth-resolved imaging and a remote z-scanning technique. The number of foci per cell nucleus was then quantified using a 3D extended-maxima-transform-based algorithm. Our results suggest that while most other 2D imaging and manual quantification studies can count only up to about 5 foci per nucleus, our method is capable of counting more than 100. Moreover, we show that 3D analysis is significantly superior to 2D techniques.
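The counting step — keep regional maxima that stand at least h above their surroundings, then label and count the resulting components — can be sketched with off-the-shelf tools. This is a generic extended-maxima illustration on a synthetic 3D stack, not the authors' pipeline; the volume size, focus positions, and h value are all assumptions:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import h_maxima

# Synthetic 3D "nucleus" with three bright Gaussian foci on a dark
# background, standing in for a gamma-H2AX channel.
img = np.zeros((32, 64, 64))
zz, yy, xx = np.ogrid[:32, :64, :64]
for z, y, x in [(10, 20, 20), (15, 40, 30), (20, 25, 50)]:
    img += np.exp(-((zz - z)**2 + (yy - y)**2 + (xx - x)**2) / 8.0)

# Extended-maxima-style detection: keep maxima at least h above their
# surround, then label connected components and count them as foci.
maxima = h_maxima(img, h=0.3)
labels, n_foci = ndimage.label(maxima)
print(n_foci)  # 3
```

In a real pipeline the stack would first be segmented per nucleus so that foci counts can be reported per cell.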

  12. Phytochemical, Proximate and Metal Content Analysis of the Leaves ...

    African Journals Online (AJOL)

    Methods: The phytochemical analysis of Psidium guajava was carried out by using a standard procedure. Ash, fat, protein, carbohydrate and fibre contents were determined using proximate analysis while the metal contents were determined using Pearson's method. Results: The phytochemical analysis revealed the ...

  13. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are covered, and some guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of hyperspectral image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step-by-step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.

  14. High-speed imaging of explosive eruptions: applications and perspectives

    Science.gov (United States)

    Taddeucci, Jacopo; Scarlato, Piergiorgio; Gaudin, Damien; Capponi, Antonio; Alatorre-Ibarguengoitia, Miguel-Angel; Moroni, Monica

    2013-04-01

Explosive eruptions, being by definition highly dynamic over short time scales, necessarily call for observational systems capable of relatively high sampling rates. "Traditional" tools, such as seismic and acoustic networks, have recently been joined by Doppler radar and electric sensors. Recent developments in high-speed camera systems now allow direct visual information on eruptions to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Here we summarize the methods employed to gather and process high-speed videos of explosive eruptions, and provide an overview of the several applications of this new type of data in understanding different aspects of explosive volcanism. Our most recent set-up for high-speed imaging of explosive eruptions (FAMoUS - FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic to infrasonic sensors. All instruments are time-synchronized via a data logging system, a hand- or software-operated trigger, and via GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded at high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafjallajökull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclast ejection.
High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian

  15. Quantitative neuroanatomy of all Purkinje cells with light sheet microscopy and high-throughput image analysis

    Directory of Open Access Journals (Sweden)

Ludovico Silvestri

    2015-05-01

Characterizing the cytoarchitecture of the mammalian central nervous system on a brain-wide scale is becoming a compelling need in neuroscience. For example, realistic modeling of brain activity requires the definition of quantitative features of large neuronal populations in the whole brain. Quantitative anatomical maps will also be crucial to classify the cytoarchitectonic abnormalities associated with neuronal pathologies in a highly reproducible and reliable manner. In this paper, we apply recent advances in optical microscopy and image analysis to characterize the spatial distribution of Purkinje cells across the whole cerebellum. Light sheet microscopy was used to image with micron-scale resolution a fixed and cleared cerebellum of an L7-GFP transgenic mouse, in which all Purkinje cells are fluorescently labeled. A fast and scalable algorithm for fully automated cell identification was applied to the image to extract the positions of all the fluorescent Purkinje cells. This vectorized representation of the cell population allows a thorough characterization of the complex three-dimensional distribution of the neurons, highlighting the presence of gaps inside the lamellar organization of Purkinje cells, whose density is believed to play a significant role in autism spectrum disorders. Furthermore, clustering analysis of the localized somata permits dividing the whole cerebellum into groups of Purkinje cells with high spatial correlation, suggesting new possibilities of anatomical partition. The quantitative approach presented here can be extended to study the distribution of different types of cells in many brain regions and across the whole encephalon, providing a robust base for building realistic computational models of the brain, and for unbiased morphological tissue screening in the presence of pathologies and/or drug treatments.
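Clustering localized somata into spatially correlated groups, as described above, is a standard density-based clustering task once each cell is reduced to a 3D coordinate. An illustrative sketch — the coordinates, cluster layout, and DBSCAN parameters are synthetic assumptions, not the paper's data or algorithm:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic 3D soma coordinates (micrometre scale): two spatially
# separate groups standing in for localized Purkinje cell somata.
rng = np.random.default_rng(2)
group_a = rng.normal([100.0, 100.0, 50.0], 10.0, (200, 3))
group_b = rng.normal([400.0, 300.0, 60.0], 10.0, (200, 3))
coords = np.vstack([group_a, group_b])

# Density-based clustering groups somata with high spatial correlation;
# eps is the neighbourhood radius in the same units as the coordinates.
labels = DBSCAN(eps=15.0, min_samples=5).fit_predict(coords)
n_clusters = len(set(labels) - {-1})  # -1 marks noise points
print(n_clusters)  # 2
```

The vectorized cell positions make this cheap even for whole-cerebellum counts, since only coordinates (not image voxels) enter the clustering.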

  16. Profiling Chilean Suicide Note-Writers through Content Analysis

    Directory of Open Access Journals (Sweden)

    Francisco Ceballos-Espinoza

    2016-09-01

Suicides account for 2000 deaths in Chile each year. With a suicide rate of 11.3, it is classified as a country with high suicide risk. Aims: to identify personality and cognitive characteristics of the group of Chilean suicides that left suicide notes, through a content analysis. Methods: descriptive field study with an ex post facto design. All suicides registered between 2010 and 2012 by the Investigations Police of Chile were analyzed, obtaining 203 suicide notes from 96 cases. The Darbonne categories for content analysis were used with the inter-judge method. Results: The mean age of the suicides was 44.2 (SD = 18.53). Most of the notes were addressed to family members (51.7%). The most expressed reasons were marital- or interpersonal-related (24.6%); another 23.6% expressed a lack of purpose or hopelessness (including depression, wish to die, low self-esteem). The most frequent content expressed was instructions (about money, children, and funeral). All of the notes showed logical thinking and were written with coherence and clarity. Notably, 42% of the notes were marked by expressions of fondness, love or dependence on others. Regarding attitudes, the most common were of escape or farewell (42.4%), followed by fatalism, hopelessness, frustration or tiredness (40%). Twenty-four statistically significant differences were found across the categories of analysis, according to cohorts of age, marital status and sex. Conclusions: the findings contribute to the profiling of Chilean suicides and to the implementation of suicide prevention programs.

  17. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Full text: Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis and this paper presents a number of advanced techniques as well as demonstrates some of the advanced medical image analysis techniques they make possible. A number of both rigid and non-rigid medical image alignment algorithms of equivalent and merely consistent anatomical structures respectively are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the methods discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why
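As a concrete instance of the rigid alignment algorithms the paper surveys, a pure translation between two images can be recovered by phase correlation. A minimal sketch using scikit-image — the test image and shift are arbitrary, and this is one simple registration method, not any specific algorithm from the paper:

```python
import numpy as np
from skimage import data
from skimage.registration import phase_cross_correlation

# Create a "moving" image by circularly shifting a reference image
# by a known offset of (5, -3) pixels.
reference = data.camera().astype(float)
moving = np.roll(reference, shift=(5, -3), axis=(0, 1))

# Phase correlation returns the shift needed to register `moving`
# onto `reference` (the inverse of the applied roll).
shift, error, diffphase = phase_cross_correlation(reference, moving)

registered = np.roll(moving, shift=tuple(int(s) for s in shift), axis=(0, 1))
print(shift, np.allclose(registered, reference))
```

Medical registration problems add non-rigid deformation models and multimodal similarity metrics on top of this, but the recover-a-transform-then-resample structure is the same.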

  18. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvements in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.

  19. Characterization of SPAD Array for Multifocal High-Content Screening Applications

    Directory of Open Access Journals (Sweden)

    Anthony Tsikouras

    2016-10-01

Current instruments used to detect specific protein-protein interactions in live cells for applications in high-content screening (HCS) are limited by the time required to measure the lifetime. Here, a 32 × 1 single-photon avalanche diode (SPAD) array was explored as a detector for fluorescence lifetime imaging (FLIM) in HCS. Device parameters and characterization results were interpreted in the context of the application to determine whether the SPAD array could satisfy the requirements of HCS-FLIM. Fluorescence lifetime measurements were performed using a known fluorescence standard, and the recovered fluorescence lifetime matched literature-reported values. The design of a theoretical 32 × 32 SPAD array was also considered as a detector for a multi-point confocal scanning microscope.

  20. An efficient similarity measure for content based image retrieval using memetic algorithm

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2017-06-01

Content-based image retrieval (CBIR) systems work by retrieving images that are related to the query image (QI) from huge databases. Available CBIR systems extract limited feature sets, which confines their retrieval efficacy. In this work, an extensive set of robust and important features was extracted from the image database and stored in a feature repository. This feature set is composed of a color signature together with shape and color-texture features; features are extracted from the given QI in the same fashion. A novel similarity evaluation using a meta-heuristic algorithm called a memetic algorithm (genetic algorithm with great deluge) is then performed between the features of the QI and the features of the database images. The proposed CBIR system is assessed by querying a number of images (from the test dataset), and the efficiency of the system is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems with regard to precision.
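For contrast with the memetic similarity search described above (which is not reproduced here), a bare-bones CBIR loop with color-histogram features and histogram-intersection similarity might look like this; the images and "database" are synthetic stand-ins:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel intensity histogram of an HxWx3 uint8 image, L1-normalized."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical feature vectors."""
    return float(np.minimum(h1, h2).sum())

# Synthetic database of images with different overall brightness,
# plus a query identical to one of them.
rng = np.random.default_rng(3)
database = [np.clip(rng.normal(40 * i + 30, 25, (64, 64, 3)), 0, 255).astype(np.uint8)
            for i in range(5)]
query = database[2].copy()

scores = [similarity(color_histogram(query), color_histogram(img))
          for img in database]
best = int(np.argmax(scores))
print(best)  # 2
```

The paper's contribution sits exactly where this sketch is simplest: richer features (color signature, shape, color texture) and a learned, meta-heuristic similarity in place of plain histogram intersection.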

  1. Visual Analysis of Weblog Content

    Energy Technology Data Exchange (ETDEWEB)

    Gregory, Michelle L.; Payne, Deborah A.; McColgin, Dave; Cramer, Nick O.; Love, Douglas V.

    2007-03-26

In recent years, one of the major advances of the World Wide Web has been social media, and one of the fastest growing aspects of social media is the blogosphere. Blogs make content creation easy and are highly accessible through web pages and syndication. With their growing influence, a need has arisen to monitor the opinions and insight revealed within their content. In this paper we describe a technical approach for analyzing the content of blog data using a visual analytic tool, IN-SPIRE, developed by Pacific Northwest National Laboratory. We highlight the capabilities of this tool that are particularly useful for information gathering from blog data.

  2. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse image data quantitatively. Generally, image analysis has to be performed visually and measurements made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for analysis of image texture and periodic structure through application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance, and other electric and magnetic properties. This paper presents the application of digital image processing to microscopic image characterization and analysis.
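The Fourier-based periodicity analysis described above can be sketched in a few lines: take the 2D power spectrum of a periodic image and read off the dominant spatial frequency. The synthetic lattice and its 8-pixel period are assumptions for illustration, not microscope data:

```python
import numpy as np

# Synthetic "lattice" image: a sinusoidal pattern with a known period
# of 8 pixels along x, constant along y.
n = 128
x = np.arange(n)
img = np.sin(2 * np.pi * x / 8.0)[None, :] * np.ones((n, 1))

# The dominant non-DC peak of the power spectrum gives the lattice period.
spectrum = np.abs(np.fft.fft2(img)) ** 2
spectrum[0, 0] = 0.0  # suppress the DC component
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
kx = min(kx, n - kx)  # fold the symmetric negative-frequency peak
period = n / kx
print(period)  # 8.0
```

For a real micrograph, the angular position of such peaks around the spectrum's origin is what encodes the crystallographic orientation mentioned in the abstract.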

  3. System for accessing a collection of histology images using content-based strategies

    International Nuclear Information System (INIS)

    Gonzalez F; Caicedo J C; Cruz Roa A; Camargo, J; Spinel, C

    2010-01-01

Histology images are an important resource for research, education and medical practice. The availability of image collections for reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they are composed of some tens of images that do not represent the wide diversity of biological structures found in fundamental tissues; making a complete histology image collection available to the general public would therefore have a great impact on research and education in areas such as medicine, biology and the natural sciences. This work presents the acquisition process of a histology image collection with 20,000 samples in digital format, from tissue processing to digital image capture. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search on the annotations provided by experts. The system also offers novel image visualization methods to allow easy identification of interesting images among hundreds of possible pictures. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  4. #fitspo on Instagram: A mixed-methods approach using Netlytic and photo analysis, uncovering the online discussion and author/image characteristics.

    Science.gov (United States)

    Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J

    2016-11-01

The #fitspo 'tag' is a recent trend on Instagram, used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program (N = 10,000 #fitspo posts) and content analysis of #fitspo images (N = 122) to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support, as personal (as opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.

  5. [Content of mineral elements of Gastrodia elata by principal components analysis].

    Science.gov (United States)

    Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei

    2015-03-01

To study the content of mineral elements and the principal components in Gastrodia elata. Mineral elements were determined by ICP and the data were analyzed by SPSS. K had the highest content, with an average of 15.31 g·kg(-1). The average content of N was 8.99 g·kg(-1), second only to K. The coefficients of variation of K and N were small, whereas that of Mn was the largest, at 51.39%. Highly significant positive correlations were found among N, P and K. Three principal components were selected by principal component analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe and Mg were the characteristic elements of G. elata. The contents of K and N were higher and relatively stable, while the variation of Mn content was the largest. From the perspective of mineral elements, the quality of G. elata from Guizhou and Yunnan was better.
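Selecting a few principal components from a samples-by-elements matrix, as in the abstract, is a routine PCA step. A sketch on a made-up mineral-content matrix — the element names follow the abstract, but the numbers, correlation structure, and sample count are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Made-up content matrix: 30 samples x 8 elements. N, P and K are
# generated as strongly correlated (as the abstract reports); the
# remaining elements are independent noise.
elements = ["N", "P", "K", "Cu", "Mn", "Fe", "Mg", "B"]
rng = np.random.default_rng(4)
base = rng.normal(0, 1, (30, 1))
X = np.hstack([base + rng.normal(0, 0.2, (30, 1)) for _ in range(3)]
              + [rng.normal(0, 1, (30, 1)) for _ in range(5)])

# Standardize (elements are on very different scales, g/kg vs mg/kg),
# then keep three principal components as in the abstract.
pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X))
explained = pca.explained_variance_ratio_
print(explained.sum())  # fraction of total variance in 3 components
```

With real ICP data, the loadings of each component on `elements` are what identify the "characteristic elements" the authors report.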

  6. Marketing pharmaceutical drugs to women in magazines: a content analysis.

    Science.gov (United States)

    Sokol, Jennifer; Wackowski, Olivia; Lewis, M J

    2010-01-01

    To examine the prevalence and content of pharmaceutical ads in demographically different women's magazines. A content analysis was conducted using one year's worth of 5 different women's magazines of varying age demographics. Magazines differed in the proportion of drug ads for different health conditions (eg, cardiovascular) and target audience by age demographic. Use of persuasive elements (types of appeals, evidence) varied by condition promoted (eg, mental-health drug ads more frequently used emotional appeals). Ads placed greater emphasis on direction to industry information resources than on physician discussions. Prevalence of pharmaceutical advertising in women's magazines is high; continued surveillance is recommended.

  7. YouTube™ as a Source of Instructional Videos on Bowel Preparation: a Content Analysis.

    Science.gov (United States)

    Ajumobi, Adewale B; Malakouti, Mazyar; Bullen, Alexander; Ahaneku, Hycienth; Lunsford, Tisha N

    2016-12-01

Instructional videos on bowel preparation have been shown to improve bowel preparation scores during colonoscopy. YouTube™ is one of the most frequently visited websites on the internet and contains videos on bowel preparation. In an era where patients are increasingly turning to social media for guidance on their health, the content of these videos merits further investigation. We assessed the content of bowel preparation videos available on YouTube™ to determine the proportion of YouTube™ videos on bowel preparation that are high-content videos and the characteristics of these videos. YouTube™ videos were assessed for the following content: (1) definition of bowel preparation, (2) importance of bowel preparation, (3) instructions on home medications, (4) name of bowel cleansing agent (BCA), (5) instructions on when to start taking BCA, (6) instructions on volume and frequency of BCA intake, (7) diet instructions, (8) instructions on fluid intake, (9) adverse events associated with BCA, and (10) rectal effluent. Each content parameter was given 1 point, for a total of 10 points. Videos with ≥5 points were considered by our group to be high-content videos; videos with ≤4 points were considered low-content videos. Forty-nine (59%) videos were low-content videos while 34 (41%) were high-content videos. There was no association between the number of views, number of comments, thumbs up, thumbs down or engagement score and videos deemed high-content. Multiple regression analysis revealed bowel preparation videos on YouTube™ with length >4 minutes and non-patient authorship to be associated with high-content videos.

  8. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  9. Analysis of Content Shared in Online Cancer Communities: Systematic Review.

    Science.gov (United States)

    van Eenbergen, Mies C; van de Poll-Franse, Lonneke V; Krahmer, Emiel; Verberne, Suzan; Mols, Floortje

    2018-04-03

The content that cancer patients and their relatives (ie, posters) share in online cancer communities has been researched in various ways. In the past decade, researchers have used automated analysis methods in addition to manual coding methods. Patients, providers, researchers, and health care professionals can learn from experienced patients, provided that their experience is findable. The aim of this study was to systematically review all relevant literature that analyzes user-generated content shared within online cancer communities. We reviewed the quality of available research and the kind of content that posters share with each other on the internet. A computerized literature search was performed via PubMed (MEDLINE), PsycINFO (5 and 4 stars), Cochrane Central Register of Controlled Trials, and ScienceDirect. The last search was conducted in July 2017. Papers were selected if they included the following terms: (cancer patient) and (support group or health communities) and (online or internet). We selected 27 papers and then subjected them to a 14-item quality checklist independently scored by 2 investigators. The methodological quality of the selected studies varied: 16 were of high quality and 11 were of adequate quality. Of those 27 studies, 15 were manually coded, 7 automated, and 5 used a combination of methods. The best results can be seen in the papers that combined both analytical methods. The number of analyzed posts ranged from 200 to 1,500,000; the number of analyzed posters ranged from 75 to 90,000. The studies analyzing large numbers of posts mainly related to breast cancer, whereas those analyzing small numbers were related to other types of cancers. A total of 12 studies involved partly or entirely automated analysis of the user-generated content. All the authors referred to two main content categories: informational support and emotional support.
In all, 15 studies reported only on the content, 6 studies explicitly reported on content and social

  10. Portrayal of tobacco smoking in popular women's magazines: a content analysis.

    Science.gov (United States)

    Kasujee, Naseera; Britton, John; Cranwell, Jo; Lyons, Ailsa; Bains, Manpreet

    2017-09-01

    Whilst many countries have introduced legislation prohibiting tobacco advertising and sponsorship, references to tobacco continue to appear in the media. This study quantified and characterized tobacco smoking content in popular women's magazines. The 10 top weekly and 5 monthly women's magazines most popular among 15-34 year olds in Britain published over a 3-month period were included. A content analysis was conducted for both written and visual content. In 146 magazines, there were 310 instances of tobacco content, the majority of which were positive towards smoking. Instances of celebrities smoking were most common (171, 55%), often in holiday or party settings that could be perceived to be luxurious, glamorous or fun. In all, 55 (18%) tobacco references related to fashion, which generally created an impression of smoking as a norm within the industry; and 34 (11%) text and image references to tobacco in TV and film. There were 50 (16%) reader-initiated mentions of smoking, typically in real-life stories or readers writing in to seek advice about smoking. Anti-smoking references including the hazards of smoking were infrequent (49; 16%). Although tobacco advertising is prohibited in Britain, women's magazines still appear to be promoting positive messages about tobacco and smoking. © The Author 2016. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Near-Infrared Imaging for High-Throughput Screening of Moisture-Induced Changes in Freeze-Dried Formulations

    DEFF Research Database (Denmark)

    Trnka, Hjalte; Palou, Anna; Panouillot, Pierre Emanuel

    2014-01-01

    Evaluation of freeze-dried biopharmaceutical formulations requires careful analysis of multiple quality attributes. The aim of this study was to evaluate the use of near-infrared (NIR) imaging for fast analysis of water content and related physical properties in freeze-dried formulations. Model f...... tool for formulation development of freeze-dried samples. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci....

  12. Oncological image analysis.

    Science.gov (United States)

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, but concentrating those in the authors' laboratories, and then outline opportunities and challenges for the next decade. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Gender Stereotyping and the Jersey Shore: A Content Analysis

    OpenAIRE

    Jacqueline S. Anderson; Sharmila Pixy Ferris

    2016-01-01

    Reality television is a highly popular genre, with a growing body of scholarly research. Unlike scripted programming, which offers fictional storylines, reality television relies heavily on cast member’s reactions to carefully crafted situations. This study examined the relationship between reality television and gender role stereotyping in a seminal reality television show, MTV’s Jersey Shore. Content analysis was used to conduct an in-depth examination of the first season of ...

  14. MELDOQ - astrophysical image and pattern analysis in medicine: early recognition of malignant melanomas of the skin by digital image analysis. Final report

    International Nuclear Information System (INIS)

    Bunk, W.; Pompl, R.; Morfill, G.; Stolz, W.; Abmayr, W.

    1999-01-01

Dermatoscopy is at present the most powerful clinical method for early detection of malignant melanomas. However, its application requires considerable expertise and experience. Therefore, a quantitative image analysis system has been developed in order to assist dermatologists in 'on site diagnosis' and to improve the detection efficiency. Based on a very extensive dataset of dermatoscopic images, recorded in a standardized manner, a number of features for quantitative characterization of complex patterns in melanocytic skin lesions have been developed. The derived classifier improved the detection rate of malignant and benign melanocytic lesions to over 90% (sensitivity = 91.5% and specificity = 93.4% in the test set), using only six measures. A distinguishing feature of the system is the visualization of the quantified characteristics that are based on the dermatoscopic ABCD rule. The developed prototype of a dermatoscopic workplace consists of defined procedures for standardized image acquisition and documentation, components of the necessary data pre-processing (e.g. shading and colour correction, removal of artefacts), quantification algorithms (evaluating asymmetry properties, border characteristics, and the content of colours and structural components) and classification routines. In 2000 an industrial partner will begin marketing the digital imaging system, including the specialized software for the early detection of skin cancer, which is suitable for clinicians and practitioners. The nonlinear analysis techniques primarily used (e.g. the scaling index method and others) can identify and characterize complex patterns in images and have diagnostic potential in many other applications. (orig.) [de]

  15. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

… it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase-space domain introduces an enormous amount of data … of the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis …; the application of Gabor expansions to image representation is considered in Sect. 6.
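As a minimal illustration of the Gabor atoms underlying this phase-space analysis, the numpy sketch below builds a complex 2D Gabor kernel (a Gaussian envelope modulated by a plane wave) and shows its orientation selectivity on a synthetic vertical edge. The kernel size, width, and frequency are illustrative choices, not values taken from the chapter.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, freq):
    """Complex 2D Gabor atom: Gaussian envelope times a plane wave whose
    oscillation direction is theta and spatial frequency is freq (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates so the carrier oscillates along direction theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

# A vertical edge responds strongly to the horizontally oscillating atom
# and weakly to the vertically oscillating one.
img = np.zeros((9, 9))
img[:, 4:] = 1.0
resp_h = abs((gabor_kernel(9, 2.0, 0.0, 0.25) * img).sum())
resp_v = abs((gabor_kernel(9, 2.0, np.pi / 2, 0.25) * img).sum())
```

A bank of such atoms at several orientations and frequencies tiles the location–wave-number domain that the chapter describes.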

  16. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview on the existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, the system for the analysis of CAT-scan images of the cervical triplet (c3-c5) by image analysis and subsequent expert-system is given and discussed in detail. Possible extensions are described [fr

  17. A Novel High Content Imaging-Based Screen Identifies the Anti-Helminthic Niclosamide as an Inhibitor of Lysosome Anterograde Trafficking and Prostate Cancer Cell Invasion.

    Directory of Open Access Journals (Sweden)

    Magdalena L Circu

Full Text Available Lysosome trafficking plays a significant role in tumor invasion, a key event in the development of metastasis. Previous studies from our laboratory have demonstrated that the anterograde (outward) movement of lysosomes to the cell surface in response to certain tumor microenvironment stimuli, such as hepatocyte growth factor (HGF) or acidic extracellular pH (pHe), increases cathepsin B secretion and tumor cell invasion. Anterograde lysosome trafficking depends on sodium-proton exchanger activity and can be reversed by blocking these ion pumps with Troglitazone or EIPA. Since these drugs cannot be advanced into the clinic due to toxicity, we designed a high-content assay to discover drugs that block peripheral lysosome trafficking, with the goal of identifying novel inhibitors of tumor cell invasion. An automated high-content imaging system (Cellomics) was used to measure the position of lysosomes relative to the nucleus. Among a total of 2210 repurposed and natural product drugs screened, 18 "hits" were identified. One of the compounds identified as an anterograde lysosome trafficking inhibitor was niclosamide, a marketed human anti-helminthic drug. Further studies revealed that niclosamide blocked acidic pHe-, HGF-, and epidermal growth factor (EGF)-induced anterograde lysosome redistribution, protease secretion, motility, and invasion of DU145 castrate-resistant prostate cancer cells at clinically relevant concentrations. In an effort to identify the mechanism by which niclosamide prevented anterograde lysosome movement, we found that this drug had no significant effect on the level of ATP, microtubules, or actin filaments, and had minimal effect on the PI3K and MAPK pathways. Niclosamide collapsed intralysosomal pH without disruption of the lysosome membrane, while bafilomycin, an agent that impairs lysosome acidification, was also found to induce juxtanuclear lysosome aggregation (JLA) in our model. Taken together, these data suggest that niclosamide promotes
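The screen's readout, lysosome position relative to the nucleus, can be illustrated with a toy numpy computation on binary masks. This is a hypothetical simplification: the actual Cellomics pipeline segments individual cells and applies its own distance metrics.

```python
import numpy as np

def mean_lysosome_distance(lysosome_mask, nucleus_mask):
    """Mean distance (pixels) of lysosome pixels from the nucleus centroid,
    a stand-in for the 'lysosome position relative to nucleus' readout."""
    cy, cx = np.argwhere(nucleus_mask).mean(axis=0)
    pts = np.argwhere(lysosome_mask)
    return float(np.hypot(pts[:, 0] - cy, pts[:, 1] - cx).mean())

# Toy cell: central nucleus, lysosomes either juxtanuclear or peripheral
nucleus = np.zeros((64, 64), bool); nucleus[28:36, 28:36] = True
juxta = np.zeros((64, 64), bool); juxta[30:34, 38:42] = True
peri = np.zeros((64, 64), bool); peri[2:6, 56:60] = True
d_juxta = mean_lysosome_distance(juxta, nucleus)
d_peri = mean_lysosome_distance(peri, nucleus)
```

A trafficking inhibitor would shift the per-cell distribution of this distance toward the juxtanuclear value.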

  18. Extraction of Terraces on the Loess Plateau from High-Resolution DEMs and Imagery Utilizing Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Hanqing Zhao

    2017-05-01

Full Text Available Terraces are typical artificial landforms on the Loess Plateau, with ecological functions in water and soil conservation, agricultural production, and biodiversity. Recording the spatial distribution of terraces is the basis for monitoring their extent and understanding their ecological effects. The current terrace extraction method relies mainly on high-resolution imagery, but its accuracy is limited because vegetation coverage distorts the features of terraces in imagery; high-resolution topographic data reflecting the morphology of the true terrace surfaces are needed. Terrace extraction on the Loess Plateau is challenging because of the complex terrain and the diverse vegetation established since the implementation of "vegetation recovery". This study presents an automatic method of extracting terraces based on 1 m resolution digital elevation models (DEMs), with 0.3 m resolution Worldview-3 imagery as auxiliary information, using object-based image analysis (OBIA). A multi-resolution segmentation method was used, in which slope, positive and negative terrain index (PN), accumulative curvature slope (AC), and slope of slope (SOS) were determined as input layers for image segmentation by correlation analysis and the Sheffield entropy method. The main DEM-based classification features were chosen from the terrain features derived from terrain factors and the texture features from gray-level co-occurrence matrix (GLCM) analysis; these features were then selected by importance analysis using classification and regression tree (CART) analysis. Extraction rules based on the DEMs were generated from the classification features, with a total classification accuracy of 89.96%. The red and near-infrared bands of the imagery were used to exclude construction land, which is easily confused with small terraces. As a result, the total classification accuracy was increased to 94%. The proposed method ensures comprehensive consideration of terrain, texture, shape, and
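Two of the DEM-derived input layers, slope and slope of slope (SOS), can be sketched with numpy finite differences. Cell size and units here are illustrative; the study's full pipeline also derives the PN and AC layers and runs multi-resolution segmentation in OBIA software.

```python
import numpy as np

def slope_deg(dem, cell=1.0):
    """Slope magnitude in degrees from a DEM via finite differences."""
    dzdy, dzdx = np.gradient(dem.astype(float), cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def slope_of_slope(dem, cell=1.0):
    """SOS: the slope of the slope surface. Flat terrace treads and uniform
    hillslopes give values near zero; terrace risers give high values."""
    return slope_deg(slope_deg(dem, cell), cell)

# A uniform 45-degree plane: constant slope everywhere, zero SOS
plane = np.tile(np.arange(16.0), (16, 1))
```

On real terraced terrain the SOS layer lights up as narrow bands along the risers, which is what makes it a useful segmentation input.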

  19. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

In this research we build a CBIR system based on learning a distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared to the state-of-the-art method, but still needs improvement in terms of accuracy. The inaccuracy in our experiment arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
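The discriminative-projection step can be illustrated with a two-class Fisher LDA direction computed in plain numpy. This is a toy sketch on 2-D points, not the paper's HoG-based pipeline.

```python
import numpy as np

def lda_direction(X0, X1):
    """Fisher LDA direction w = Sw^{-1} (m1 - m0) for two classes:
    projecting onto w maximizes between-class over within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

# Two toy 2-D classes separated along the x axis
X0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X1 = X0 + np.array([4.0, 0.0])
w = lda_direction(X0, X1)
```

In a retrieval setting, distances between projected feature vectors then serve as the learned similarity function.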

  20. Demonstrations of Agency in Contemporary International Children's Literature: An Exploratory Critical Content Analysis across Personal, Social, and Cultural Dimensions

    Science.gov (United States)

    Mathis, Janelle B.

    2015-01-01

    International children's literature has the potential to create global experiences and cultural insights for young people confronted with limited and biased images of the world offered by media. The current inquiry was designed to explore, through a critical content analysis approach, international children's literature in which characters…

  1. Video content analysis of surgical procedures.

    Science.gov (United States)

    Loukas, Constantinos

    2018-02-01

In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for purposes such as cognitive training, skills assessment, and workflow analysis. Methods from the broader field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. Articles were retrieved from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, the type of surgery performed, and the structure of the operation. A total of 81 articles were included. Publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed on video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, and shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist, including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  2. Suppression of high-density artefacts in x-ray CT images using temporal digital subtraction with application to cryotherapy

    Energy Technology Data Exchange (ETDEWEB)

Baissalov, R.; Sandison, G.A.; Rewcastle, J.C. [Department of Medical Physics, Tom Baker Cancer Center, Calgary T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, Calgary T2N 2N4 (Canada)]; Donnelly, B.J. [Department of Surgery, Tom Baker Cancer Center, Calgary T2N 4N2 (Canada); Department of Surgery, Foothills Hospital, Calgary T2N 2T7 (Canada)]; Saliken, J.C. [Department of Surgery, Tom Baker Cancer Center, Calgary T2N 4N2 (Canada); Department of Diagnostic Imaging, Foothills Hospital, Calgary T2N 2T7 (Canada)]; McKinnon, J.G. [Department of Surgery, Foothills Hospital, Calgary T2N 2T7 (Canada)]; Muldrew, K. [Department of Surgery, Faculty of Medicine, University of Calgary, Calgary T2N 2T7 (Canada)]

    2000-05-01

Image guidance in cryotherapy is usually performed using ultrasound. Although not currently in routine clinical use, x-ray CT imaging is an alternative means of guidance that can display the full 3D structure of the iceball, including frozen and unfrozen regions. However, the quality of x-ray CT images is compromised by the presence of high-density streak artefacts. To suppress these artefacts we applied temporal digital subtraction (TDS). This TDS method has the added advantage of improving the grey-scale contrast between frozen and unfrozen tissue in the CT images. Two sets of CT images were taken of a phantom material, cryoprobes and a urethral warmer (UW) before and during the cryoprobe freeze cycle. The high-density artefacts persisted in both image sets. TDS was performed on these two image sets using the corresponding mask image of unfrozen material and the same geometrical configuration of the cryoprobes and the UW. The resultant difference image had a significantly reduced artefact content. Thus TDS can be used to significantly suppress or eliminate high-density CT streak artefacts without reducing the metallic content of the cryoprobes. An in vivo study needs to be conducted to establish the utility of this TDS procedure for CT-assisted prostate or liver cryotherapy. Applying TDS in x-ray CT guided cryotherapy will facilitate estimation of the number and location of all frozen and unfrozen regions, potentially making cryotherapy safer and less operator dependent. (author)

  3. Suppression of high-density artefacts in x-ray CT images using temporal digital subtraction with application to cryotherapy

    International Nuclear Information System (INIS)

    Baissalov, R.; Sandison, G.A.; Rewcastle, J.C.; Donnelly, B.J.; Saliken, J.C.; McKinnon, J.G.; Muldrew, K.

    2000-01-01

Image guidance in cryotherapy is usually performed using ultrasound. Although not currently in routine clinical use, x-ray CT imaging is an alternative means of guidance that can display the full 3D structure of the iceball, including frozen and unfrozen regions. However, the quality of x-ray CT images is compromised by the presence of high-density streak artefacts. To suppress these artefacts we applied temporal digital subtraction (TDS). This TDS method has the added advantage of improving the grey-scale contrast between frozen and unfrozen tissue in the CT images. Two sets of CT images were taken of a phantom material, cryoprobes and a urethral warmer (UW) before and during the cryoprobe freeze cycle. The high-density artefacts persisted in both image sets. TDS was performed on these two image sets using the corresponding mask image of unfrozen material and the same geometrical configuration of the cryoprobes and the UW. The resultant difference image had a significantly reduced artefact content. Thus TDS can be used to significantly suppress or eliminate high-density CT streak artefacts without reducing the metallic content of the cryoprobes. An in vivo study needs to be conducted to establish the utility of this TDS procedure for CT-assisted prostate or liver cryotherapy. Applying TDS in x-ray CT guided cryotherapy will facilitate estimation of the number and location of all frozen and unfrozen regions, potentially making cryotherapy safer and less operator dependent. (author)
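The TDS operation itself reduces to a pixel-wise subtraction of the pre-freeze "mask" scan from each during-freeze scan, assuming the probe geometry is identical in both acquisitions. A minimal numpy sketch with illustrative values (not calibrated Hounsfield units):

```python
import numpy as np

def temporal_digital_subtraction(freeze_img, mask_img):
    """Pixel-wise subtraction of the pre-freeze 'mask' scan from the
    during-freeze scan: static high-density structures (cryoprobes,
    urethral warmer) cancel, leaving only the freeze-induced change."""
    return freeze_img.astype(float) - mask_img.astype(float)

# Toy CT slices: probe pixels at 1000, background tissue at 40
mask = np.full((8, 8), 40.0)
mask[3:5, 3:5] = 1000.0
freeze = mask.copy()
freeze[0:2, :] = 10.0                 # region altered by iceball growth
diff = temporal_digital_subtraction(freeze, mask)
```

Because the probes and their streaks appear identically in both scans, they vanish in the difference image, while the iceball region retains contrast.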

  4. Development of image analysis software for quantification of viable cells in microchips.

    Science.gov (United States)

    Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland

    2018-01-01

Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data generated requires automated methods for processing and analyzing all the resulting information. The software packages available so far are suitable for processing fluorescence and phase contrast images, but often do not give good results on transmission light microscopy images, due to intrinsic variation in the image acquisition technique itself (brightness/contrast adjustment, for instance) and the variability between acquisitions introduced by operators and equipment. This contribution presents image processing software, Python-based image analysis for cell growth (PIACG), that calculates the total area of the well occupied by cells of fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images, in a highly efficient way.
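The core measurement, the fraction of well area occupied by cells, can be sketched as a simple background-relative threshold in numpy. This is a hypothetical simplification of PIACG, which additionally corrects illumination and operator-dependent contrast before segmenting.

```python
import numpy as np

def occupied_area_fraction(img, k=2.0):
    """Fraction of the field occupied by cells: pixels deviating from the
    background level (estimated by the median) by more than k standard
    deviations are counted as cell-covered."""
    img = np.asarray(img, float)
    return float((np.abs(img - np.median(img)) > k * img.std()).mean())

# Toy transmission-light field: bright background with a darker cell patch
field = np.full((20, 20), 100.0)
field[:10, :10] = 50.0                # cells cover a quarter of the field
frac = occupied_area_fraction(field)
```

Comparing this fraction across wells exposed to different serum concentrations gives the dose-response readout the abstract describes.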

  5. High Resolution Spatio Temporal Moments Analysis of Solute Migration Captured using Pre-clinical Medical Imaging Techniques

    Science.gov (United States)

    Dogan, M.; Moysey, S. M.; Powell, B. A.; DeVol, T. A.

    2016-12-01

Advances in medical imaging technologies are continuously expanding the range of applications enabled within the earth sciences. While computed x-ray tomography (CT) scans have traditionally been used for investigating the structure of geologic materials, it is now possible to perform 3D time-lapse imaging of dynamic processes, such as monitoring the infiltration of water into a soil, with sub-millimeter resolution. Likewise, single photon emission computed tomography (SPECT) can provide information on the evolution of solute transport with spatial resolution on the order of a millimeter by tracking the migration of gamma-ray emitting isotopes like 99mTc and 111In. While these imaging techniques are revolutionizing our ability to look within porous media, techniques for the analysis of such rich and large data sets are limited. The spatial and temporal moments of a plume have long been used to provide quantitative measures to describe plume movement in a wide range of settings from the lab to the field. Moment analysis can also be used to estimate the hydrologic properties of the porous media. In this research, we investigate the use of moments for analyzing a high resolution 4D SPECT data set collected during a 99mTc transport experiment performed in a heterogeneous column. The 4D nature of the data set makes it amenable to the use of data mining and pattern recognition methods, such as cluster analysis, to identify regions or zones within the data that exhibit abnormal or unexpected behaviors. We then compare anomalous features within the SPECT data to similar features identified within the CT image to relate the flow behavior to pore-scale structures, such as porosity differences and macropores. Such comparisons help to identify whether these features are good predictors of preferential transport. Likewise, we evaluate whether local analysis of moments can be used to infer apparent parameters governing non-conservative transport in heterogeneous porous media, such
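The moment analysis reduces, per axis, to weighted sums over the imaged concentration field: the zeroth moment gives the total mass, the first the centroid, and the second central moment the spread. A minimal 1-D numpy sketch (the 4D SPECT case applies the same sums axis-by-axis at each time step):

```python
import numpy as np

def plume_moments(conc, coords):
    """Zeroth moment (total mass), first moment (centroid) and second
    central moment (variance) of a 1-D concentration profile."""
    m0 = conc.sum()
    centroid = (coords * conc).sum() / m0
    variance = ((coords - centroid) ** 2 * conc).sum() / m0
    return float(m0), float(centroid), float(variance)

# Gaussian plume centered at x = 5 with unit variance
x = np.linspace(0.0, 10.0, 201)
c = np.exp(-(x - 5.0) ** 2 / 2.0)
m0, mean_x, var_x = plume_moments(c, x)
```

Tracking the centroid over time yields the apparent advective velocity, and the growth rate of the variance yields an apparent dispersion coefficient.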

  6. Heavy Sexual Content Versus Safer Sex Content: A Content Analysis of the Entertainment Education Drama Shuga.

    Science.gov (United States)

    Booker, Nancy Achieng'; Miller, Ann Neville; Ngure, Peter

    2016-12-01

    Extremely popular with Kenyan youth, the entertainment-education drama Shuga was designed with specific goals of promoting condom use, single versus multiple sexual partners, and destigmatization of HIV. Almost as soon as it aired, however, it generated controversy due to its extensive sexual themes and relatively explicit portrayal of sexual issues. To determine how safer sex, antistigma messages, and overall sexual content were integrated into Shuga, we conducted a content analysis. Results indicated that condom use and HIV destigmatization messages were frequently and clearly communicated. Negative consequences for risky sexual behavior were communicated over the course of the entire series. Messages about multiple concurrent partnerships were not evident. In addition, in terms of scenes per hour of programming, Shuga had 10.3 times the amount of sexual content overall, 8.2 times the amount of sexual talk, 17.8 times the amount of sexual behavior, and 9.4 times the amount of sexual intercourse as found in previous analysis of U.S. entertainment programming. Research is needed to determine how these factors may interact to influence adolescent viewers of entertainment education dramas.

  7. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  8. The rice growth image files - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available The Rice Growth Monitoring for The Phenotypic Functional Analysis. Data name: The rice growth image files. DOI: 10.18908/lsdba.nbdc00945-004. Description of data contents: the rice growth image files, categorized based on file size. Data file: image files (directory). File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/image…

  9. Introduction of High Throughput Magnetic Resonance T2-Weighted Image Texture Analysis for WHO Grade 2 and 3 Gliomas.

    Directory of Open Access Journals (Sweden)

    Manabu Kinoshita

Full Text Available Reports have suggested that tumor textures presented on T2-weighted images correlate with the genetic status of glioma. Therefore, the development of an image analysis framework capable of objective, high-throughput image texture analysis for large-scale image data collection is needed. The current study aimed to develop such a framework by introducing two novel parameters for image texture on T2-weighted images: Shannon entropy and Prewitt filtering. Twenty-two WHO grade 2 and 28 grade 3 glioma patients whose pre-surgical MRI and IDH1 mutation status were available were collected. Heterogeneous lesions showed statistically higher Shannon entropy than homogeneous lesions (p = 0.006), and ROC curve analysis proved that Shannon entropy on T2WI was a reliable indicator for discriminating homogeneous from heterogeneous lesions (p = 0.015, AUC = 0.73). Lesions with well-defined borders exhibited statistically higher Edge mean and Edge median values under Prewitt filtering than those with vague borders (p = 0.0003 and p = 0.0005, respectively). ROC curve analysis also proved that both Edge mean and median values were promising indicators for discriminating lesions with vague borders from those with well-defined borders, and the two performed comparably (p = 0.0002, AUC = 0.81 and p < 0.0001, AUC = 0.83, respectively). Finally, IDH1 wild-type gliomas showed statistically lower Shannon entropy on T2WI than IDH1-mutated gliomas (p = 0.007), but no difference was observed between IDH1 wild-type and mutated gliomas in Edge median values under Prewitt filtering. The current study introduced two image metrics that reflect lesion texture described on T2WI. These two metrics were validated by the readings of a neuro-radiologist who was blinded to the results. This observation will facilitate further use of this technique in future large-scale image analysis of glioma.
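Both metrics are straightforward to compute from a T2-weighted slice; the numpy sketch below gives one plausible implementation (the histogram bin count and border padding are illustrative choices not specified in the abstract):

```python
import numpy as np

def shannon_entropy(img, bins=64):
    """Shannon entropy (bits) of the grey-level histogram: higher values
    indicate a more heterogeneous intensity distribution."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def prewitt_edge_stats(img):
    """Mean and median of the Prewitt gradient magnitude
    (the 'Edge mean' / 'Edge median' metrics)."""
    kx = np.array([[1, 0, -1]] * 3, float)
    ky = kx.T
    pad = np.pad(np.asarray(img, float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for dy in range(3):          # correlate with the 3x3 Prewitt kernels
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    mag = np.hypot(gx, gy)
    return float(mag.mean()), float(np.median(mag))

# Toy inputs: a flat region versus a sharp-bordered two-value region
flat = np.full((8, 8), 5.0)
step = np.zeros((8, 8)); step[:, 4:] = 1.0
```

A uniform region has zero entropy and zero edge statistics; a two-value region with a sharp border has one bit of entropy and a high Edge mean.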

  10. Image Analysis of a Negatively Curved Graphitic Sheet Model for Amorphous Carbon

    Science.gov (United States)

    Bursill, L. A.; Bourgeois, Laure N.

    High-resolution electron micrographs are presented which show essentially curved single sheets of graphitic carbon. Image calculations are then presented for the random surface schwarzite-related model of Townsend et al. (Phys. Rev. Lett. 69, 921-924, 1992). Comparison with experimental images does not rule out the contention that such models, containing surfaces of negative curvature, may be useful for predicting some physical properties of specific forms of nanoporous carbon. Some difficulties of the model predictions, when compared with the experimental images, are pointed out. The range of application of this model, as well as competing models, is discussed briefly.

  11. Gap-enhanced Raman tags for high-contrast sentinel lymph node imaging.

    Science.gov (United States)

    Bao, Zhouzhou; Zhang, Yuqing; Tan, Ziyang; Yin, Xia; Di, Wen; Ye, Jian

    2018-05-01

The sentinel lymph node (SLN) biopsy is gaining popularity as a procedure to investigate the lymphatic metastasis of malignant tumors. The techniques commonly used to identify SLNs in clinical practice are blue dye-guided visualization, radioisotope-based detection, and near-infrared fluorescence imaging. However, none of these methods fully meets clinical criteria, owing to issues such as short retention time in the SLN, poor spatial resolution, autofluorescence, low photostability, and high cost. In this study, we report a new type of nanoprobe, gap-enhanced Raman tags (GERTs), for SLN Raman imaging. With advantageous features including a unique "fingerprint" Raman signal, strong Raman enhancement, high photostability, good biocompatibility, and extra-long retention time, we demonstrate that GERTs are highly favorable for high-contrast, deep SLN Raman imaging, which meanwhile reveals the dynamic migration behavior of the probes entering the SLN. In addition, a quantitative volumetric Raman imaging (qVRI) data-processing method is employed to acquire a high-resolution 3-dimensional (3D) margin of the SLN as well as the content variation of GERTs within the SLN. Moreover, SLN detection can be realized with a cost-effective commercial portable Raman scanner. GERTs therefore hold great potential for clinical translation toward accurate intraoperative location of the SLN. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Analysis of PET hypoxia imaging in the quantitative imaging for personalized cancer medicine program

    International Nuclear Information System (INIS)

    Yeung, Ivan; Driscoll, Brandon; Keller, Harald; Shek, Tina; Jaffray, David; Hedley, David

    2014-01-01

Quantitative imaging is an important tool in clinical trials testing novel agents and strategies for cancer treatment. The Quantitative Imaging for Personalized Cancer Medicine (QIPCM) program provides clinicians and researchers participating in multi-center clinical trials with a central repository for their imaging data. In addition, a set of tools provides standards of practice (SOPs) for end-to-end quality assurance of scanners and image analysis. The four components for data archiving and analysis are the Clinical Trials Patient Database, the Clinical Trials PACS, the data analysis engine(s), and the high-speed networks that connect them. The program provides a suite of software able to perform RECIST, dynamic MRI, CT, and PET analysis. The imaging data can be accessed securely from remote sites and analyzed by researchers with these software tools, or with tools provided by the users and installed on the server. Alternatively, QIPCM provides a data analysis service following the developed SOPs. An example is discussed of a clinical study in which patients with unresectable pancreatic adenocarcinoma were studied with dynamic PET-FAZA for hypoxia measurement. We successfully quantified the degree of hypoxia as well as tumor perfusion in a group of 20 patients in terms of SUV and hypoxic fraction, and found no correlation between bulk tumor perfusion and hypoxia status in this cohort. QIPCM also provides end-to-end QA testing of scanners used in multi-center clinical trials. Based on quality assurance data from multiple CT-PET scanners, we concluded that quality control of imaging is vital to the success of multi-center trials, as different imaging and reconstruction parameters in PET imaging can lead to very different results in hypoxia imaging. (author)
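The SUV and hypoxic-fraction readouts reduce to simple arithmetic once voxel activity concentrations are available. A sketch under the usual assumptions (body-weight normalization, tissue density of about 1 g/mL, and a hypothetical uptake threshold; the trial's actual thresholding protocol is not described here):

```python
import numpy as np

def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight SUV: tissue activity concentration divided by injected
    dose per gram of body weight (assumes tissue density ~1 g/mL)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

def hypoxic_fraction(suv_map, threshold):
    """Fraction of tumor voxels whose uptake meets or exceeds a threshold."""
    suv_map = np.asarray(suv_map, float)
    return float((suv_map >= threshold).mean())

# 5 kBq/mL in tissue, 370 MBq injected into a 74 kg patient -> SUV of 1
suv_tumor = suv(5.0, 370.0, 74.0)
hf = hypoxic_fraction([0.8, 1.0, 1.4, 1.8], threshold=1.2)
```

Consistent SUV values across sites depend directly on the scanner QA and harmonized reconstruction parameters that the program enforces.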

  13. Resolution analysis of archive films for the purpose of their optimal digitization and distribution

    Science.gov (United States)

    Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2017-09-01

With the recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters as well as in the home environment, film archives full of movies at high-definition resolution and above are in the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information also bears on the choice of scanning resolution and of the spatial resolution used for further distribution. It might seem that scanning the film material at the highest possible resolution using state-of-the-art film scanners, and distributing it at this resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations further reduce it. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is indispensable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the image detail content analysis of archive film records. The resolution limit in the captured scene image and the factors that lower the final resolution are discussed. Methods are proposed to determine the spatial detail of the film picture based on analysis of its digitized image data. These procedures allow recommendations to be determined for the optimal distribution of digitized video content intended for various display devices with lower resolutions. The obtained results are illustrated on a spatial downsampling use case, and a performance evaluation of the proposed techniques is presented.

  14. High Resolution/High Fidelity Seismic Imaging and Parameter Estimation for Geological Structure and Material Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ru-Shan Wu; Xiao-Bi Xie

    2008-06-08

    Our proposed work on high resolution/high fidelity seismic imaging focused on three general areas: (1) development of new, more efficient, wave-equation-based propagators and imaging conditions, (2) developments towards amplitude-preserving imaging in the local angle domain, in particular, imaging methods that allow us to estimate the reflection as a function of angle at a layer boundary, and (3) studies of wave inversion for local parameter estimation. In this report we summarize the results and progress we made during the project period. The report is divided into three parts, totaling 10 chapters. The first part is on resolution analysis and its relation to directional illumination analysis. The second part, which is composed of 6 chapters, is on the main theme of our work, the true-reflection imaging. True-reflection imaging is an advanced imaging technology which aims at keeping the image amplitude proportional to the reflection strength of the local reflectors or to obtain the reflection coefficient as function of reflection-angle. There are many factors which may influence the image amplitude, such as geometrical spreading, transmission loss, path absorption, acquisition aperture effect, etc. However, we can group these into two categories: one is the propagator effect (geometric spreading, path losses); the other is the acquisition-aperture effect. We have made significant progress in both categories. We studied the effects of different terms in the true-amplitude one-way propagators, especially the terms including lateral velocity variation of the medium. We also demonstrate the improvements by optimizing the expansion coefficients in different terms. Our research also includes directional illumination analysis for both the one-way propagators and full-wave propagators. We developed the fast acquisition-aperture correction method in the local angle-domain, which is an important element in the true-reflection imaging. Other developments include the super

  15. [Quantitative image of bone mineral content--dual energy subtraction in a single exposure].

    Science.gov (United States)

    Katoh, T

    1990-09-25

    A dual energy subtraction system was constructed on an experimental basis for the quantitative image of bone mineral content. The system consists of a radiography system and an image processor. Two radiograms were taken with dual x-ray energy in a single exposure using an x-ray beam dichromized by a tin filter. In this system, a film cassette was used where a low speed film-screen system, a copper filter and a high speed film-screen system were layered on top of each other. The images were read by a microdensitometer and processed by a personal computer. The image processing included the corrections of the film characteristics and heterogeneity in the x-ray field, and the dual energy subtraction in which the effect of the high energy component of the dichromized beam on the tube side image was corrected. In order to determine the accuracy of the system, experiments using wedge phantoms made of mixtures of epoxy resin and bone mineral-equivalent materials in various fractions were performed for various tube potentials and film processing conditions. The results indicated that the relative precision of the system was within +/- 4% and that the propagation of the film noise was within +/- 11 mg/cm2 for the 0.2 mm pixels. The results also indicated that the system response was independent of the tube potential and the film processing condition. The bone mineral weight in each phalanx of the freshly dissected hand of a rhesus monkey was measured by this system and compared with the ash weight. The results showed an error of +/- 10%, slightly larger than that of phantom experiments, which is probably due to the effect of fat and the variation of focus-object distance. The air kerma in free air at the object was approximately 0.5 mGy for one exposure. The results indicate that this system is applicable to clinical use and provides useful information for evaluating a time-course of localized bone disease.
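The dual-energy subtraction at the heart of this system can be sketched as a per-pixel 2x2 solve on the two log-attenuation images. The attenuation coefficients below are illustrative assumptions, not values from the paper, and the forward simulation simply checks that the inversion recovers a known bone areal density:

```python
import numpy as np

# Hypothetical mass-attenuation coefficients (cm^2/g) for soft tissue and
# bone mineral at the low- and high-energy components of a dichromized beam.
# Values are illustrative, not measured.
MU = {"soft": {"lo": 0.25, "hi": 0.18},
      "bone": {"lo": 0.60, "hi": 0.30}}

def bone_mineral_map(i_lo, i_hi, i0_lo=1.0, i0_hi=1.0):
    """Dual-energy log subtraction: solve the 2x2 attenuation system
    L_e = mu_soft_e * m_soft + mu_bone_e * m_bone  (e = lo, hi)
    for the bone mineral areal density m_bone (g/cm^2) per pixel."""
    l_lo = -np.log(i_lo / i0_lo)
    l_hi = -np.log(i_hi / i0_hi)
    det = (MU["soft"]["lo"] * MU["bone"]["hi"]
           - MU["soft"]["hi"] * MU["bone"]["lo"])
    m_bone = (MU["soft"]["lo"] * l_hi - MU["soft"]["hi"] * l_lo) / det
    return m_bone

# Forward-simulate a pixel with 1.2 g/cm^2 soft tissue and 0.05 g/cm^2 bone:
m_s, m_b = 1.2, 0.05
i_lo = np.exp(-(MU["soft"]["lo"] * m_s + MU["bone"]["lo"] * m_b))
i_hi = np.exp(-(MU["soft"]["hi"] * m_s + MU["bone"]["hi"] * m_b))
print(round(float(bone_mineral_map(i_lo, i_hi)), 4))  # recovers 0.05
```

The same arithmetic applies array-wise to whole images, which is how a densitometer-read film pair would be processed in practice.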

  16. Error tolerance analysis of wave diagnostic based on coherent modulation imaging in high power laser system

    Science.gov (United States)

    Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2018-02-01

    Coherent modulation imaging, providing fast convergence and high resolution from a single diffraction pattern, is a promising technique to satisfy the urgent demand for on-line multi-parameter diagnostics with a single setup in high power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not yet been investigated. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of the HPLF, a quantitative statistical analysis was carried out for five different error sources. We found that detector background noise and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis indicate potential directions for further improving the accuracy of parameter diagnostics, which is critically important for formal application in the daily routines of the HPLF.

  17. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval...... performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition....
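For the common case where each region's color distribution is summarized as a Gaussian, the Jeffreys-Matusita distance used above has a closed form via the Bhattacharyya distance. A minimal one-dimensional sketch (the paper's weighted linear combination over color and color-derivative channels would sum several such terms):

```python
import numpy as np

def jm_distance(mu1, var1, mu2, var2):
    """Jeffreys-Matusita distance between two 1-D Gaussian feature
    distributions (e.g. one color channel within an image region).
    JM = sqrt(2 * (1 - exp(-B))), with B the Bhattacharyya distance;
    it saturates at sqrt(2) for fully separated distributions."""
    b = ((mu1 - mu2) ** 2 / (4.0 * (var1 + var2))
         + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2))))
    return np.sqrt(2.0 * (1.0 - np.exp(-b)))

# Identical distributions -> distance 0; far-apart means -> near sqrt(2).
print(jm_distance(0.5, 0.01, 0.5, 0.01))   # 0.0
print(jm_distance(0.0, 0.01, 10.0, 0.01))  # ~1.4142
```

A retrieval similarity would then be `sum(w_i * jm_i)` over channels, with the weights `w_i` fitted for optimal retrieval performance as the abstract describes.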

  18. Echolucency of computerized ultrasound images of carotid atherosclerotic plaques are associated with increased levels of triglyceride-rich lipoproteins as well as increased plaque lipid content

    DEFF Research Database (Denmark)

    Grønholdt, Marie-Louise M.; Nordestgaard, Børge; Wiebe, Britt M.

    1998-01-01

    Background-Echo-lucency of carotid atherosclerotic plaques on computerized ultrasound B-mode images has been associated with a high incidence of brain infarcts as evaluated on CT scans. We tested the hypotheses that triglyceride-rich lipoproteins in the fasting and postprandial state predict carotid plaque echo-lucency and that echo-lucency predicts a high plaque lipid content. Methods and Results-The study included 137 patients with neurological symptoms and greater than or equal to 50% stenosis of the relevant carotid artery. High-resolution B-mode ultrasound images of carotid plaques were......

  19. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.
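A standard preprocessing step in such tutorials is unfolding the (x, y, wavelength) hypercube into the pixels-by-wavelengths matrix that multivariate methods like PCA or PLS-DA operate on, and refolding predictions back into image form. A minimal sketch:

```python
import numpy as np

def unfold(hypercube):
    """Unfold an (x, y, wavelength) hyperspectral cube into a
    (pixels x wavelengths) matrix for multivariate analysis."""
    nx, ny, nl = hypercube.shape
    return hypercube.reshape(nx * ny, nl)

def refold(matrix, nx, ny):
    """Fold a per-pixel result matrix back into image geometry."""
    return matrix.reshape(nx, ny, -1)

cube = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # tiny toy cube
X = unfold(cube)
print(X.shape)  # (6, 4)
back = refold(X, 2, 3)
print(bool(np.array_equal(back, cube)))  # True
```

After classification, refolding the predicted class labels per pixel yields the final digital classification image the tutorial refers to.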

  20. Texture analysis of speckle in optical coherence tomography images of tissue phantoms

    International Nuclear Information System (INIS)

    Gossage, Kirk W; Smith, Cynthia M; Kanter, Elizabeth M; Hariri, Lida P; Stone, Alice L; Rodriguez, Jeffrey J; Williams, Stuart K; Barton, Jennifer K

    2006-01-01

    Optical coherence tomography (OCT) is an imaging modality capable of acquiring cross-sectional images of tissue using back-reflected light. Conventional OCT images have a resolution of 10-15 μm, and are thus best suited for visualizing tissue layers and structures. OCT images of collagen (with and without endothelial cells) have no resolvable features and may appear to simply show an exponential decrease in intensity with depth. However, examination of these images reveals that they display a characteristic repetitive structure due to speckle. The purpose of this study is to evaluate the application of statistical and spectral texture analysis techniques for differentiating living and non-living tissue phantoms containing various sizes and distributions of scatterers based on speckle content in OCT images. Statistically significant differences between texture parameters and excellent classification rates were obtained when comparing various endothelial cell concentrations ranging from 0 cells/ml to 25 million cells/ml. Statistically significant results and excellent classification rates were also obtained using various sizes of microspheres with concentrations ranging from 0 microspheres/ml to 500 million microspheres/ml. This study has shown that texture analysis of OCT images may be capable of differentiating tissue phantoms containing various sizes and distributions of scatterers.
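A simplified stand-in for the statistical and spectral texture features such a study computes: first-order moments of the intensity distribution plus a crude spectral measure of speckle structure. The feature set and phantom parameters are illustrative, not those of the paper:

```python
import numpy as np

def speckle_texture_features(roi):
    """First-order statistical and spectral texture features of an
    OCT region of interest."""
    roi = np.asarray(roi, dtype=float)
    mean, std = roi.mean(), roi.std()
    # Contrast-normalized moments are robust to overall brightness.
    z = (roi - mean) / (std + 1e-12)
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4) - 3.0
    # Spectral feature: fraction of power outside the DC bin, a crude
    # measure of the repetitive structure that speckle imposes.
    spec = np.abs(np.fft.fft2(roi)) ** 2
    ac_fraction = 1.0 - spec[0, 0] / spec.sum()
    return {"mean": mean, "std": std, "skew": skewness,
            "kurtosis": kurtosis, "ac_fraction": ac_fraction}

rng = np.random.default_rng(0)
smooth = np.full((32, 32), 100.0)               # featureless phantom
speckled = 100.0 * rng.rayleigh(1.0, (32, 32))  # fully developed speckle
print(round(float(speckle_texture_features(smooth)["ac_fraction"]), 6))
print(speckle_texture_features(speckled)["ac_fraction"] > 0.1)  # True
```

Feature vectors of this kind would then feed a classifier to separate phantoms with different scatterer concentrations.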

  1. Image simulation of high-speed imaging by high-pressure gas ionization detector

    International Nuclear Information System (INIS)

    Miao Jichen; Liu Ximing; Wu Zhifang

    2005-01-01

    The signals of neighboring pixels are accumulated in the Freight Train Inspection System because the data-fetch time is shorter than the ion drift time. This paper analyzes the correlation of neighboring pixels and designs a computer simulation method to generate emulated images, such as an indicator image. The results indicate that the high-pressure gas ionization detector can be used in high-speed digital radiography. (authors)

  2. High speed measurement of corn seed viability using hyperspectral imaging

    Science.gov (United States)

    Ambrose, Ashabahebwa; Kandpal, Lalit Mohan; Kim, Moon S.; Lee, Wang-Hee; Cho, Byoung-Kwan

    2016-03-01

    Corn is one of the most cultivated crops all over the world, as food for humans as well as animals. Optimized agronomic practices and improved technological interventions during planting, harvesting and post-harvest handling are critical to improving the quantity and quality of corn production. Seed germination and vigor are the primary determinants of high yield, notwithstanding any other factors that may come into play during the growth period. Seed viability may be lost during storage due to unfavorable conditions, e.g. moisture content and temperature; physical damage during mechanical processing, e.g. shelling; or overheating during drying. It is therefore vital for seed companies and farmers to test and ascertain seed viability to avoid losses of any kind. This study aimed at investigating the possibility of using the hyperspectral imaging (HSI) technique to discriminate viable and nonviable corn seeds. One group of corn samples was heat-treated by microwave processing while another group was kept as an untreated control. Hyperspectral images of both groups were captured between 400 and 2500 nm. A partial least squares discriminant analysis (PLS-DA) model was built for the classification of aged (heat-treated) and normal (untreated) corn seeds. The model showed the highest classification accuracy, 97.6% (calibration) and 95.6% (prediction), in the SWIR region of the HSI. Furthermore, the PLS-DA and binary images were capable of providing visual information on treated and untreated corn seeds. The overall results suggest that the HSI technique is accurate for non-destructive classification of viable and non-viable seeds.
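A PLS-DA classifier of the kind described can be sketched with a single latent variable (PLS1 on 0/1 class labels, thresholded at 0.5). This is a minimal illustration on synthetic "spectra", not the study's multi-component model or data:

```python
import numpy as np

def plsda_fit(X, y):
    """One-latent-variable PLS1 discriminant analysis (labels coded 0/1).
    A minimal sketch, not a full multi-component NIPALS implementation."""
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    w = Xc.T @ yc
    w /= np.linalg.norm(w)      # weight vector (first latent direction)
    t = Xc @ w                  # scores
    b = (t @ yc) / (t @ t)      # inner regression coefficient
    return {"w": w, "b": b, "xm": xm, "ym": ym}

def plsda_predict(model, X):
    t = (X - model["xm"]) @ model["w"]
    y_hat = model["ym"] + model["b"] * t
    return (y_hat > 0.5).astype(int)  # threshold the continuous response

# Toy "spectra": treated seeds carry a raised band in variables 10-20.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 0.05, (40, 50))
base[20:, 10:20] += 0.5            # class-1 spectral signature
labels = np.r_[np.zeros(20, int), np.ones(20, int)]
model = plsda_fit(base, labels.astype(float))
acc = (plsda_predict(model, base) == labels).mean()
print(acc)  # high on this separable toy set
```

Applying the fitted model pixel-wise over a hyperspectral image yields the binary visualization images mentioned in the abstract.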

  3. High sensitivity optical molecular imaging system

    Science.gov (United States)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics and other biological studies. In preclinical and clinical application, imaging depth, resolution and sensitivity are the key factors for researchers using OMI. In this paper, we report a high sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm, with high resolution at 2 cm depth and high sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, by combining a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light path and efficient imaging techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  4. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time waste in data processing and to allow an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter using also functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old with volumetric T1-weighted, diffusion tensor imaging, and resting state fMRI data, and 10 subjects with 18F-Altanserin PET data also. Results. It was observed both a high inter
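As an illustration of the graph-theory metrics such a toolbox derives from a connectivity matrix, here is a minimal NumPy sketch of node degree and clustering coefficient for a binary undirected network (MIBCA itself pipelines the Brain Connectivity Toolbox for this):

```python
import numpy as np

def graph_metrics(adj):
    """Node degree and clustering coefficient from a binary, undirected
    connectivity (adjacency) matrix."""
    adj = np.asarray(adj)
    degree = adj.sum(axis=1)
    # Triangles around each node: closed 3-walks on the diagonal of A^3,
    # halved because each triangle is traversed in two directions.
    triangles = np.diag(adj @ adj @ adj) / 2.0
    possible = degree * (degree - 1) / 2.0
    clustering = np.where(possible > 0,
                          triangles / np.maximum(possible, 1), 0.0)
    return degree, clustering

# 4-node toy network: nodes 0-1-2 form a triangle, node 3 hangs off node 2.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
deg, clust = graph_metrics(adj)
print(deg.tolist())    # [2, 2, 3, 1]
print(clust.tolist())  # triangle members cluster highly; the leaf is 0
```

Group statistics and connectogram visualization then operate on per-node metric vectors like these.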

  5. ESIM: Edge Similarity for Screen Content Image Quality Assessment.

    Science.gov (United States)

    Ni, Zhangkai; Ma, Lin; Zeng, Huanqiang; Chen, Jing; Cai, Canhui; Ma, Kai-Kuang

    2017-10-01

    In this paper, an accurate full-reference image quality assessment (IQA) model developed for assessing screen content images (SCIs), called the edge similarity (ESIM), is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to edges that are often encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA for the SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features-i.e., edge contrast, edge width, and edge direction. The first two attributes are simultaneously generated from the input SCI based on a parametric edge model, while the last one is derived directly from the input SCI. The extraction of these three features will be performed for the reference SCI and the distorted SCI, individually. The degree of similarity measured for each above-mentioned edge attribute is then computed independently, followed by combining them together using our proposed edge-width pooling strategy to generate the final ESIM score. To conduct the performance evaluation of our proposed ESIM model, a new and the largest SCI database (denoted as SCID) is established in our work and made to the public for download. Our database contains 1800 distorted SCIs that are generated from 40 reference SCIs. For each SCI, nine distortion types are investigated, and five degradation levels are produced for each distortion type. Extensive simulation results have clearly shown that the proposed ESIM model is more consistent with the perception of the HVS on the evaluation of distorted SCIs than the multiple state-of-the-art IQA methods.
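The per-attribute similarity and pooling steps described can be sketched with the SSIM-family similarity kernel common to this class of IQA models. This is an illustrative stand-in, not the published ESIM formulation; the constant `c` and the uniform pooling weights are assumptions:

```python
import numpy as np

def feature_similarity(f_ref, f_dist, c=1e-4):
    """SSIM-style similarity map between two edge-feature maps
    (e.g. edge contrast, width, or direction) of reference and
    distorted images; values lie in (0, 1], with 1 for identical maps."""
    return (2.0 * f_ref * f_dist + c) / (f_ref ** 2 + f_dist ** 2 + c)

def pooled_score(sim_maps, weights):
    """Combine per-attribute similarity maps multiplicatively, then pool
    with a weighting map (ESIM pools with edge-width-derived weights)."""
    combined = np.ones_like(sim_maps[0])
    for s in sim_maps:
        combined = combined * s
    return float((combined * weights).sum() / weights.sum())

ref = np.array([[1.0, 0.5], [0.2, 0.8]])   # toy edge-contrast map
print(pooled_score([feature_similarity(ref, ref)], np.ones((2, 2))))  # 1.0
dist = ref * 0.5                            # uniformly weakened edges
print(pooled_score([feature_similarity(ref, dist)], np.ones((2, 2))) < 1.0)
```

In the full model, one such similarity map per edge attribute (contrast, width, direction) would enter the product before pooling.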

  6. Metabolic imaging of human kidney triglyceride content: reproducibility of proton magnetic resonance spectroscopy.

    Directory of Open Access Journals (Sweden)

    Sebastiaan Hammer

    Full Text Available OBJECTIVE: To assess the feasibility of renal proton magnetic resonance spectroscopy for quantification of triglyceride content and to compare spectral quality and reproducibility without and with respiratory motion compensation in vivo. MATERIALS AND METHODS: The Institutional Review Board of our institution approved the study protocol, and written informed consent was obtained. After technical optimization, a total of 20 healthy volunteers underwent renal proton magnetic resonance spectroscopy of the renal cortex both without and with respiratory motion compensation and volume tracking. After the first session the subjects were repositioned and the protocol was repeated to assess reproducibility. Spectral quality (linewidth of the water signal) and triglyceride content were quantified. Bland-Altman analyses and a test by Pitman were performed. RESULTS: Linewidth changed from 11.5±0.4 Hz to 10.7±0.4 Hz (all data pooled, p<0.05), without and with respiratory motion compensation respectively. Mean % triglyceride content in the first and second session without respiratory motion compensation were respectively 0.58±0.12% and 0.51±0.14% (P = NS). Mean % triglyceride content in the first and second session with respiratory motion compensation were respectively 0.44±0.10% and 0.43±0.10% (P = NS between sessions and P = NS compared to measurements with respiratory motion compensation). Bland-Altman analyses showed narrower limits of agreement and a significant difference in the correlated variances (correlation of -0.59, P<0.05). CONCLUSION: Metabolic imaging of the human kidney using renal proton magnetic resonance spectroscopy is a feasible tool to assess cortical triglyceride content in humans in vivo, and the use of respiratory motion compensation significantly improves spectral quality and reproducibility. Therefore, respiratory motion compensation seems a necessity for metabolic imaging of renal triglyceride content in vivo.

  7. Modeled effects on permittivity measurements of water content in high surface area porous media

    International Nuclear Information System (INIS)

    Jones, S.B.; Or, Dani

    2003-01-01

    Time domain reflectometry (TDR) has become an important measurement technique for determination of porous media water content and electrical conductivity due to its accuracy, fast response and automation capability. Water content is inferred from the measured bulk dielectric constant based on travel time analysis along simple transmission lines. TDR measurements in low surface area porous media accurately describe water content using an empirical relationship. Measurement discrepancies arise from dominating influences such as bound water due to high surface area, extreme aspect ratio particles or atypical water phase configuration. Our objectives were to highlight primary factors affecting dielectric permittivity measurements for water content determination in porous mixtures, and demonstrate the influence of these factors on mixture permittivity as predicted by a three-phase dielectric mixture model. Modeled results considering water binding, higher porosity, constituent geometry or phase configuration suggest any of these effects individually are capable of causing permittivity reduction, though all likely contribute in high surface area porous media.
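A common three-phase dielectric mixture model of the kind referenced above is the power-law (Birchak) form, eps_bulk**alpha = sum_i theta_i * eps_i**alpha. The sketch below uses alpha = 0.5 (the refractive-index form) and illustrative permittivity and porosity values, not the paper's parameters; lowering alpha or substituting a reduced bound-water permittivity mimics the high-surface-area effects discussed:

```python
def mixture_permittivity(theta_w, porosity=0.4, alpha=0.5,
                         eps_solid=5.0, eps_water=80.0, eps_air=1.0):
    """Three-phase power-law (Birchak) dielectric mixing model.
    theta_w: volumetric water content; remaining pore space is air.
    All parameter defaults are illustrative assumptions."""
    theta_air = porosity - theta_w      # air-filled pore space
    theta_solid = 1.0 - porosity
    eps_alpha = (theta_solid * eps_solid ** alpha
                 + theta_w * eps_water ** alpha
                 + theta_air * eps_air ** alpha)
    return eps_alpha ** (1.0 / alpha)

dry = mixture_permittivity(0.0)
wet = mixture_permittivity(0.3)
print(round(dry, 2), round(wet, 2))  # permittivity rises strongly with water
```

Inverting this relation (measured bulk permittivity to water content) is what TDR travel-time analysis effectively does via its empirical calibration.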

  8. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  9. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  10. Mapping whole-brain activity with cellular resolution by light-sheet microscopy and high-throughput image analysis (Conference Presentation)

    Science.gov (United States)

    Silvestri, Ludovico; Rudinskiy, Nikita; Paciscopi, Marco; Müllenbroich, Marie Caroline; Costantini, Irene; Sacconi, Leonardo; Frasconi, Paolo; Hyman, Bradley T.; Pavone, Francesco S.

    2016-03-01

    Mapping neuronal activity patterns across the whole brain with cellular resolution is a challenging task for state-of-the-art imaging methods. Indeed, despite a number of technological efforts, quantitative cellular-resolution activation maps of the whole brain have not yet been obtained. Many techniques are limited by coarse resolution or by a narrow field of view. High-throughput imaging methods, such as light sheet microscopy, can be used to image large specimens with high resolution and in reasonable times. However, the bottleneck is then moved from image acquisition to image analysis, since many terabytes of data have to be processed to extract meaningful information. Here, we present a full experimental pipeline to quantify neuronal activity in the entire mouse brain with cellular resolution, based on a combination of genetics, optics and computer science. We used a transgenic mouse strain (Arc-dVenus mouse) in which neurons that have been active in the last hours before brain fixation are fluorescently labelled. Samples were cleared with CLARITY and imaged with a custom-made confocal light sheet microscope. To perform automatic localization of fluorescent cells in the large images produced, we used a novel computational approach called semantic deconvolution. The combined approach presented here allows quantifying the number of Arc-expressing neurons throughout the whole mouse brain. When applied to cohorts of mice subjected to different stimuli and/or environmental conditions, this method helps find correlations in activity between different neuronal populations, opening the possibility to infer a sort of brain-wide 'functional connectivity' with cellular resolution.

  11. What does cancer treatment look like in consumer cancer magazines? An exploratory analysis of photographic content in consumer cancer magazines.

    Science.gov (United States)

    Phillips, Selene G; Della, Lindsay J; Sohn, Steve H

    2011-04-01

    In an exploratory analysis of several highly circulated consumer cancer magazines, the authors evaluated congruency between visual images of cancer patients and target audience risk profile. The authors assessed 413 images of cancer patients/potential patients for demographic variables such as age, gender, and ethnicity/race. They compared this profile with actual risk statistics. The images in the magazines are considerably younger, more female, and more White than what is indicated by U.S. cancer risk statistics. The authors also assessed images for visual signs of cancer testing/diagnosis and treatment. Few individuals show obvious signs of cancer treatment (e.g., head scarves, skin/nail abnormalities, thin body types). Most images feature healthier looking people, some actively engaged in construction work, bicycling, and yoga. In contrast, a scan of the editorial content showed that nearly two thirds of the articles focus on treatment issues. To explicate the implications of this imagery-text discontinuity on readers' attention and cognitive processing, the authors used constructs from information processing and social identity theories. On the basis of these models/theories, the authors provide recommendations for consumer cancer magazines, suggesting that the imagery be adjusted to reflect cancer diagnosis realities for enhanced message attention and comprehension.

  12. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
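The intensity-profile analysis at the core of comet scoring can be illustrated with a standard DNA-damage metric, the tail moment (tail DNA fraction times head-tail centroid distance). This is a generic 1-D sketch, not OpenComet's implementation; in particular, the head/tail boundary is supplied directly rather than found by profile analysis as OpenComet does:

```python
import numpy as np

def tail_moment(comet_profile, head_end):
    """Olive-style tail moment from a 1-D comet intensity profile.
    `head_end` is the index where the comet head stops (here assumed
    known; real tools derive it from the intensity profile)."""
    x = np.arange(len(comet_profile), dtype=float)
    p = np.asarray(comet_profile, dtype=float)
    head, tail = p[:head_end], p[head_end:]
    if tail.sum() == 0:
        return 0.0          # undamaged cell: no DNA in the tail
    tail_frac = tail.sum() / p.sum()
    head_centroid = (x[:head_end] * head).sum() / head.sum()
    tail_centroid = (x[head_end:] * tail).sum() / tail.sum()
    return tail_frac * (tail_centroid - head_centroid)

undamaged = [0, 5, 20, 5, 0, 0, 0, 0]  # all DNA in the head
damaged = [0, 5, 20, 5, 4, 3, 2, 1]    # DNA migrated into a tail
print(round(tail_moment(damaged, 4), 3))  # 0.75
print(tail_moment(undamaged, 4))          # 0.0
```

Automated tools compute such metrics per detected comet after segmenting heads from the 2-D image, which is what removes the manual-tagging bottleneck described above.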

  13. Body Talk: Body Image Commentary on Queerty.com.

    Science.gov (United States)

    Schwartz, Joseph; Grimm, Josh

    2016-08-01

    In this study, we conducted a content analysis of 243 photographic images of men published on the gay male-oriented blog Queerty.com. We also analyzed 435 user-generated comments from a randomly selected 1-year sample. Focusing on images' body types, we found that the range of body types featured on the blog was quite narrow-the vast majority of images had very low levels of body fat and very high levels of muscularity. Users' body image-related comments typically endorsed and celebrated images; critiques of images were comparatively rare. Perspectives from objectification theory and social comparison theory suggest that the images and commentary found on the blog likely reinforce unhealthy body image in gay male communities.

  14. Androgen receptor mutations associated with androgen insensitivity syndrome: a high content analysis approach leading to personalized medicine.

    Directory of Open Access Journals (Sweden)

    Adam T Szafran

    2009-12-01

    Full Text Available Androgen insensitivity syndrome (AIS) is a rare disease associated with inactivating mutations of AR that disrupt male sexual differentiation, and cause a spectrum of phenotypic abnormalities having as a common denominator loss of reproductive viability. No established treatment exists for these conditions; however, there are sporadic reports of patients (or recapitulated mutations in cell lines) that respond to administration of supraphysiologic doses (or pulses) of testosterone or synthetic ligands. Here, we utilize a novel high content analysis (HCA) approach to study AR function at the single-cell level in genital skin fibroblasts (GSF). We discuss in detail findings in GSF from three historical patients with AIS, which include identification of novel mechanisms of AR malfunction, and the potential ability to utilize HCA for personalized treatment of patients affected by this condition.

  15. Signal Conditioning in Process of High Speed Imaging

    Directory of Open Access Journals (Sweden)

    Libor Hargas

    2015-01-01

    Full Text Available The accuracy of cinematic analysis with camera system depends on frame rate of used camera. Specific case of cinematic analysis is in medical research focusing on microscopic objects moving with high frequencies (cilia of respiratory epithelium. The signal acquired by high speed video acquisition system has very amount of data. This paper describes hardware parts, signal condition and software, which is used for image acquiring thru digital camera, intelligent illumination dimming hardware control and ROI statistic creation. All software parts are realized as virtual instruments.

  16. The high throughput virtual slit enables compact, inexpensive Raman spectral imagers

    Science.gov (United States)

    Gooding, Edward; Deutsch, Erik R.; Huehnerhoff, Joseph; Hajian, Arsen R.

    2018-02-01

    Raman spectral imaging is increasingly becoming the tool of choice for field-based applications such as threat, narcotics and hazmat detection; air, soil and water quality monitoring; and material identification. Conventional fiber-coupled point-source Raman spectrometers effectively interrogate a small sample area and identify bulk samples via spectral library matching. However, these devices are very slow at mapping over macroscopic areas. In addition, the spatial averaging performed by instruments that collect binned spectra, particularly when used in combination with orbital raster scanning, tends to dilute the spectra of trace particles in a mixture. Our design, employing free-space line illumination combined with area imaging, reveals both the spectral and spatial content of heterogeneous mixtures. This approach is well suited to applications such as detecting trace particles of explosives and narcotics in fingerprints. The patented High Throughput Virtual Slit [1] is an innovative optical design that enables compact, inexpensive handheld Raman spectral imagers. HTVS-based instruments achieve significantly higher spectral resolution than can be obtained with conventional designs of the same size. Alternatively, they can be used to build instruments with comparable resolution to large spectrometers, but substantially smaller size, weight and unit cost, all while maintaining high sensitivity. When used in combination with laser line imaging, this design eliminates sample photobleaching and unwanted photochemistry while greatly enhancing mapping speed, all with high selectivity and sensitivity. We will present spectral image data and discuss applications that are made possible by low-cost HTVS-enabled instruments.

  17. Political leaders and the media: can we measure political leadership images in newspapers using computer-assisted content analysis?

    NARCIS (Netherlands)

    Aaldering, L.; Vliegenthart, R.

    2016-01-01

    Despite the large amount of research into both media coverage of politics and political leadership, surprisingly little research has been devoted to the ways political leaders are discussed in the media. This paper studies whether computer-aided content analysis can be applied in examining

  18. The diagnosis of small solitary pulmonary nodule: comparison of standard and inverse digital images on a high resolution monitor using ROC analysis

    International Nuclear Information System (INIS)

    Choi, Byeong Kyoo; Lee, In Sun; Seo, Joon Beom; Lee, Jin Seong; Song, Koun Sik; Lim, Tae Hwan

    2002-01-01

    To study the impact of inversion of soft-copy chest radiographs on the detection of small solitary pulmonary nodules using a high-resolution monitor. The study group consisted of 80 patients who had undergone posterior chest radiography; 40 had a solitary noncalcified pulmonary nodule approximately 1 cm in diameter, and 40 were control subjects. Standard and inverse digital images using the inversion tool on a PACS system were displayed on high-resolution monitors (2048 × 2560 × 8 bit). Ten radiologists were requested to rank each image using a five-point scale (1 = definitely negative, 3 = equivocal or indeterminate, 5 = definite nodule), and the data were interpreted using receiver operating characteristic (ROC) analysis. The area under the ROC curve for pooled data of standard image sets was significantly larger than that of inverse image sets (0.8893 and 0.8095, respectively; p < 0.05). For detecting small solitary pulmonary nodules, inverse digital images were significantly inferior to standard digital images.
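
    The five-point confidence ratings described above feed directly into ROC analysis: the area under the ROC curve equals the probability that a randomly chosen nodule case is rated higher than a randomly chosen control (the Mann-Whitney statistic). A minimal sketch with made-up ratings, not the study's data:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is rated higher than a randomly chosen negative
    case, with ties counted as 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative 1-5 ratings for six nodule cases (label 1) and six controls (label 0)
scores = [5, 4, 4, 3, 5, 2, 1, 2, 3, 1, 2, 1]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(round(roc_auc(scores, labels), 3))  # 0.931 for this toy data
```

    In a full observer study these AUCs would be computed per reader and compared with a significance test, but the rank-based AUC above is the core quantity.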

  19. Evaluation of water content around airway in obstructive sleep apnea patients using peripharyngeal mucosal T2 magnetic resonance imaging.

    Science.gov (United States)

    Rahmawati, Anita; Chishaki, Akiko; Ohkusa, Tomoko; Hashimoto, Sonomi; Adachi, Kazuo; Nagao, Michinobu; Konishi Nishizaka, Mari; Ando, Shin-Ichi

    2017-11-01

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by repetitive episodes of airway closure, which usually occur in the retropalatal region of the oropharynx. The upper airway mucosa in OSA patients has been described as edematous, but this has not been fully clarified. This study aimed to establish a magnetic resonance imaging (MRI) parameter to estimate tissue water content at the retropalatal level and to investigate its relationship with sleep parameters in OSA patients. Forty-eight subjects with OSA underwent overnight polysomnography and 1.5-tesla cervical MRI [mean (SD) age 55 (14) years, apnea-hypopnea index (AHI) 45.2 (26.1) events/hour, 79.2% male]. On axial T2-weighted images from the epipharynx to the oropharynx, the ratio of the signal intensities of the peripharyngeal mucosa and the masseter muscle [T2 mucous-to-masseter intensity ratio (T2MMIR)] was used as an estimate of water content in the retropalatal region. Partial correlation analysis was performed to examine the correlation between the T2MMIR and polysomnography parameters. We found strong positive correlations between the T2MMIR and AHI (r = 0.545, P < 0.05), supine AHI (r = 0.553, P < 0.05), and REM AHI (r = 0.640, P < 0.01) by partial correlation analysis. In addition, patients with less efficient sleep, who had more stage 1 sleep, showed a significantly higher T2MMIR (r = 0.357, P < 0.05). This study confirmed that the peripharyngeal T2MMIR can be a simple parameter representing peripharyngeal tissue water content related to severe OSA. © 2015 John Wiley & Sons Ltd.
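
    The partial correlation used above measures the association between two variables while controlling for a third. The abstract does not state which covariate was controlled for, so the confounder below (BMI) and all numbers are purely illustrative. A first-order partial correlation can be computed from the pairwise Pearson coefficients:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy values: hypothetical T2MMIR, AHI, and an assumed confounder (BMI);
# none of these numbers come from the study.
t2mmir = [1.1, 1.3, 1.6, 1.8, 2.0, 2.4]
ahi    = [12,  25,  38,  47,  61,  80]
bmi    = [22,  27,  24,  31,  28,  33]
print(round(partial_corr(t2mmir, ahi, bmi), 3))
```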

  20. What Images Reveal: a Comparative Study of Science Images between Australian and Taiwanese Junior High School Textbooks

    Science.gov (United States)

    Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua; Chang, Huey-Por

    2017-07-01

    From a social semiotic perspective, image designs in science textbooks are inevitably influenced by the sociocultural context in which the books are produced. The learning environments of Australia and Taiwan vary greatly. Drawing on social semiotics and cognitive science, this study compares classificational images in Australian and Taiwanese junior high school science textbooks. Classificational images are important kinds of images, which can represent taxonomic relations among objects as reported by Kress and van Leeuwen (Reading images: the grammar of visual design, 2006). An analysis of the images from sample chapters in Australian and Taiwanese high school science textbooks showed that the majority of the Taiwanese images are covert taxonomies, which represent hierarchical relations implicitly. In contrast, Australian classificational images included diversified designs, but particularly types with a tree structure which depicted overt taxonomies, explicitly representing hierarchical super-ordinate and subordinate relations. Many of the Taiwanese images are reminiscent of the specimen images in eighteenth century science texts representing "what truly is", while more Australian images emphasize structural objectivity. Moreover, Australian images support cognitive functions which facilitate reading comprehension. The relationships between image designs and learning environments are discussed and implications for textbook research and design are addressed.

  1. Content analysis to detect high stress in oral interviews and text documents

    Science.gov (United States)

    Thirumalainambi, Rajkumar (Inventor); Jorgensen, Charles C. (Inventor)

    2012-01-01

    A system of interrogation to estimate whether a subject of interrogation is likely experiencing high stress, emotional volatility and/or internal conflict in the subject's responses to an interviewer's questions. The system applies one or more of four procedures, a first statistical analysis, a second statistical analysis, a third analysis and a heat map analysis, to identify one or more documents containing the subject's responses for which further examination is recommended. Words in the documents are characterized in terms of dimensions representing different classes of emotions and states of mind, in which the subject's responses that manifest high stress, emotional volatility and/or internal conflict are identified. A heat map visually displays the dimensions manifested by the subject's responses in different colors, textures, geometric shapes or other visually distinguishable indicia.

  2. Compressed sensing cine imaging with high spatial or high temporal resolution for analysis of left ventricular function.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-08-01

    To assess two compressed sensing cine magnetic resonance imaging (MRI) sequences with high spatial or high temporal resolution in comparison to a reference steady-state free precession cine (SSFP) sequence for reliable quantification of left ventricular (LV) volumes. LV short-axis stacks of two compressed sensing breath-hold cine sequences with high spatial resolution (SPARSE-SENSE HS: temporal resolution 40 msec, in-plane resolution 1.0 × 1.0 mm²) and high temporal resolution (SPARSE-SENSE HT: temporal resolution 11 msec, in-plane resolution 1.7 × 1.7 mm²) and of a reference cine SSFP sequence (standard SSFP: temporal resolution 40 msec, in-plane resolution 1.7 × 1.7 mm²) were acquired in 16 healthy volunteers on a 1.5T MR system. LV parameters were analyzed semiautomatically twice by one reader and once by a second reader. The volumetric agreement between sequences was analyzed using the paired t-test, Bland-Altman plots, and Passing-Bablok regression. Small differences were observed between standard SSFP and SPARSE-SENSE HS for stroke volume (SV; -7 ± 11 ml; P = 0.024), ejection fraction (EF; -2 ± 3%; P = 0.019), and myocardial mass (9 ± 9 g; P = 0.001), but not for end-diastolic volume (EDV; P = 0.079) and end-systolic volume (ESV; P = 0.266). No significant differences were observed between standard SSFP and SPARSE-SENSE HT regarding EDV (P = 0.956), SV (P = 0.088), and EF (P = 0.103), but differences were found for ESV (3 ± 5 ml; P = 0.039) and myocardial mass (8 ± 10 g; P = 0.007). Bland-Altman analysis showed good agreement between the sequences (maximum bias ≤ -8%). Two compressed sensing cine sequences, one with high spatial resolution and one with high temporal resolution, showed good agreement with standard SSFP for LV volume assessment. J. Magn. Reson. Imaging 2016;44:366-374. © 2016 Wiley Periodicals, Inc.
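
    The Bland-Altman agreement analysis reported above reduces to computing the mean difference (bias) between paired measurements from two methods and the 95% limits of agreement at bias ± 1.96 SD. A minimal sketch with invented volumes, not the study's measurements:

```python
import statistics as st

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96 SD)
    between paired measurements from two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative end-diastolic volumes (ml) from two sequences (not study data)
ssfp   = [142, 155, 128, 150, 161, 137]
sparse = [140, 158, 125, 149, 165, 135]
bias, lower, upper = bland_altman(ssfp, sparse)
print(f"bias {bias:.1f} ml, LoA [{lower:.1f}, {upper:.1f}] ml")
```

    In the usual Bland-Altman plot, each difference is plotted against the pairwise mean and the three horizontal lines (bias and the two limits) are overlaid.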

  3. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  4. High-resolution dynamic imaging and quantitative analysis of lung cancer xenografts in nude mice using clinical PET/CT.

    Science.gov (United States)

    Wang, Ying Yi; Wang, Kai; Xu, Zuo Yu; Song, Yan; Wang, Chu Nan; Zhang, Chong Qing; Sun, Xi Lin; Shen, Bao Zhong

    2017-08-08

    Considering that the general application of dedicated small-animal positron emission tomography/computed tomography (PET/CT) is limited, clinical PET/CT might be an acceptable alternative in many situations. We aimed to estimate the feasibility of using clinical PET/CT with [F-18]-fluoro-2-deoxy-D-glucose for high-resolution dynamic imaging and quantitative analysis of cancer xenografts in nude mice. Dynamic clinical PET/CT scans were performed on xenografts for 60 min after injection with [F-18]-fluoro-2-deoxy-D-glucose. Scans were reconstructed with or without the SharpIR method in two phases, and mice were sacrificed to extract major organs and tumors, using ex vivo γ-counting as a reference. Strikingly, we observed that image quality and the correlation between all quantitative data from clinical PET/CT and the ex vivo counting were better with the SharpIR reconstructions than without. Our data demonstrate that a clinical PET/CT scanner with SharpIR reconstruction is a valuable tool for imaging small animals in preclinical cancer research, offering dynamic imaging parameters, good image quality, and accurate data quantification.

  5. High-Resolution and Non-destructive Evaluation of the Spatial Distribution of Nitrate and Its Dynamics in Spinach (Spinacia oleracea L.) Leaves by Near-Infrared Hyperspectral Imaging

    Directory of Open Access Journals (Sweden)

    Hao-Yu Yang

    2017-11-01

    Full Text Available Nitrate is an important component of the nitrogen cycle and is therefore present in all plants. However, excessive nitrogen fertilization results in a high nitrate content in vegetables, which is unhealthy for humans. Understanding the spatial distribution of nitrate in leaves is beneficial for improving nitrogen assimilation efficiency and reducing its content in vegetables. In this study, near-infrared (NIR) hyperspectral imaging was used for the non-destructive and effective evaluation of nitrate content in spinach (Spinacia oleracea L.) leaves. Leaf samples with different nitrate contents were collected under various fertilization conditions, and reference data were obtained using the reflectometer RQflex 10. Partial least squares regression analysis revealed a high correlation between the reference data and the NIR spectra (r² = 0.74, root mean squared error of cross-validation = 710.16 mg/kg). Furthermore, the nitrate content in spinach leaves was successfully mapped at a high spatial resolution, clearly displaying its distribution in the petiole, vein, and blade. Finally, the mapping results demonstrated dynamic changes in the nitrate content in intact leaf samples under different storage conditions, showing the value of this non-destructive tool for future analyses of the nitrate content in vegetables.
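
    The two figures of merit quoted above, r² and the root mean squared error of cross-validation (RMSECV), are both computed from reference values and cross-validated predictions. A hedged sketch with toy numbers, independent of the study's PLS model:

```python
import math

def r_squared(y_ref, y_pred):
    """Coefficient of determination between reference and predicted values."""
    y_bar = sum(y_ref) / len(y_ref)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_ref, y_pred))
    ss_tot = sum((a - y_bar) ** 2 for a in y_ref)
    return 1 - ss_res / ss_tot

def rmse(y_ref, y_pred):
    """Root mean squared error; applied to cross-validated predictions
    this is the RMSECV."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_ref, y_pred)) / len(y_ref))

# Toy nitrate values (mg/kg) and hypothetical cross-validated predictions
ref  = [1200, 2500, 3100, 4200, 5400]
pred = [1500, 2300, 3400, 4000, 5100]
print(round(r_squared(ref, pred), 3), round(rmse(ref, pred), 1))
```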

  6. A Content Analysis of the Roles Portrayed by Women in Commercials: 1973 - 2008

    Directory of Open Access Journals (Sweden)

    Claudia Rosa Acevedo

    2010-12-01

    Full Text Available The principal objective of this paper was to examine female roles portrayed in advertising. More specifically, the question that stimulated this research project was: What message has been signaled to society through advertisements about women? Have these portrayals altered throughout the past decades? The study consisted of a systematic content analysis of Brazilian commercials between 1973 and 2008. A probabilistic sample procedure was adopted. 95 pieces of material were selected. Our results have revealed that certain specific images have changed over the years; however, they continue to be stereotyped and idealized. DOI: 10.5585/remark.v9i3.2201

  7. Image analysis to evaluate the browning degree of banana (Musa spp.) peel.

    Science.gov (United States)

    Cho, Jeong-Seok; Lee, Hyeon-Jeong; Park, Jung-Hoon; Sung, Jun-Hyung; Choi, Ji-Young; Moon, Kwang-Deog

    2016-03-01

    Image analysis was applied to examine banana peel browning. The banana samples were divided into 3 treatment groups: no treatment and normal packaging (Cont); CO2 gas exchange packaging (CO); normal packaging with an ethylene generator (ET). We confirmed that the browning of banana peels developed more quickly in the CO group than in the other groups based on a sensory test and an enzyme assay. The G (green) and CIE L*, a*, and b* values obtained from the image analysis sharply increased or decreased in the CO group. These colour values showed high correlation coefficients (>0.9) with the sensory test results. CIE L*a*b* values measured with a colorimeter also showed high correlation coefficients, but comparatively lower than those of image analysis. Based on this analysis, browning of the banana occurred more quickly with CO2 gas exchange packaging, and image analysis can be used to evaluate the browning of banana peels. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. High Fidelity Raman Chemical Imaging of Materials

    Science.gov (United States)

    Bobba, Venkata Nagamalli Koteswara Rao

    The development of high fidelity Raman imaging systems is important for a number of application areas, including materials science, bio-imaging, bioscience and healthcare, pharmaceutical analysis, and semiconductor characterization. The use of Raman imaging as a characterization tool for detecting the amorphous and crystalline regions in the biopolymer poly-L-lactic acid (PLLA) is the précis of my thesis. The first chapter provides a brief overview of the basics of Raman spectroscopy, Raman chemical imaging, Raman mapping, and Raman imaging techniques. The second chapter contains details about the successful development of tailored samples of PLLA. Biodegradable polymers are used in tissue engineering, agriculture, packaging, and in the medical field for drug delivery, implant devices, and surgical sutures. Detailed information about the sample preparation and characterization of these cold-drawn PLLA polymer substrates is provided. Wide-field Raman hyperspectral imaging using an acousto-optic tunable filter (AOTF) was demonstrated in the early 1990s. The AOTF contributed challenges such as image walk, distortion, and image blur. A wide-field AOTF Raman imaging system has been developed as part of my research, and methods to overcome some of the challenges in performing AOTF wide-field Raman imaging are discussed in the third chapter. This imaging system has been used for studying the crystalline and amorphous regions on the cold-drawn sample of PLLA. Of all the modalities available for performing Raman imaging, Raman point-mapping is the most extensively used method. The ease of obtaining a Raman hyperspectral cube dataset with high spectral and spatial resolution is the main motivation for performing this technique. As part of my research, I have constructed a Raman point-mapping system and used it to obtain Raman hyperspectral image data of various minerals, pharmaceuticals, and polymers. Chapter four offers

  9. Echo-lucency of computerized ultrasound images of carotid atherosclerotic plaques are associated with increased levels of triglyceride-rich lipoproteins as well as increased plaque lipid content

    DEFF Research Database (Denmark)

    Grønholdt, Marie-Louise Moes; Nordestgaard, Børge G.; Weibe, Brit M.

    1998-01-01

    Background: Echo-lucency of carotid atherosclerotic plaques on computerized ultrasound B-mode images has been associated with a high incidence of brain infarcts as evaluated on CT scans. We tested the hypotheses that triglyceride-rich lipoproteins in the fasting and postprandial state predict carotid plaque echo-lucency and that echo-lucency predicts a high plaque lipid content. Methods and Results: The study included 137 patients with neurological symptoms and ≥50% stenosis of the relevant carotid artery. High-resolution B-mode ultrasound images of carotid plaques were...

  10. [Analysis of nursing-related content portrayed in middle and high school textbooks under the national common basic curriculum in Korea].

    Science.gov (United States)

    Jung, Myun Sook; Choi, Hyeong Wook; Li, Dong Mei

    2010-02-01

    The purpose of this study was to analyze nursing-related content in middle and high school textbooks under the National Common Basic Curriculum in Korea. Nursing-related content from 43 middle school textbooks and 13 high school textbooks was analyzed. There were 28 items of nursing-related content in the selected textbooks. Among them, 13 items were in the 'nursing activity' area, 6 items in the 'nurse as an occupation' area, 2 items in the 'major and career choice' area, 6 items were 'just one word', and 1 item in 'others'. The main nursing-related content portrayed in the middle and high school textbooks was caring for patients (7 items, accounting for 46.5%) and nurses working in hospitals (6 items, accounting for 21.4%). In terms of gender perspective, female nurses (15 items, accounting for 53.6%) were most prevalent.

  11. Image Analysis for Facility Siting: a Comparison of Low- and High-altitude Image Interpretability for Land Use/land Cover Mapping

    Science.gov (United States)

    Borella, H. M.; Estes, J. E.; Ezra, C. E.; Scepan, J.; Tinney, L. R.

    1982-01-01

    For two test sites in Pennsylvania, the interpretability of commercially acquired low-altitude and existing high-altitude aerial photography is documented in terms of time, costs, and accuracy for Anderson Level II land use/land cover mapping. Information extracted from the imagery is to be used in the evaluation process for siting energy facilities. Land use/land cover maps were drawn at 1:24,000 scale using commercially flown color infrared photography and from high-altitude photography obtained from the United States Geological Survey's EROS Data Center. Detailed accuracy assessment of the maps generated by manual image analysis was accomplished employing a stratified unaligned sampling design ensuring adequate class representation. Both 'area-weighted' and 'by-class' accuracies were documented and field-verified. A discrepancy map was also drawn to illustrate differences in classification between the two map scales. Results show that the 1:24,000 scale map set was more accurate (99% vs. 94% area-weighted) than the 1:62,500 scale set, especially when sampled by class (96% vs. 66%). The 1:24,000 scale maps were also more time-consuming and costly to produce, due mainly to higher image acquisition costs.

  12. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  13. Biobleaching chemistry of laccase-mediator systems on high-lignin-content kraft pulps

    International Nuclear Information System (INIS)

    Chakar, F.S.; Ragauskas, A.J.

    2004-01-01

    A high-lignin-content softwood kraft pulp was reacted with laccase in the presence of 1-hydroxybenzotriazole (HBT), N-acetyl-N-phenylhydroxylamine (NHA), and violuric acid (VA). The biodelignification response with violuric acid was superior to both 1-hydroxybenzotriazole and N-acetyl-N-phenylhydroxylamine. NMR analysis of residual lignins isolated before and after the biobleaching treatments revealed that the latter material was highly oxidized and that the magnitude of structural changes was most pronounced with the laccase - violuric acid biobleaching system. An increase in the content of carboxylic acid groups and a decrease in methoxyl groups were noted with all three laccase-mediator systems. The oxidation biobleaching pathway is directed primarily towards noncondensed C5 phenolic lignin functional structures for all three laccase-mediated systems. The laccase - violuric acid system was also reactive towards C5-condensed phenolic lignin structures. (author)

  14. ADC texture—An imaging biomarker for high-grade glioma?

    Energy Technology Data Exchange (ETDEWEB)

    Brynolfsson, Patrik; Hauksson, Jón; Karlsson, Mikael; Garpebring, Anders; Nyholm, Tufve, E-mail: tufve.nyholm@radfys.umu.se [Department of Radiation Sciences, Radiation Physics, Umeå University, Umeå SE-901 87 (Sweden); Nilsson, David; Trygg, Johan [Computational Life Science Cluster (CLiC), Department of Chemistry, Umeå University, Umeå SE-901 87 (Sweden); Henriksson, Roger [Department of Radiation Sciences, Oncology, Umeå University, Umeå SE-901 87, Sweden and Regionalt Cancercentrum Stockholm, Karolinska Universitetssjukhuset, Solna, Stockholm SE-102 39 (Sweden); Birgander, Richard [Department of Radiation Sciences, Diagnostic Radiology, Umeå University, Umeå SE-901 87 (Sweden); Asklund, Thomas [Department of Radiation Sciences, Oncology, Umeå University, Umeå SE-901 87 (Sweden)

    2014-10-15

    Purpose: Survival for high-grade gliomas is poor, at least partly explained by intratumoral heterogeneity contributing to treatment resistance. Radiological evaluation of treatment response is in most cases limited to assessment of tumor size months after the initiation of therapy. Diffusion-weighted magnetic resonance imaging (MRI) and its estimate of the apparent diffusion coefficient (ADC) has been widely investigated, as it reflects tumor cellularity and proliferation. The aim of this study was to investigate texture analysis of ADC images in conjunction with multivariate image analysis as a means for identification of pretreatment imaging biomarkers. Methods: Twenty-three consecutive high-grade glioma patients were treated with radiotherapy (2 Gy/60 Gy) with concomitant and adjuvant temozolomide. ADC maps and T1-weighted anatomical images with and without contrast enhancement were collected prior to treatment, and (residual) tumor contrast enhancement was delineated. A gray-level co-occurrence matrix analysis was performed on the ADC maps in a cuboid encapsulating the tumor in coronal, sagittal, and transversal planes, giving a total of 60 textural descriptors for each tumor. In addition, similar examinations and analyses were performed at day 1, week 2, and week 6 into treatment. Principal component analysis (PCA) was applied to reduce dimensionality of the data, and the five largest components (scores) were used in subsequent analyses. MRI assessment three months after completion of radiochemotherapy was used for classifying tumor progression or regression. Results: The score scatter plots revealed that the first, third, and fifth components of the pretreatment examinations exhibited a pattern that strongly correlated to survival. Two groups could be identified: one with a median survival after diagnosis of 1099 days and one with 345 days, p = 0.0001. Conclusions: By combining PCA and texture analysis, ADC texture characteristics were identified, which seems
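
    The gray-level co-occurrence matrix (GLCM) analysis described above counts how often pairs of gray levels co-occur at a fixed pixel offset; Haralick descriptors such as contrast are then computed from the normalized matrix. A minimal pure-Python sketch with a toy quantized image (the study computed 60 descriptors over three anatomical planes; this shows only one offset and one descriptor):

```python
def glcm(img, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix of a 2D integer image
    for a single pixel offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    count = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1
                count += 1
    return [[v / count for v in row] for row in m]

def contrast(m):
    """Haralick contrast: sum over i,j of p(i,j) * (i - j)^2."""
    n = len(m)
    return sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

# A 4x4 toy "ADC map" quantized to 4 gray levels (illustrative only)
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
print(round(contrast(glcm(img, 1, 0, 4)), 3))  # 0.333 for this toy image
```

    In practice, several offsets and descriptors are computed per plane and the resulting feature vectors are fed to PCA, as in the study.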

  15. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  16. Differentiating high-grade from low-grade chondrosarcoma with MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hye Jin; Hong, Sung Hwan; Choi, Ja-Young; Choi, Jung-Ah; Kang, Heung Sik [Seoul National University College of Medicine, Department of Radiology and Institute of Radiation Medicine, Seoul (Korea); Moon, Kyung Chul [Seoul National University College of Medicine, Department of Pathology, Seoul (Korea); Kim, Han-Soo [Seoul National University College of Medicine, Department of Orthopedic Surgery, Seoul (Korea)

    2009-12-15

    The purpose of the study was to evaluate the MR imaging features that differentiate between low-grade chondrosarcoma (LGCS) and high-grade chondrosarcoma (HGCS) and to determine the most reliable predictors for differentiation. MR images of 42 pathologically proven chondrosarcomas (28 LGCS and 14 HGCS) were retrospectively reviewed. There were 13 male and 29 female patients with an age range of 23-72 years (average age 51 years). On MR images, signal intensity, specific morphological characteristics including entrapped fat, internal lobular architecture, and outer lobular margin, soft tissue mass formation and contrast enhancement pattern were analysed. MR imaging features used to identify LGCS and HGCS were compared using univariate analysis and multivariate stepwise logistic regression analysis. On T1-weighted images, a central area of high signal intensity, which was not seen in LGCS, was frequently observed in HGCS (n = 5, 36%) (p < 0.01). Entrapped fat within the tumour was commonly seen in LGCS (n = 26, 93%), but not in HGCS (n = 1, 4%) (p < 0.01). LGCS more commonly (n = 24, 86%) preserved the characteristic internal lobular structures within the tumour than HGCSs (n = 4, 29%) (p < 0.01). Soft tissue formation was more frequently observed in HGCS (n = 11, 79%) than in LGCS (n = 1, 4%) (p < 0.01). On gadolinium-enhanced images, large central nonenhancing areas were exhibited in only two (7.1%) of LGCS, while HGCS frequently (n = 9, 64%) had a central nonenhancing portion (p < 0.01). Results of multivariate stepwise logistic regression analysis showed that soft tissue formation and entrapped fat within the tumour were the variables that could be used to independently differentiate LGCS from HGCS. There were several MR imaging features of chondrosarcoma that could be helpful in distinguishing HGCS from LGCS. Among them, soft tissue mass formation favoured the diagnosis of HGCS, and entrapped fat within the tumour was highly indicative of LGCS. (orig.)

  18. The CODESRIA Bulletin: A Content Analysis

    African Journals Online (AJOL)

    Prof

    the major question that guides this essay is the following: What do articles in the CODESRIA ... A qualitative and quantitative analysis of the contents of the articles can ..... and elections, child and youth, race, gender, and academic freedom.

  19. High-resolution satellite image segmentation using Hölder exponents

    Indian Academy of Sciences (India)

    Keywords. High resolution image; texture analysis; segmentation; IKONOS; Hölder exponent; cluster. ... are that. • it can be used as a tool to measure the roughness ... uses reinforcement learning to learn the reward values of ..... The numerical.

  20. Identifying Skill Requirements for GIS Positions: A Content Analysis of Job Advertisements

    Science.gov (United States)

    Hong, Jung Eun

    2016-01-01

    This study identifies the skill requirements for geographic information system (GIS) positions, including GIS analysts, programmers/developers/engineers, specialists, and technicians, through a content analysis of 946 GIS job advertisements from 2007-2014. The results indicated that GIS job applicants need to possess high levels of GIS analysis…

  1. High accuracy FIONA-AFM hybrid imaging

    International Nuclear Information System (INIS)

    Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.

    2011-01-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation of these techniques is the difficulty of distinguishing different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to within 8 nm. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.
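
    Fiducial-based alignment of two imaging modalities can be illustrated as a least-squares rigid registration of matched point sets (a generic Procrustes/Kabsch sketch on synthetic quantum-dot positions; the authors' actual registration procedure is not reproduced here):

```python
import numpy as np

def register_points(P, Q):
    """Least-squares rigid alignment (rotation + translation) mapping P onto Q.

    P, Q: (N, 2) arrays of matched fiducial coordinates, e.g. quantum-dot
    positions located in both the fluorescence and the AFM image.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic example: fiducials in a "fluorescence" frame, rotated/shifted
# into an "AFM" frame with a little localization noise (values are assumed).
rng = np.random.default_rng(1)
P = rng.uniform(0, 500, size=(6, 2))                  # nm, hypothetical
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([40.0, -15.0]) + rng.normal(0, 2.0, P.shape)

R, t = register_points(P, Q)
residual = np.linalg.norm(P @ R.T + t - Q, axis=1).mean()
print(f"mean registration residual: {residual:.2f} nm")
```

    With a few nanometres of per-marker localization noise, the mean residual stays in the same few-nanometre range, which is the regime the abstract's registration accuracy refers to.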

  2. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments.

    Directory of Open Access Journals (Sweden)

    Christian Carsten Sachs

    Full Text Available Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria, since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software has limited the practical applicability of the MM as a phenotypic screening tool. We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) are analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. molyso is a ready-to-use open source (BSD-licensed) software for the unsupervised analysis of MM time-lapse image stacks; molyso source code and user manual are available at https://github.com/modsim/molyso.
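
    One elementary step in this kind of pipeline, estimating a single cell's elongation rate from a tracked length trajectory, can be sketched as a log-linear fit. All values below are synthetic assumptions for illustration, not molyso's implementation:

```python
import numpy as np

# Hypothetical single-cell length trajectory from one mother-machine channel:
# exponential elongation L(t) = L0 * exp(mu * t) with small measurement noise.
t = np.arange(0.0, 1.0, 1.0 / 12)            # hours, one frame every 5 min
mu_true = 0.6                                # 1/h, assumed elongation rate
rng = np.random.default_rng(2)
length = 2.0 * np.exp(mu_true * t) * (1 + rng.normal(0, 0.01, t.size))  # µm

# The growth rate is the slope of log(length) against time.
mu_est, _ = np.polyfit(t, np.log(length), 1)
print(f"estimated elongation rate: {mu_est:.3f} 1/h")
```

    Comparing such per-cell rate estimates between automated and manually curated segmentations is one way the "negligible differences in growth rate" claim can be quantified.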

  3. Regulating alcohol advertising: content analysis of the adequacy of federal and self-regulation of magazine advertisements, 2008-2010.

    Science.gov (United States)

    Smith, Katherine C; Cukier, Samantha; Jernigan, David H

    2014-10-01

    We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health.

  4. The content of social media's shared images about Ebola: a retrospective study.

    Science.gov (United States)

    Seltzer, E K; Jean, N S; Kramer-Golinkoff, E; Asch, D A; Merchant, R M

    2015-09-01

    Social media have strongly influenced awareness and perceptions of public health emergencies, but a considerable amount of social media content is now carried through images, rather than just text. This study's objective is to explore how image-sharing platforms are used for information dissemination in public health emergencies. Retrospective review of images posted on two popular image-sharing platforms to characterize public discourse about Ebola. Using the keyword '#ebola' we identified a 1% sample of images posted on Instagram and Flickr across two sequential weeks in November 2014. Images from both platforms were independently coded by two reviewers and characterized by themes. We reviewed 1217 images posted on Instagram and Flickr and identified themes. Nine distinct themes were identified. These included: images of health care workers and professionals [308 (25%)], West Africa [75 (6%)], the Ebola virus [59 (5%)], and artistic renderings of Ebola [64 (5%)]. Also identified were images with accompanying embedded text related to Ebola and associated: facts [68 (6%)], fears [40 (3%)], politics [46 (4%)], and jokes [284 (23%)]. Several [273 (22%)] images were unrelated to Ebola or its sequelae. Instagram images were primarily coded as jokes [255 (42%)] or unrelated [219 (36%)], while Flickr images primarily depicted health care workers and other professionals [281 (46%)] providing care or other services for prevention or treatment. Image sharing platforms are being used for information exchange about public health crises, like Ebola. Use differs by platform and discerning these differences can help inform future uses for health care professionals and researchers seeking to assess public fears and misinformation or provide targeted education/awareness interventions. Copyright © 2015 The Royal Institute of Public Health. All rights reserved.
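
    When two reviewers independently code images into themes, chance-corrected agreement is conventionally reported as Cohen's kappa. A minimal sketch (the theme codes below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two independent coders."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    pa, pb = Counter(codes_a), Counter(codes_b)
    expected = sum(pa[c] / n * pb[c] / n for c in set(codes_a) | set(codes_b))
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned to ten images by each reviewer.
coder1 = ["joke", "worker", "joke", "virus", "worker", "joke", "fact", "joke", "worker", "virus"]
coder2 = ["joke", "worker", "joke", "virus", "fact",   "joke", "fact", "joke", "worker", "worker"]
kappa = cohens_kappa(coder1, coder2)
print(f"kappa = {kappa:.2f}")
```

    Here 8 of 10 codes agree, but some of that agreement is expected by chance given the marginal theme frequencies, so kappa lands around 0.72 rather than 0.8.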

  5. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    Science.gov (United States)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds with close-range optical images is a key step in the high-precision 3D reconstruction of cultural relics. Given the high texture resolution currently required in this field, registering point cloud and image data for object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, this registration is achieved by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding points between the image and the point cloud. This process not only greatly reduces working efficiency, but also limits the registration accuracy and causes texture seams in the coloured point cloud. To solve these problems, this paper takes a whole-object image as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the reflectance-intensity image obtained by central projection of the point cloud and the optical image is applied to automatically match corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to achieve automatic, high-accuracy registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relics, and has both scientific research value and practical significance.
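
    A spatial similarity transform built from a Rodrigues rotation matrix combines a rotation (constructed from an axis and angle), a scale, and a translation. A minimal sketch with assumed parameter values, illustrative only:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from an axis-angle pair via the Rodrigues formula."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])          # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def similarity_transform(points, scale, R, t):
    """7-parameter spatial similarity transform: X' = s * R @ X + t."""
    return scale * points @ R.T + t

R = rodrigues([0, 0, 1], np.pi / 2)              # 90 degrees about the z axis
p = similarity_transform(np.array([[1.0, 0.0, 0.0]]), 2.0, R, np.zeros(3))
print(p.round(6))                                # a unit x point mapped to 2 units along y
```

    In the registration setting, the seven parameters (scale, three rotation parameters, three translations) are estimated from matched feature points by least squares, with iterative reweighting to suppress mismatches.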

  6. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purpose. Two main frameworks are  described: marked point process and random closed sets models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed.  Numerous applications, covering remote sensing images, biological and medical imaging, are detailed.  This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  7. Attentional Mechanisms for Interactive Image Exploration

    Directory of Open Access Journals (Sweden)

    Philippe Tarroux

    2005-08-01

    Full Text Available A lot of work has been devoted to content-based image retrieval from large image databases. The traditional approaches are based on the analysis of the whole image content both in terms of low-level and semantic characteristics. We investigate in this paper an approach based on attentional mechanisms and active vision. We describe a visual architecture that combines bottom-up and top-down approaches for identifying regions of interest according to a given goal. We show that a coarse description of the searched target combined with a bottom-up saliency map provides an efficient way to find specified targets on images. The proposed system is a first step towards the development of software agents able to search for image content in image databases.

  8. Step-by-step guide to building an inexpensive 3D printed motorized positioning stage for automated high-content screening microscopy.

    Science.gov (United States)

    Schneidereit, Dominik; Kraus, Larissa; Meier, Jochen C; Friedrich, Oliver; Gilbert, Daniel F

    2017-06-15

    High-content screening microscopy relies on automation infrastructure that is typically proprietary, non-customizable, costly and requires a high level of skill to use and maintain. The increasing availability of rapid prototyping technology makes it possible to quickly engineer alternatives to conventional automation infrastructure that are low-cost and user-friendly. Here, we describe a 3D printed, inexpensive, open source and scalable motorized positioning stage for automated high-content screening microscopy and provide detailed step-by-step instructions for rebuilding the device, including a comprehensive parts list, 3D design files in STEP (Standard for the Exchange of Product model data) and STL (Standard Tessellation Language) format, electronic circuits and wiring diagrams, as well as software code. System assembly including 3D printing requires approx. 30 h. The fully assembled device is light-weight (1.1 kg), small (33×20×8 cm) and extremely low-cost (approx. EUR 250). We describe positioning characteristics of the stage, including spatial resolution, accuracy and repeatability; compare imaging data generated with our device to data obtained using a commercially available microplate reader; demonstrate its suitability to high-content microscopy in 96-well high-throughput screening format; and validate its applicability to automated functional Cl⁻- and Ca²⁺-imaging with recombinant HEK293 cells as a model system. A time-lapse video of the stage during operation and as part of a custom assembled screening robot can be found at https://vimeo.com/158813199. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
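
    A crude version of caption detection on reduced images is to flag rows with a high density of strong horizontal intensity edges, since caption glyphs produce many such edges in a band of the frame. The sketch below uses a synthetic frame and assumed thresholds; it is not the paper's algorithm:

```python
import numpy as np

def caption_rows(frame, thresh=0.5):
    """Flag rows whose horizontal-gradient density suggests overlaid text.

    Embedded captions are high-contrast glyphs confined to a band, so rows
    crossing many strong horizontal intensity edges are caption candidates.
    """
    grad = np.abs(np.diff(frame.astype(float), axis=1))
    density = (grad > 64).mean(axis=1)        # fraction of strong edges per row
    return density > thresh * density.max()

# Synthetic 64x64 "reduced" frame: flat background, a busy caption-like band.
rng = np.random.default_rng(3)
frame = np.full((64, 64), 90, dtype=np.uint8)
frame[52:60, :] = np.where(rng.random((8, 64)) < 0.5, 255, 0)

rows = np.where(caption_rows(frame))[0]
print(f"caption candidate rows: {rows.min()}-{rows.max()}")
```

    Operating on DC (reduced) images reconstructed from MPEG macroblocks, as the paper proposes, keeps this per-row statistic cheap because no full-frame decompression is needed.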

  10. The application of near-infrared spectra micro-image in the imaging analysis of biology samples

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2014-07-01

    Full Text Available In this research, suitable imaging methods were used to acquire single-compound images of biological samples: a chicken pectoralis tissue section, dry tobacco leaf, fresh leaf and plant glandular hairs, respectively. The adverse effects caused by the high water content and the thermal effect of near-infrared (NIR) light were effectively mitigated during the experimental procedures and data processing. A PCA algorithm was applied to the NIR micro-image of the chicken pectoralis tissue. By comparing the loading vector of PC3 with the NIR spectrum of dry albumen, the information in PC3 was confirmed to be provided mainly by protein, i.e., the 3rd score image mainly represents the distribution of protein. PCA was also applied to the NIR micro-image of the dry tobacco leaf, where the information in PC2 was confirmed to be provided mainly by carbohydrates, including starch. The correlation image computed against the reference spectrum of starch showed the same distribution trend as the 2nd score image. Comparative correlation images with the reference spectra of protein, glucose, fructose and total plant alkaloids were acquired to confirm the distribution of these compounds in the dry tobacco leaf. Likewise, comparative correlation images of the fresh leaf with the reference spectra of protein, starch, fructose, glucose and water confirmed the distribution of these compounds in the fresh leaf. Chemimap imaging of the plant glandular hairs clearly showed their tubular structure.
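
    Both operations used here, score images from PCA and correlation images against a reference spectrum, can be sketched on a synthetic data cube. The spectra and spatial layout below are invented for illustration; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical NIR micro-image cube: 20x20 pixels x 50 wavelength channels.
h, w, bands = 20, 20, 50
wl = np.linspace(0, 1, bands)
ref_starch = np.exp(-((wl - 0.3) / 0.08) ** 2)   # assumed reference spectra
ref_protein = np.exp(-((wl - 0.7) / 0.08) ** 2)

conc = np.zeros((h, w))
conc[:, :10] = 1.0                               # "starch" only in the left half
cube = (conc[..., None] * ref_starch
        + (1 - conc)[..., None] * ref_protein
        + rng.normal(0, 0.02, (h, w, bands)))

# PCA on mean-centred spectra: score images are projections on loading vectors.
X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
score1 = (Xc @ Vt[0]).reshape(h, w)              # 1st score image

def correlation_image(cube, ref):
    """Per-pixel Pearson correlation of each spectrum with a reference spectrum."""
    X = cube.reshape(-1, cube.shape[-1])
    Xc = X - X.mean(axis=1, keepdims=True)
    rc = ref - ref.mean()
    r = (Xc @ rc) / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(rc))
    return r.reshape(cube.shape[:2])

corr = correlation_image(cube, ref_starch)
print(f"mean corr left = {corr[:, :10].mean():.2f}, right = {corr[:, 10:].mean():.2f}")
```

    The correlation image lights up only where the pixel spectra resemble the starch reference, which is how the abstract's "comparative correlation images" confirm a compound's distribution trend.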

  11. On the limitations and optimisation of high-resolution 3D medical X-ray imaging systems

    International Nuclear Information System (INIS)

    Zhou Shuang; Brahme, Anders

    2011-01-01

    Based on a quantitative analysis of both attenuation and refractive properties of X-ray propagation in human body tissues and the introduction of a mathematical model for image quality analysis, some limitations and optimisation of high-resolution three-dimensional (3D) medical X-ray imaging techniques are studied. A comparison is made of conventional attenuation-based X-ray imaging methods with the phase-contrast X-ray imaging modalities that have been developed recently. The results indicate that it is theoretically possible through optimal design of the X-ray imaging system to achieve high spatial resolution (<100 μm) in 3D medical X-ray imaging of the human body at a clinically acceptable dose level (<10 mGy) by introducing a phase-contrast X-ray imaging technique.

  12. Transfer function analysis of radiographic imaging systems

    International Nuclear Information System (INIS)

    Metz, C.E.; Doi, K.

    1979-01-01

    The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image and for the image due to a sinusoidal plane wave object. The other basic mathematical principle discussed is 'Fourier analysis' and its application to an input function. Other aspects of transfer function analysis included are alternative expressions for the 'optical transfer function' of imaging systems and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, the geometrical unsharpness, the screen-film system unsharpness, other unsharpness effects and finally noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
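
    A concrete instance of this analysis is computing the modulation transfer function (MTF) of one imaging sub-system as the Fourier magnitude of its normalised line spread function (LSF). The Gaussian blur width below is an assumed example value, not a measured system property:

```python
import numpy as np

dx = 0.01                                    # mm per sample
x = np.arange(-2.0, 2.0, dx)
sigma = 0.1                                  # mm, assumed Gaussian blur of one stage
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()                             # unit area, so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))               # magnitude of the transfer function
freq = np.fft.rfftfreq(x.size, d=dx)         # spatial frequency, cycles/mm

# A Gaussian LSF has the closed-form MTF exp(-2 (pi sigma f)^2) for comparison.
analytic = np.exp(-2 * (np.pi * sigma * freq) ** 2)
idx = freq.searchsorted(2.0)
print(f"MTF at 2 cycles/mm: measured {mtf[idx]:.3f}, analytic {analytic[idx]:.3f}")
```

    For serial sub-systems (focal spot, screen-film, motion), the overall MTF is the product of the individual MTFs at each frequency, which is what makes this transfer-function decomposition so useful in practice.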

  13. Computational content analysis of European Central Bank statements

    NARCIS (Netherlands)

    Milea, D.V.; Almeida, R.J.; Sharef, N.M.; Kaymak, U.; Frasincar, F.

    2012-01-01

    In this paper we present a framework for the computational content analysis of European Central Bank (ECB) statements. Based on this framework, we provide two approaches that can be used in a practical context. Both approaches use the content of ECB statements to predict upward and downward movement

  14. Evolutionary and Modern Image Content Differentially Influence the Processing of Emotional Pictures

    Directory of Open Access Journals (Sweden)

    Matthias Dhum

    2017-08-01

    Full Text Available From an evolutionary perspective, environmental threats relevant for survival constantly challenged human beings. Current research suggests the evolution of a fear-processing module in the brain to cope with these threats. Recently, humans have increasingly encountered modern threats (e.g., guns or car accidents) in addition to evolutionary threats (e.g., snakes or predators), which presumably required an adaptation of perception and behavior. However, the neural processes underlying the perception of these different threats remain to be elucidated. We investigated the effect of image content (i.e., evolutionary vs. modern threats) on the activation of neural networks of emotion processing. During functional magnetic resonance imaging (fMRI), 41 participants watched affective pictures displaying evolutionary-threatening, modern-threatening, evolutionary-neutral and modern-neutral content. Evolutionary-threatening stimuli evoked stronger activations than modern-threatening stimuli in the left inferior frontal gyrus and thalamus, the right middle frontal gyrus, and bilaterally in parietal regions, the fusiform gyrus and the amygdala. We observed the opposite effect, i.e., higher activity for modern-threatening than for evolutionary-threatening stimuli, bilaterally in the posterior cingulate and the parahippocampal gyrus. We found no differences in subjective arousal ratings between the two threatening conditions. On the valence scale, though, subjects rated modern-threatening pictures significantly more negative than evolutionary-threatening pictures, indicating a higher level of perceived threat. The majority of previous studies show a positive relationship between arousal rating and amygdala activity. However, comparing fMRI results with behavioral findings, we provide evidence that neural activity in fear-processing areas is not only driven by arousal or valence, but presumably also by the evolutionary content of the stimulus.

  15. A System for the Semantic Multimodal Analysis of News Audio-Visual Content

    Directory of Open Access Journals (Sweden)

    Michael G. Strintzis

    2010-01-01

    Full Text Available News-related content is nowadays among the most popular types of content for users in everyday applications. Although the generation and distribution of news content has become commonplace, due to the availability of inexpensive media capturing devices and the development of media sharing services targeting both professional and user-generated news content, the automatic analysis and annotation that is required for supporting intelligent search and delivery of this content remains an open issue. In this paper, a complete architecture for knowledge-assisted multimodal analysis of news-related multimedia content is presented, along with its constituent components. The proposed analysis architecture employs state-of-the-art methods for the analysis of each individual modality (visual, audio, text) separately and proposes a novel fusion technique based on the particular characteristics of news-related content for the combination of the individual modality analysis results. Experimental results on news broadcast video illustrate the usefulness of the proposed techniques in the automatic generation of semantic annotations.

  16. Correlation between neurohypophyseal vasopressin content and signal intensity on T1-weighted magnetic resonance images. An experimental study of vasopressin depletion model using dehydrated rabbits

    International Nuclear Information System (INIS)

    Kurokawa, Hiroaki; Nakano, Yoshihisa; Ikeda, Koshi; Tanaka, Yoshimasa; Fujisawa, Ichiro

    1998-01-01

    We investigated the correlation between the signal intensity on T1-weighted MR images and the vasopressin (VP) content of the posterior pituitary lobe. Fourteen rabbits were studied: 12 water-deprived rabbits (48, 72, 96, 120, 144 and 168 hours; 2 each) and 2 controls. Sagittal T1-weighted spin-echo (SE) MR images were obtained before and after dehydration. The signal intensity ratio of the posterior pituitary lobe to the pons was correlated with the VP content of the posterior lobe as measured by radioimmunoassay. Before water deprivation, high signal intensity in the posterior lobe was demonstrated clearly in all 14 rabbits. After water deprivation, the hyperintense signal gradually decreased and became indistinguishable from the anterior lobe in four animals. The mean signal intensity ratio before water deprivation was 1.55±0.12 (mean±SD); after water deprivation it gradually decreased over time, reaching 1.19 after 168 hours. Pituitary VP content and concentration decreased in parallel with the signal intensity ratio of the posterior pituitary, and a significant correlation was observed between the signal intensity ratio and the VP concentration of the posterior pituitary (r=0.809). Thus, the signal intensity of the posterior lobe on T1-weighted images may serve as an indicator of pituitary VP content and may enable evaluation of disorders of water metabolism. (author)

  17. Near-Infrared Imaging for Spatial Mapping of Organic Content in Petroleum Source Rocks

    Science.gov (United States)

    Mehmani, Y.; Burnham, A. K.; Vanden Berg, M. D.; Tchelepi, H.

    2017-12-01

    Natural gas from unconventional petroleum source rocks (shales) plays a key role in our transition towards sustainable low-carbon energy production. The potential for carbon storage (in adsorbed state) in these formations further aligns with efforts to mitigate climate change. Optimizing production and development from these resources requires knowledge of the hydro-thermo-mechanical properties of the rock, which are often strong functions of organic content. This work demonstrates the potential of near-infrared (NIR) spectral imaging in mapping the spatial distribution of organic content with O(100µm) resolution on cores that can span several hundred feet in depth (Mehmani et al., 2017). We validate our approach for the immature oil shale of the Green River Formation (GRF), USA, and show its applicability potential in other formations. The method is a generalization of a previously developed optical approach specialized to the GRF (Mehmani et al., 2016a). The implications of this work for spatial mapping of hydro-thermo-mechanical properties of excavated cores, in particular thermal conductivity, are discussed (Mehmani et al., 2016b). References:Mehmani, Y., A.K. Burnham, M.D. Vanden Berg, H. Tchelepi, "Quantification of organic content in shales via near-infrared imaging: Green River Formation." Fuel, (2017). Mehmani, Y., A.K. Burnham, M.D. Vanden Berg, F. Gelin, and H. Tchelepi. "Quantification of kerogen content in organic-rich shales from optical photographs." Fuel, (2016a). Mehmani, Y., A.K. Burnham, H. Tchelepi, "From optics to upscaled thermal conductivity: Green River oil shale." Fuel, (2016b).

  18. A High-Dynamic-Range Optical Remote Sensing Imaging Method for Digital TDI CMOS

    Directory of Open Access Journals (Sweden)

    Taiji Lan

    2017-10-01

    Full Text Available The digital time delay integration (digital TDI) technology of the complementary metal-oxide-semiconductor (CMOS) image sensor has been widely adopted and developed in the optical remote sensing field. However, the details of targets with low illumination or low contrast in high-contrast scenes are often drowned out, because the superposition of multi-stage images in the digital domain multiplies the read noise and the dark noise, thus limiting the imaging dynamic range. Through an in-depth analysis of the information transfer model of digital TDI, this paper explores effective ways to overcome this issue. Based on the evaluation and analysis of multi-stage images, the entropy-maximized adaptive histogram equalization (EMAHE) algorithm is proposed to improve the ability of images to express the details of dark or low-contrast targets. Furthermore, an image fusion method is utilized based on gradient pyramid decomposition and entropy weighting of different TDI stage images, which can improve the detection ability of the digital TDI CMOS sensor for complex high-contrast scenes and obtain images that are suitable for recognition by the human eye. The experimental results show that the proposed methods can effectively improve the high-dynamic-range imaging (HDRI) capability of the digital TDI CMOS sensor. The obtained images have greater entropy and larger average gradients.
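
    The entropy-weighting idea can be sketched in a deliberately simplified form, weighting whole frames by their histogram entropy instead of the paper's per-level gradient-pyramid scheme (an illustrative stand-in, not the published algorithm):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def entropy_weighted_fusion(stage_images):
    """Fuse multi-stage TDI frames, weighting each frame by its entropy.

    Frames carrying more information (higher entropy) contribute more to
    the fused result; a flat or saturated frame contributes almost nothing.
    """
    weights = np.array([image_entropy(im) for im in stage_images])
    weights = weights / weights.sum()
    fused = sum(wi * im.astype(float) for wi, im in zip(weights, stage_images))
    return fused, weights

rng = np.random.default_rng(5)
rich = rng.integers(0, 256, (32, 32))        # detail-rich frame: high entropy
flat = np.full((32, 32), 128)                # saturated/flat frame: zero entropy

fused, w = entropy_weighted_fusion([rich, flat])
print(f"fusion weights: {w.round(3)}")
```

    The published method applies this weighting per gradient-pyramid level rather than per whole frame, which preserves local detail while still favouring the more informative stage images.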

  19. High speed electronic imaging application in aeroballistic research

    International Nuclear Information System (INIS)

    Brown, R.R.; Parker, J.R.

    1984-01-01

    Physical and temporal restrictions imposed by modern aeroballistics have pushed imaging technology to the point where special photoconductive surfaces and high-speed support electronics are dictated. Specifications for these devices can be formulated by a methodical analysis of critical parameters and how they interact. In terms of system theory, system transfer functions and state equations can be used in optimal coupling of devices to maximize system performance. Application of these methods to electronic imaging at the Eglin Aeroballistics Research Facility is described in this report. 7 references, 14 figures, 1 table

  20. ESTIMATION OF MELANIN CONTENT IN IRIS OF HUMAN EYE

    Directory of Open Access Journals (Sweden)

    E. A. Genina

    2008-12-01

    Full Text Available Based on experimental data obtained in vivo from digital analysis of colour images of human irises, the mean melanin content of the human iris has been estimated. For registration of the colour images, an Olympus C-5060 digital camera was used. The images were obtained from the irises of healthy volunteers as well as from the irises of patients with open-angle glaucoma. A computer program was developed for digital analysis of the images. The results are useful for the development of novel methods, and the optimization of existing ones, for non-invasive glaucoma diagnostics.
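
    The digital analysis step can be sketched as averaging colour values over a segmented iris region; the mapping from colour to melanin concentration is an empirical calibration that is not reproduced here, and all values below are synthetic:

```python
import numpy as np

def mean_iris_intensity(rgb, mask):
    """Average channel values over the iris region of a colour image.

    Darker, redder averages indicate higher melanin content; the actual
    conversion to melanin concentration requires an empirical calibration.
    """
    pixels = rgb[mask]
    return pixels.mean(axis=0)

# Synthetic 100x100 image: a dark brown "iris" annulus on a light "sclera".
img = np.full((100, 100, 3), 240, dtype=np.uint8)
yy, xx = np.mgrid[:100, :100]
r = np.hypot(yy - 50, xx - 50)
iris = (r > 12) & (r < 40)                   # annulus between pupil and sclera
img[iris] = (90, 60, 40)                     # assumed brown iris colour

mean_rgb = mean_iris_intensity(img, iris)
print(f"mean iris RGB: {mean_rgb.round(1)}")
```

    In practice the iris mask would come from segmentation of the photograph rather than a hand-drawn annulus, and the per-channel means would be fed into the calibrated melanin model.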